
HOW SHOULD WE RECONCILE SELF-REGARDING AND PRO-SOCIAL MOTIVATIONS? A RENAISSANCE OF “DAS ADAM SMITH PROBLEM”

Published online by Cambridge University Press:  07 January 2021

Natalie Gold*
Affiliation:
Philosophy, University of Oxford, United Kingdom

Abstract

“Das Adam Smith Problem” is the name given by nineteenth-century German scholars to the question of how to reconcile the role of self-interest in the Wealth of Nations with Smith’s advocacy of sympathy in Theory of Moral Sentiments. As the discipline of economics developed, it focused on the interaction of selfish agents, pursuing their private interests. However, behavioral economists have rediscovered the existence and importance of multiple motivations, and a new Das Adam Smith Problem has arisen, of how to accommodate self-regarding and pro-social motivations in a single system. This question is particularly important because of evidence of motivation crowding, where paying people can backfire, with payments achieving the opposite effects of those intended. Psychologists have proposed a mechanism for the crowding out of “intrinsic motivations” for doing a task, when payment is used to incentivize effort. However, they argue that pro-social motivations are different from these intrinsic motivations, implying that crowding out of pro-social motivations requires a different mechanism. In this essay I present an answer to the new Das Adam Smith Problem, proposing a mechanism that can underpin the crowding out of both pro-social and intrinsic motivations, whereby motivations are prompted by frames and motivation crowding is underpinned by the crowding out of frames. I explore some of the implications of this mechanism for research and policy.

Type
Research Article
Copyright
© Social Philosophy & Policy Foundation 2020

I. A New Renaissance of Das Adam Smith Problem: There and Back Again

“Das Adam Smith Problem” is the name given by nineteenth-century German scholars to the question of how to reconcile the role of self-interest in The Wealth of Nations with Smith’s advocacy of sympathy in The Theory of Moral Sentiments. It seemed to them that Adam Smith had written two very different books. Their (now disputed) reading was that The Wealth of Nations is founded on an egoistic theory of behavior, showing how the interaction of self-interested individuals could lead to benefits for all. In contrast, The Theory of Moral Sentiments not only espouses a theory of human nature in which we have multiple motivations, especially “sympathy,” which can underpin moral judgments and virtuous actions, but also argues that we ought not to be purely self-interested: “And hence it is, that to feel much for others and little for ourselves, that to restrain our selfish, and to indulge our benevolent affections, constitutes the perfection of human nature; and can alone produce among mankind that harmony of sentiments and passions in which consists their whole grace and propriety” (TMS, Part I, Ch. 1). In the twenty-first century, few scholars believe there is a contradiction between the two books; however, there is no consensus about the right way to solve Das Adam Smith Problem.Footnote 1

Regardless of the status of Das Adam Smith Problem, the point remains that in the eighteenth century it was standard to acknowledge that multiple motivations are relevant for the study of political economy. But this picture was on the wane. The lure of Smith’s idea that an agent who intends only his own gain is led by an “invisible hand” to pursue the good of society, “more effectually than when he really intends to promote it” (WN, Book IV, Ch. 2) proved compelling for many. By the nineteenth century, John Stuart Mill wrote of political economy that “It is concerned with [man] solely as a being who desires to possess wealth, and who is capable of judging the comparative efficacy of means for obtaining that end.”Footnote 2 Nevertheless, political economists did not offer the pursuit of wealth as a complete theory of human nature. Mill wrote that the desire for wealth was not the whole of Man’s nature; that there are other human motives, such as “the affections, the conscience, or feeling of duty, and the love of approbation.” However, he considered these to be the subject matter of philosophy. This prefigures the turn of economics toward treating people as solely pursuing their own private and selfish material interests, which I will call the principle of self-regard.

The principle of self-regard became increasingly important in the late nineteenth century, as economics replaced political economy as the subject that studies production, exchange, and the distribution of resources. Economists such as Alfred Marshall and Francis Edgeworth emphasized the way in which the interaction of individual agents causes economic outcomes. They pioneered a theory of behavior in which individuals maximize utility and firms maximize profits, subject to constraints on their budgets and resources. This is the core of neoclassical economics, the current mainstream of the subject. Strictly speaking, “utility” is an empty placeholder that includes anything that might make an agent choose one option over another. However, in practice it is usually taken to be a function of the agent’s own consumption of goods and services. A standard graduate textbook in microeconomic theory states that, “A defining feature of microeconomic theory is that it aims to model economic activity as an interaction of individual economic agents pursuing their private interests.”Footnote 3 This approach arguably has its roots in The Wealth of Nations; it discards Smith’s insights about other sources of motivation in The Theory of Moral Sentiments.

However, in the twenty-first century, economics is seeing a renaissance of some of the traditional themes of political economy. It has rediscovered the existence and importance of pro-social motivations, both for the design of institutions and in market settings. Economists have studied altruism, fairness, equity, kindness, reciprocity, and trustworthiness, to name a few. These are studied alongside the principle of self-regard, which is still acknowledged as an important driver of behavior in many circumstances. Therefore we have a renewed Das Adam Smith Problem for the twenty-first century: How do we integrate the fact that much economic analysis is based on self-regard (via the price mechanism) with renewed interest in and evidence of the importance of pro-social motivations? The acuteness of this problem is demonstrated by evidence that paying people can backfire if they are driven by pro-social motivations. A synthesis would provide directions and instructions for the designers of institutions. Which motivations people use—and should use—in a given context has implications for how to structure institutions and incentives.

In order to set up the problem (and to introduce some of the distinctions that will play a part in later discussion), in Section II, I present a taxonomy of motivations from psychology and relate it to evidence from behavioral economics. In Section III, I explain why the problem is of more than theoretical interest. There is a large literature, which originated in psychology, that shows that paying people can have perverse effects on their behavior, the so-called “motivation crowding” effect. The original demonstrations of motivation crowding involved payments for effort, but economists have tended to assume that motivation crowding also applies to pro-social behavior. However, psychologists have argued that pro-social behavior is relevantly different from payment for effort, in a way that means their standard theoretical explanation does not apply, leaving a question about the mechanism behind the crowding out of pro-social motivations. In Section IV, I propose a mechanism, drawing on framing, that can explain why payments affect both effort and pro-social motivations. In Section V, I explore its implications for research and institutional design.

II. Evidence for Pro-Social Behavior and Pro-Social Motivations

The principle of self-regard makes mistakes about the ends that people pursue and the reasons for which they pursue them: they may be concerned with ends other than their own outcomes, and their reasons for pursuing them need not be completely self-interested. In contrast to the assumption of the principle of self-regard, people’s behavior may be pro-social, promoting the well-being of others. (Note that this can include promoting the well-being of specific others, which may not promote the well-being or interests of society as a whole. For example, a mafioso can act pro-socially toward other members of the cosa nostra, but that can lead to bad outcomes for society.) There is a vast amount of evidence, from behavioral economics as well as psychology, that people are not only concerned with their own outcomes. Participants in experiments give money in dictator games and return money in one-shot trust games. Psychologists have also studied helping behavior in a more contextualized manner, putting subjects in actual helping situations that they do not realize are experimental set-ups.

Pro-social behavior promotes the well-being of others. Pro-social motivation is a motivation to promote the well-being of others. Motivation is a slippery concept; it means different things to different writers. For instance, in psychology a motivation could be a goal-directed force,Footnote 4 while in philosophy a motivation might be shorthand for a motivating reason.Footnote 5 For the purposes of this essay, either of these two ways of casting motivation would do and they could be used interchangeably. Indeed, one psychologist describes motivations in a manner that combines these two ideas, as “the reasons that drive actions.”Footnote 6

There can be chains of motivations. If we ask of any individual’s behavior “Why did she do that?” we can often take the answer and run at least one more iteration of the question. For instance, if someone gives to a food bank, then we can ask of our donor: “Why did she give food?” Our answer might be: “Because she was concerned with the welfare of people who cannot afford to feed themselves.” But then we can ask the further question: “Why was she concerned with the welfare of people who cannot afford to feed themselves?” One possible answer is “Because she takes pleasure in others’ welfare gains”; another possibility is that there is no further answer—improving people’s welfare is her ultimate motivation or her ultimate goal. Some motivations or goals may be seen as instrumental, pursued for the sake of a higher motivation or goal. The ultimate motivation or goal is the place where the buck stops.

As well as debates about the possibility of pro-social behavior and proximate pro-social motivations and goals, there is also debate about the nature of ultimate motivations and goals. Some researchers argue that all behavior is ultimately self-interested, that pro-social behavior is really enlightened self-interest, a position that is known as psychological egoism.Footnote 7 This position has seemed attractive to some because the reasons for which I act are my reasons and the goals I pursue are my goals. However, nothing follows from this: there can be a separation between my goals and my welfare; it is not true that pursuing my goals and my reasons will always make me better off.Footnote 8

We can understand this distinction—between goals that are motivated by enlightened self-interest, where helping others positively impacts the agent’s own welfare, and goals that do not promote the agent’s welfare—in the context of Sen’sFootnote 9 distinction between sympathy and commitment. Sympathy is when the concern for others directly affects the agent’s own welfare, an idea that Sen takes from Smith and Edgeworth, although arguably it is closer to our modern notion of empathy: the agent takes pleasure in others’ gains and pain in others’ losses. Commitment is when the outcome that the agent is concerned about does not directly impact his or her welfare, but the agent is nevertheless motivated to achieve it. Commitment covers a class of reasons for acting that result from normative imperatives including, but not limited to, moral imperatives. People’s actions can be overdetermined. An agent who is committed to making a charitable donation might also take pleasure in it, even though that wasn’t her reason for contributing. Therefore, deciding whether or not an agent acts from commitment may require making judgments about counterfactual cases. An agent who acts with commitment is one who would have made the donation even if it had not made her better off by giving her pleasure.

In the same way that some researchers argue that all behavior is ultimately self-interested, some philosophers might argue that all behavior ought ultimately to be underpinned by morality. For instance, for a Utilitarian, the ultimate goal is the maximization of utility. For a Kantian, one should always ask whether one is acting on a principle that could be willed as a universal law.

But there are also other possible ultimate motivations. BatsonFootnote 10 identifies four ultimate motivations:

  (1) egoism—increasing the actor’s own welfare; the benefits can be material, social, or self-rewards (for example, monetary rewards, praise, self-esteem) or the avoidance of material, social, or self-punishments (for example, fines, social censure, guilt, shame)

  (2) collectivism—increasing the welfare of a group or collective

  (3) altruism—increasing the welfare of one or more individuals other than oneself

  (4) principlism—upholding some standard or principle; Batson specifies moral principles, but it is possible to act to uphold standards and principles that are not moral: for example, professionalism involves working to a professional standard, or one might act to uphold the law and legal principles.

For Batson, these are all at least potentially ultimate motivations. He has spent his career studying pro-social behavior and showing that altruism can be an ultimate motivation. His hypothesis is that we are motivated by empathy-induced altruism, that “feeling empathy for [a] person in need evokes motivation to help [that person] in which these benefits to self are not the ultimate goal of helping; they are unintended consequences.”Footnote 11 Thus, empathy-induced altruism is a form of commitment. Batson’s strategy is to take instances of helping behavior and to show that they are not caused by plausible egoistic motives; that high empathizers continue to help even when the egoistic motivation is neutralized.Footnote 12 It is not a direct test of the hypothesis that altruism is caused by commitment but, by excluding a variety of egoistic explanations and showing that there is helping behavior that they cannot account for, Batson increases the probability that altruistic behavior is caused by commitment rather than being “a subtle and sophisticated form of egoism.”Footnote 13

Examples of all four types of motivation can be found in experimental and behavioral economics. Egoism is the standard currency of economists, and behavior in the lab varies a lot by individual, so any experiment that shows that at least some subjects are pro-socially motivated also has some subjects who are egoists. Therefore I do not address it specifically.

Altruism: The classic example of altruism in experimental economics is giving in dictator games. In a paradigm set-up, a subject is given $10 and can choose how much of it to give to another anonymous subject. Usually more than 60 percent of subjects give some money, with the mean transfer being approximately 20 percent of the total.Footnote 14

Collectivism: When group identity is manipulated, people are more favorable to in-group members.Footnote 15 Some economists have argued that groups can be agents and that individuals in groups use “team reasoning,” asking themselves the question “What should we do?”; they contend that this is the best way of explaining the vast empirical literature showing that people cooperate and coordinate in ways that standard individualist economic theory cannot explain.Footnote 16

Principlism: An example of principlism can be found in the literature on tax compliance. According to the principle of self-regard, tax evasion—like all other criminal behavior—should be viewed simply as a choice whether to take a gamble that has a positive payoff if successful but a penalty if caught.Footnote 17 However, subjects in the lab do not act according to this model: subjects are less likely to take gambles if they are presented as a tax evasion decisionFootnote 18 and their behavior is affected by moral constraints.Footnote 19 Of course, the lab is an artificial environment (and, one might argue, subjects could be influenced by “experimenter demand effects”); but self-regard cannot explain actual tax evasion behavior either, while the hypothesis that at least some taxpayers are motivated by moral principles can.Footnote 20
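To make the gamble model concrete, the following is a minimal sketch of the expected-payoff comparison it implies for a risk-neutral evader (the notation is mine, introduced only for illustration; it is not taken from the cited model):

```latex
% y = income, t = tax owed, p = probability of detection, f = fine if caught.
% Expected payoff of evading:   (1-p)y + p(y - t - f) = y - pt - pf
% Payoff of honest reporting:   y - t
% A purely self-regarding, risk-neutral agent therefore evades whenever
\[
  (1 - p)\,t \;>\; p\,f ,
\]
% that is, whenever the expected tax saved exceeds the expected fine.
```

On this model, compliance should track only detection probabilities and penalties, which is exactly what the laboratory and field evidence cited above fails to find.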

Although BatsonFootnote 21 does not mention social norms in his taxonomy, they have been prominent topics of research in behavioral economics.Footnote 22 However, this is not an important omission for Batson, given that he is concerned with ultimate motivation. Many researchers think that social norms are enforced by social approval and disapproval, or similar social evaluations, in which case they are ultimately an egoistic motivation, according to Batson’s typology. Alternatively, we can imagine someone who had completely internalized social norms (someone who, if she found herself alone on a desert island, would still follow conventions such as “walk on the right, stand on the left” or continue to keep up her manners, things that have conventionally been instilled in her as “the right thing to do”). This would seem to be a variety of principlism, albeit a slightly strange one. So while social norms are an important form of proximate motivation, they can be subsumed within Batson’s categories of ultimate motivations.

A similar thing could be said about other types of non-self-regarding behavior that experimental and behavioral economists have been interested in, such as equity, reciprocity, and trust and trustworthiness. BatsonFootnote 23 has a concise list because it is a list of ultimate motivations. In contrast, economists are better thought of as investigating proximate motivations and their models need not imply anything about ultimate motivations. The standard way of representing motivations in economics is as arguments in a utility function. Despite the terminology, the utility function only represents an agent’s goals; it is a functionalist method of predicting action. The same function could represent either a “warm glow” from sympathy or a non-sentimental commitment. Further, there is no presumption that agents know their own utility functions: utility theory describes how people act but does not presume that people are aware of their own motivations.
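As an illustration of this functionalist point, here is a minimal sketch of a utility function with an other-regarding argument (the functional form and the symbol α are my own illustrative choices, not a particular model from the literature):

```latex
% Agent i's utility depends on her own material payoff x_i and on j's payoff x_j:
\[
  U_i(x_i, x_j) \;=\; x_i + \alpha_i\, x_j , \qquad \alpha_i > 0 .
\]
% The same functional form is consistent with sympathy (i takes pleasure in j's
% gains) and with commitment (i pursues j's payoff as a goal without any
% accompanying pleasure): the function records what i is disposed to choose,
% not why she chooses it.
```

Richer specifications add further arguments, but the point stands: the arguments of the function encode proximate goals, not ultimate motivations.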

Economists now agree that there is a multiplicity of types of proximate motivation, including many that are not self-interested, and arguably there are multiple types of ultimate motivations. Economists tend to study each motivation in a particular setting or laboratory game. In order to rationalize the number of explanations of behavior, they have developed hybrid models, which include multiple motivations and aim to explain behavior across multiple types of experiments. But even these models cannot explain all the empirical evidence.Footnote 24 The new Das Adam Smith Problem, as I investigate it here, is a question about how these different motivations are related; it arises for proximate as well as ultimate motivations, so the question requires an answer regardless of one’s view on ultimate motivations.

III. The Importance of the Problem: The Motivation Crowding Effect

It’s important to have a theory of motivation because different motivations respond to different incentives, and using monetary incentives when people are acting on non-self-regarding motivations can be counterproductive. Well-known examples of financial incentives backfiring include: payment for blood leading to less blood being collected;Footnote 25 fines for parents who failed to pick up their children on time from daycare leading to increased lateness, which persisted even after the fine was removed;Footnote 26 the offer of financial compensation increasing NIMBY-ism, when people were asked if they would permit a nuclear waste repository to be sited in their community;Footnote 27 the use of financial penalties for untrustworthy behavior increasing the amount of untrustworthy behavior;Footnote 28 and the use of financial penalties to enforce contracts leading to more contracts being breached.Footnote 29 Why does this happen and what are the implications for institutional design?

The examples I just gave are all instances where payment affects pro-social behavior. They are also often given by economists as examples of the motivation crowding effect, where payment for a task crowds out intrinsic motivation.Footnote 30 The concept of intrinsic motivation is slippery. An early definition in the literature is that “[o]ne is said to be intrinsically motivated to perform an activity when one receives no apparent reward except the activity itself.”Footnote 31 Conversely, one is extrinsically motivated when one does something to receive a reward or avoid a punishment. Let us call this Definition 1. (This is already slippery: what is an “apparent reward”? My reading is that it is a tangible, physical reward; that is, it does not include intangible rewards like esteem.) Another way of thinking about the difference between intrinsic and extrinsic motivation is that one is intrinsically motivated when one does something for its own sake. Let us call this Definition 2. So while Definition 1 characterizes intrinsic motivation according to the environment in which the behavior occurs, Definition 2 characterizes it in terms of ultimate motivations, which has some different implications, as we will see below.

The original and paradigm example of motivation crowding from psychology involved payment for effort. Subjects who had been paid to solve puzzles were less likely to return to them later, after payment had been withdrawn, than a control group who had received no reward for their activity in the first period; and the paid subjects also reported a lesser interest in the task than the unpaid.Footnote 32 Among psychologists, the predominant explanation for motivation crowding is the over-justification of the agent, where the payment is seen as controlling, and the external intervention therefore undermines feelings of self-determination and autonomy, which causes the agent to relinquish the intrinsic motivation.Footnote 33

In this early work, intrinsic motivation was defined in contrast to extrinsic motivation, as anything that is not done for a tangible reward (Definition 1). So it was natural to interpret intrinsic motivation as encompassing many different sorts of motivations for undertaking an activity, including both enjoyment of a task and pro-social motivations. As we saw in the examples at the beginning of this section, payments can crowd out pro-social motivations as well as effort. However, in later work, Ryan and DeciFootnote 34 provide a rather more refined definition of intrinsic motivation. They say that it is “the doing of an activity for its inherent satisfactions rather than for some separable consequence. When intrinsically motivated, a person is moved to act for the fun or challenge entailed rather than because of external prods, pressures, or rewards.”Footnote 35 They take Definition 2 and extend it, by specifying the exact motivation: for fun or challenge. Therefore, according to Ryan and Deci’s definition, pro-social motivation is not an intrinsic motivation, since it is based on benefitting others rather than on interest in and enjoyment of a task.Footnote 36

Even if we discard the stipulation that intrinsic motivation involves acting for fun or challenge, retaining only the idea that it involves doing something for its own sake (that is, if we adopt Definition 2), psychologists have noted differences between pro-social motivations and the motivation to make an effort, which imply that the over-justification theory does not apply to pro-social motivations. GrantFootnote 37 starts from the position that intrinsic motivation is associated with pleasure and enjoyment, and pro-social motivation with meaning and purpose.Footnote 38 He argues that: intrinsic motivation phenomenologically pulls people to do things, whereas pro-social motivation may require people to push themselves, necessitating self-regulation to achieve a goal; intrinsic motivation focuses on the process, whereas pro-social motivation focuses on the outcome or goal;Footnote 39 and that—relatedly—intrinsic motivation involves a focus on the present experience, whereas pro-social motivation involves a focus on the future, on the meaningful outcome that will result from the behavior.Footnote 40

However, this implies that there is a problem with using the over-justification theory to explain the effect of incentives on pro-social motivations. For Grant,Footnote 41 it follows from the differences between them that intrinsic and pro-social motivations involve different levels of autonomy. He says that intrinsic motivation is “fully volitional, self-determined and autonomous” whereas pro-social motivation “is less autonomous, as it is based more heavily on conscious self-regulation and self-control to achieve a goal.” If pro-social motivations are not associated with autonomy, then the explanation for the crowding out of pro-social motivations cannot be that autonomy is impaired.

There are two possible counters, neither of which entirely solves the problem. First, one could get into a philosophical debate about what constitutes autonomy, arguing that self-regulation is a form of Kantian autonomy, where one follows a rule that one makes for oneself.Footnote 42 However, this response misses that Grant’sFootnote 43 point is really about the phenomenology of behavior: if the mechanism of motivation crowding is that applying incentives makes people lose their feeling of being autonomous, and if pro-social behavior often does not feel autonomous in the first place, then there is no reason to expect pro-social behavior to respond to the mechanism—even if it belongs to the philosophical category of Kantian autonomy. Second, psychologists allow that, to the extent that we value and identify with pro-social behaviors, we may experience greater autonomy in their performance.Footnote 44 But they also make it quite clear that they consider pro-social motivation a type of extrinsic motivation because acting for the benefit of others, even if that fulfills core values and identities, is a type of external goal. (Though this would seem to conflate having an external goal and wanting to achieve that goal for its own sake, as an ultimate goal).

One response would simply be to follow BowlesFootnote 45 in endorsing a variety of mechanisms of motivation crowding, so different instances of crowding are explained by different mechanisms. But that leaves us in a place where we cannot conclude much. Bowles’s recommendations to policy makers are: (i) to use more realistic psychological assumptions when doing mechanism design, and (ii) to create policies and constitutions that support socially valued ends by evoking, cultivating, and empowering public-spirited motives. These are all very sensible, but not very specific. It would be nice to be able to say something more specific about institutional design or the direction of research needed to do good design.

Instead, I will propose a different mechanism, which can explain both the crowding out of intrinsic motivations and the crowding out of pro-social motivations, and explore the implications for policy and research.

IV. Framing and Motivation Crowding

One solution to the original Das Adam Smith Problem is that different motivations are used in different spheres, and with different people.Footnote 46 That has an intuitive plausibility about it. I want to think about this in the context of research on the prisoner’s dilemma, which is extensive, and suggests a more specific mechanism.

It should not come as a surprise to anyone that there is a higher rate of cooperation in the prisoner’s dilemma when it is called the “Community Game” rather than the “Wall Street Game.”Footnote 47 Changing the labels on a decision-problem and observing that this causes people to choose differently is an example of a framing effect. Framing is often implicitly and sometimes explicitly offered as an explanation of the effect of payments on effort.Footnote 48 Lindenberg and FreyFootnote 49 claim that when motivation crowding occurs a “gain frame” crowds out a “normative frame,” but this is not explained in any further detail.

We can think of the agent’s frame as the set of concepts that she uses to think about her situation.Footnote 50 Framing is notorious because of Tversky and Kahneman’sFootnote 51 work on framing effects, where two groups of subjects were put in the position of policy makers facing an epidemic and asked to choose between two vaccination programs. Subjects who were given the decision problem in terms of “lives saved” by each program tended to choose a different program to those who were given the problem in terms of “lives lost” by each program. Similarly, Ross and WardFootnote 52 took a laboratory prisoner’s dilemma but for one set of subjects they referred to it as the “Community Game” and for another they referred to it as the “Wall Street Game.” Two-thirds of subjects cooperated in the Community Game, compared to one-third in the Wall Street Game.

There is another framing effect involving prisoner’s dilemmas that researchers have hypothesized is caused by a change in motivations. The standard way of presenting a prisoner’s dilemma is as a 2x2 payoff matrix. However, it is possible to “decompose” the payoffs and present them as a choice between two different allocations of payoffs between Player 1 and Player 2.Footnote 53 Figure 1 gives an example of a prisoner’s dilemma matrix and an associated decomposed game.

Figure 1. Alternative presentations of the prisoner’s dilemma.

Both players choose a payoff allocation and then each gets the total of the payoff each awarded to him- or herself plus the payoff s/he was awarded by the other player. For instance, if Player 1 chooses allocation C and Player 2 chooses allocation D, then Player 1 has assigned 0 to herself and 12 to Player 2, while Player 2 has assigned 6 to himself and 0 to Player 1. So Player 1 gets 0 from her own choice and 0 from Player 2, a total of 0. Player 2 gets 6 from his own choice and 12 from Player 1, a total of 18. The outcome is 0 for Player 1 and 18 for Player 2, which is the same as the payoffs for (C, D) in the game matrix. The totals from each combination of allocations are the same as the payoffs from the equivalent strategy combinations in the prisoner’s dilemma. Therefore, in any decomposed game, it is possible to work out the payoff matrix from the choices in the allocation decision.Footnote 54 The decomposition and the parent game are two different ways of presenting the four possible payoff outcomes. However, experimenters have found higher rates of cooperation with the decomposed game compared to the matrix presentation.Footnote 55
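The arithmetic generalizes straightforwardly. As a minimal sketch in Python (with the allocation amounts taken from the example just described, since the figure itself is not reproduced here), the induced matrix can be recovered by summing what each player keeps and what the other player gives, and checked against the standard prisoner’s dilemma ordering:

```python
# Allocations from the decomposed game described in the text: each entry is
# (amount kept for oneself, amount given to the other player).
ALLOCATIONS = {"C": (0, 12), "D": (6, 0)}

def induced_payoffs(move1, move2):
    """Payoffs when Player 1 plays move1 and Player 2 plays move2."""
    keep1, give1 = ALLOCATIONS[move1]
    keep2, give2 = ALLOCATIONS[move2]
    return keep1 + give2, keep2 + give1  # own kept amount + what the other player gave

matrix = {(m1, m2): induced_payoffs(m1, m2) for m1 in "CD" for m2 in "CD"}
print(matrix)
# {('C', 'C'): (12, 12), ('C', 'D'): (0, 18), ('D', 'C'): (18, 0), ('D', 'D'): (6, 6)}

# Player 1's payoffs satisfy the prisoner's dilemma ordering T > R > P > S.
T, R = matrix[("D", "C")][0], matrix[("C", "C")][0]
P, S = matrix[("D", "D")][0], matrix[("C", "D")][0]
assert T > R > P > S
```

The same construction works for any decomposition, which is why the matrix and the decomposed presentation are informationally equivalent even though they elicit different rates of cooperation.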

In an investigation designed to discover why behavior was different in the decomposed games, Pruitt asked subjects to record the thinking behind their decisions.Footnote 56 He discovered that, in accordance with expectations derived from game-theoretic reasoning, those who played D in the above games were motivated by the payoff they could get by doing so. In the decomposed game, responses to open-ended questions showed that many subjects viewed alternative C as a way of being “helpful” or “generous.” PruittFootnote 57 postulated that “the games produce differing motives, which in turn produce differing behavior,” a suggestion that has also been echoed by Colman.Footnote 58

Kahneman and TverskyFootnote 59 explained their framing effect using Prospect Theory, drawing on the idea that people display different risk preferences depending on whether options are framed as losses or gains. However, it is hard to see how Prospect Theory could explain the difference in play between these differently framed prisoner’s dilemmas—or, for that matter, the examples of collectivism and principlism in Section II, which were also demonstrated by taking a laboratory game and changing the framing: manipulating the group identity of the players (collectivism) or calling a gamble a tax evasion decision (principlism). To explain these examples, we need a more general theory of framing effects.

Decision theorists have given explanations of framing effects that relate them to reasons.Footnote 60 What the different models have in common is that the reasons that underpin an agent’s choices depend on how they frame or, in the case of Schick,Footnote 61 “understand” the decision. Note that acting and choosing for a reason does not have to be understood as involving a conscious reasoning process. A minimal requirement is that the agent is disposed to be responsive to reasons, where these are based on facts that count in favor of a particular decision or action. In this paradigm, framing effects occur when there are reasons in favor of both options and the reason that the agent responds to depends on the way in which the decision is presented or described. In effect, these agents are not weighing all their reasons, but act on the basis of a single reason. If they have an acceptable reason to hand, then they do not search for others. This has psychological plausibility. It is consistent with evidence of “concrete thinking,” whereby decision-makers appear to use only surface information, and information that has to be inferred from the display or created by some mental transformation tends to be ignored.Footnote 62 Concrete thinking may be connected to people’s desire to justify decisions by saying that they chose for a (single) reason, even to the extent of constructing and selecting choice situations such that there is always a dominant reason for choice.Footnote 63 Once a reason for choice has presented itself, people are not motivated to seek out further reasons. Call this “one-reason decision-making.”
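As a rough illustration only (this is my own toy formalization, not a model proposed in the framing literature), the contrast between weighing all reasons and one-reason decision-making can be put as follows: the frame selects which single reason is salient, and the agent acts on that reason alone.

```python
# Toy contrast between weighing all reasons and one-reason decision-making.
# Reasons, options, and numbers are hypothetical illustrations, not data.
REASONS = {
    "monetary payoff": {"defect": 1.0, "cooperate": 0.0},
    "helpfulness":     {"defect": 0.0, "cooperate": 1.0},
}

def weigh_all_reasons(weights):
    """Orthodox picture: aggregate every reason and pick the best option."""
    score = lambda option: sum(w * REASONS[r][option] for r, w in weights.items())
    return max(("cooperate", "defect"), key=score)

def one_reason_choice(salient_reason):
    """Framing picture: act on whichever single reason the frame makes salient."""
    return max(REASONS[salient_reason], key=REASONS[salient_reason].get)

# An agent who weighs both reasons chooses the same way however the game is labeled...
print(weigh_all_reasons({"monetary payoff": 1.0, "helpfulness": 1.2}))  # cooperate
# ...but a one-reason decision maker's choice tracks whichever reason is salient.
print(one_reason_choice("monetary payoff"))  # defect    ("Wall Street" label)
print(one_reason_choice("helpfulness"))      # cooperate ("Community" label)
```

Nothing here is meant to capture how salience is actually determined; it only makes vivid that a one-reason decision maker's choices can shift with the frame even though the underlying reasons are unchanged.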

In Tversky and Kahneman’s problem, the fact that some people will die for sure is a reason not to choose the policy with certain outcomes, while the fact that there is a possible outcome where no one is saved is a reason not to choose the risky policy. The two different ways of framing the decision make these different reasons salient, which affects people’s choices.Footnote 64 This explanation is in accordance with the psychological literature on “reason-based choice.” A classic example from that literature is the custody decision, where the question of which parent should get custody elicits the same answer as the question of which parent should not get custody: the questions elicit a search for positive and negative attributes respectively, which would be reasons for giving or not giving custody, and one parent has both more positive and more negative attributes.Footnote 65

The idea that the presentation of the decision affects the reason that people act on can explain a wide class of framing effects, including ones that involve motivations. Reasons are connected to motivations. We can think of the reason for which an agent acts as her motivating reason, so framing can affect an agent’s motivating reason.

In the decomposed prisoner’s dilemma, game-theoretic reasoning about monetary payoffs conflicts with being helpful or generous. This is sometimes referred to as “might versus morality.”Footnote 66 According to PruittFootnote 67 and Colman,Footnote 68 the decomposition makes helpfulness, or the moral side of the coin, more salient. Their suggestion is supported by evidence that the way subjects frame the prisoner’s dilemma correlates with the move they make. Subjects who perceive playing C as cooperative and playing D as noncooperative are more likely to play C.Footnote 69 Similarly, cooperative types (defined as such because they behave cooperatively) tend to frame the dilemma in terms of morality.Footnote 70 If moral reasons support a different choice from game-theoretic dominance reasoning and the salience of these reasons can be affected by the presentation of the decision, then people will make different choices in different frames. Further, BrunerFootnote 71 postulates that once an agent has categorized a situation, incongruent cues may be “gated out.” Bruner does not say how or why gating out occurs but, in cognitive psychology, there is a well-known effect called assimilation, where an agent perceives an object’s attributes as more typical of the category that is being used than they actually are.Footnote 72

The reason-based explanation of framing effects is consistent with the evidence of a connection between framing and behavior in prisoner’s dilemmas, but refines it by offering a direction of causality, namely that framing the game in moral terms may lead to cooperative behavior by increasing the perception of, and hence the chance that people act on, moral or other-regarding reasons.

A framing theory of motivation crowding can also explain the paradigm examples of motivation crowding, where payment crowds out intrinsic motivations. These do not involve changes in explicit descriptions. However, the monetary payment may still affect the way subjects frame the situation. Take Deci’sFootnote 73 experiments, where subjects were given puzzles to solve. There were two periods in this experiment. In the second period, subjects were left alone in the room with the puzzles. This was the same for all subjects. In the first period, half of the subjects were paid to solve puzzles and half of the subjects played with them without payment. Deci found that first period activity affected second period behavior, even though naïve theory suggested that the second period was the same for all subjects. The first period activity may have served as an implicit framing task. The puzzles were supposed to be interesting to solve for their own sake. Subjects who were paid were given another way to think about the puzzles: as an activity engaged in to make money. In the second period, the monetary payment was withdrawn. If the subjects who had been paid framed the task of solving them in terms of money, and acted on their monetary motivations, then their reason for solving the puzzles would have gone. Concrete thinkers, who do not generally search for information, would not investigate whether there were other reasons to carry on solving the puzzles.Footnote 74

When an agent is performing a task that she has intrinsic reasons or other-regarding reasons to do and she is also being paid, then her action is overdetermined. There is a sense in which she is over-justified—because she has multiple reasons in favor of her action, not because the price is seen as an instrument of control. If people are one-reason decision makers, then one of the motivations will become the primary motivating reason, at the expense of any others. Why should the monetary rather than the nonmonetary reason become the motivating reason?

We can answer this question by drawing on attribution theory, according to which actors are more likely to attribute their behavior to external factors than internal ones.Footnote 75 So attribution theory would predict that, if agents are offered payment, then they will attribute their motivation to the payment, rather than any intrinsic or pro-social motivation they may also have had. This is also supported by evidence from the Fundamental Attribution Error.Footnote 76 The Fundamental Attribution Error is an asymmetry in the way people explain behavior, with people explaining their own behavior differently from the way they explain the behavior of others. The important thing for us is that people tend to attribute the causes of their own behavior to their external situation (whereas they tend to attribute other people’s behavior to internal traits). So if I send in my paper late, then I explain it by saying things such as “I had some important emergencies that prevented me from finishing on time” (whereas if your paper is late I am more likely to say that you are bad at time management or cannot stick to deadlines). So the Fundamental Attribution Error supports the idea that if we offer someone a reward for performing a behavior, then she is likely to attribute her behavior to the presence of the reward. In that case, it is not surprising that she would stop the behavior when the reward is withdrawn.Footnote 77

The cases I have discussed so far have all been examples of both framing effects and motivation crowding. However, the framing mechanism can also explain examples where there is an actual change in the situation as well as a change in framing, that is, examples that are not framing effects. So I am not claiming that all motivation crowding effects are framing effects and I do not mean to make any claim about the rationality of motivation crowding. But the process of framing, which operates in framing effects, also operates in cases of motivation crowding. Introducing a reward also introduces a new way to think about the behavior, as being done for a reward. There is a change in the agent’s frame. Changing the way that people frame a problem may change their motivating reason. Once someone has the concept of doing something for a reward in their frame (or, as Lindenberg and Frey put it, uses a “gain frame”), then it becomes likely that withdrawal of payment leads to cessation of the activity. This mechanism explains the contention of Lindenberg and Frey,Footnote 78 that a “gain frame” will “crowd out” other ways of framing the task.

V. Implications for Research

There has been a tendency for economists to resist adding frames as a primitive to their theories and a tendency to think of framing as irrational. Both of these tendencies are mistakes.

Frames are usually thought of as a purely cognitive feature. In the classic accounts of framing effects, frames may affect the attractiveness of options, quite literally. For instance, describing beef as 25 percent fat instead of 75 percent lean makes people rate it as less likely to be tasty.Footnote 79 The standard question raised by framing effects is how people can be so irrational as to change what they want when all that has changed is the description. If the decision is whether to have a surgical procedure with a 90 percent survival rate and a 10 percent mortality rate,Footnote 80 then there are serious consequences that follow from the choice. In my account of motivation crowding as involving framing, frames also have normative features. A change in frame is not just about changing the attractiveness of an option; it also changes what motivations and behaviors are seen as appropriate.

Framing is a part of the decision-making process, prior to assessing options and making choices. Most motivation crowding effects are not framing effects, even if they do involve framing, so the rationality or otherwise of framing effects is orthogonal to this discussion. But we might note that these general effects on motivation cast doubt on at least some of the reasons for declaring framing effects irrational. The core assumption is that it is irrational for one’s choice to depend merely on the description. One reason that has been given for this is that the sort of selective seeing of a situation that is involved in framing is irrational; that rationality requires us to see all possible ways of framing a situation and that this requirement is imposed by orthodox decision theory.Footnote 81 The classic examples of framing effects have two obvious frames: the opposition between positive and negative. However, once we move to a theory where frames can activate motivations, then there are an infinite number of ways of framing the situation. For instance, in the prisoner’s dilemma, if there is one way of decomposing a matrix then there are an infinite number of possible decompositions. We have finite minds so we cannot see them all. This casts some doubt on whether it really is irrational not to see all the decompositions, unless rationality is merely a standard to which we aspire rather than a state we have any hope of achieving. Separating discussion of framing from the presumption of irrationality is a good thing because the presumption of irrationality may be a barrier to economists incorporating framing in their models.

Another unsuccessful argument against adding frames to the primitives of rational choice theory is that we can do all of the work using expectations. Some of the examples I discussed above might involve a change in expectations. For instance, changing the framing in the decomposed prisoner’s dilemma by decomposing the game or by calling it the “Community Game” may change a player’s expectations about what the other player will do. In many theories this change in expectations will cause a change in behavior. For example, in the RabinFootnote 82 model of reciprocal fairness, agents want to be kind to agents who they expect will be kind to them. If a player cares about Rabin-kindness and the decomposed dilemma leads her to expect that her co-player will cooperate, then the change in expectations could lead her to cooperate. But why would decomposing the dilemma increase a player’s expectation that her co-player will be kind, without including an increased perception of the possibility of kindness in the explanation?

To see more clearly why this must be the case, consider an alternative theory that gives a prominent role to expectations: the idea that people are acting on social norms, whereby they have a conditional preference that they conform given that others will too.Footnote 83 In order for a social norm to lead to cooperation in a prisoner’s dilemma, a player needs to know that a social norm exists and have an expectation that other players will cooperate. So there are two routes by which a change in presentation could lead to a change in behavior. Either it could directly cause a player to perceive that they are in a situation that is governed by a social norm, when s/he did not see that before, or it could change expectations about the other player’s behavior. If a player’s own frame has not changed but the change in presentation has changed her expectations about others, then the player’s beliefs about whether the other player has perceived the norm must have changed. So either her frame or her beliefs about the other player’s frame have changed; either way, we cannot dispense with the notion of a frame. The same applies to the case of Rabin-kindness; just replace “norm” with “Rabin-kindness” in the argument. In both cases, the change in expectations of behavior occurs because there is a change in expectation about how the other player frames the decision.
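To see the structure of this argument in miniature, consider a toy decision rule (my own illustration, not Bicchieri’s formal definition or Rabin’s model): a player cooperates only if she herself perceives a norm and expects enough compliance from the other player, and that expectation is itself built out of her belief about the other player’s frame.

```python
# Hypothetical toy rule (my illustration, not a model from the cited literature)
# showing why expectations cannot do all the work without frames.

def expects_cooperation(p_other_perceives_norm, p_complies_given_norm):
    """My expectation that the co-player cooperates is mediated by my belief
    about the co-player's frame (whether she perceives the norm at all)."""
    return p_other_perceives_norm * p_complies_given_norm

def cooperates(i_perceive_norm, expectation, threshold=0.5):
    """Conditional-preference rule: cooperate only if I see the norm and
    expect enough compliance from the other player."""
    return i_perceive_norm and expectation > threshold

# A relabeling ("Community Game") that raises cooperation must enter through one
# of the two frame-dependent inputs: my own perception of the norm, or my belief
# about whether the other player perceives it.
print(cooperates(True, expects_cooperation(0.9, 0.8)))  # True
print(cooperates(True, expects_cooperation(0.2, 0.8)))  # False
```

On either route, the relabeling works through a frame: mine, or my model of the other player’s.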

There is also evidence that framing can affect honest behavior without changing expectations. Cohn, Fehr, and MaréchalFootnote 84 ran an honesty experiment, where subjects’ payments depended on the outcome of a coin toss, which they self-reported, giving them the incentive to report dishonestly. (In this type of experiment, the subjects’ actions are anonymous. With a large number of participants we would expect the distribution of heads and tails to follow a binomial distribution; for instance with a single coin toss we would expect heads and tails each to come up 50 percent of the time, so if the distribution of the subjects’ reports is skewed away from that, it is a sign of dishonest reporting, and one can compare dishonesty between experimental conditions by comparing the distribution of the number of heads reported.) The subjects were bank employees and the researchers found that making subjects’ professional identities as bank employees salient increased dishonest reporting. However, they also measured subjects’ beliefs about other bank employees’ reporting behavior, and this was not affected by the framing. The change in behavior seems to have been caused by the framing, not by the expectations of what others would do.
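For readers unfamiliar with this design, the group-level detection logic can be sketched as follows, using hypothetical numbers (Cohn, Fehr, and Maréchal’s actual task, sample sizes, and figures differ): honesty is never observed at the individual level; instead, the distribution of reports in each condition is compared with the binomial benchmark expected under honest reporting.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of at least k successes in n fair reports, under honesty."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration: 100 subjects per condition, one paid
# coin flip each, where reporting "heads" pays out.
reported_heads = {"control frame": 54, "professional-identity frame": 63}

for condition, heads in reported_heads.items():
    tail_prob = p_at_least(heads, 100)
    print(f"{condition}: {heads}/100 heads reported; "
          f"P(at least this many | honest reporting) = {tail_prob:.3f}")
# A reported rate well above 50 percent that is very unlikely under the binomial
# benchmark signals dishonest reporting; conditions are compared by how far
# their reporting distributions depart from that benchmark.
```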

When discussing situations with normative features, behavioral economists have tended to focus on expectations, even when those expectations are connected to contexts. For instance, in Bicchieri’sFootnote 85 theory of social norms, a norm is triggered by the context, but the existence of a norm is defined as a network of expectations, and the tests of the theory involve fixing a context and testing the effect of changing expectations. Similarly, List,Footnote 86 when discussing his finding that the amount sent in dictator games is sensitive to whether the experiment also includes the option to take money, concludes that the traditional set-up “evokes expectations of the ‘givers’ and ‘receivers’ that seemingly demand a positive gift,” and he infers that the different choice sets invoke different social norms. One research implication that follows from the importance of framing is that we should investigate what frames people bring to the situations we study, how those frames connect to their motivations, and use that knowledge to formulate testable hypotheses about what motivates their behavior and what will induce behavior change.Footnote 87 We need to focus on frames, not just on expectations.

Some of what we find might surprise us. For instance, the market frame is associated with the efficacy of financial incentives and the pursuit of self-interest, but it is also associated with fairness and trust. Societies with market structures are more likely to be cooperative;Footnote 88 priming markets leads to senders sending more money in a trust game, but not in a dictator game.Footnote 89 Markets are not only about the pursuit of narrowly defined self-regard; they are also constrained by rules. Exchanges are not simultaneous; someone usually has to be the first mover. But when you hand your money to the butcher, the brewer, or the baker, you are confident that they will hand over the goods.Footnote 90 Many people do not count their change. They can feel safe doing that because, in markets, bargaining is permitted but cheating is not.

We need framing even if we think both market and non-market behavior are encompassed in a single overarching theory of behavior. One solution that has been offered to Das Adam Smith Problem is that, although there seem to be two spheres, the principles of action are actually the same in each.Footnote 91 For instance, SmithFootnote 92 argues that we maximize the gains from exchange in both markets and personal exchange, but in markets this is done through non-cooperative self-interest and in personal exchange through reciprocity. However, even in this theory, people need some way of identifying when market exchange is appropriate and when they should be engaging in personal exchange. People create and maintain strong distinctions among different kinds of social relations and meaning systems, to convey whether exchanges are gifts, entitlements, or payments.Footnote 93 If there is a mismatch between the two sides of an exchange, between people who are pro-socially motivated and people who are not, then there is the opportunity for exploitation.Footnote 94

The picture of decision-making proposed here recommends a different role for framing in institutional design than that suggested by the way framing is perceived in the “heuristics and biases” program. Instead of setting up framing in order to enable people to be rational, we should design institutions that support framings that produce good outcomes. Sometimes that will involve a market frame and financial incentives, but other times it will involve supporting non-market ways of seeing the interaction and pro-social motivations. Institutional designers need to ensure that incentives offered are congruent with motivations, if they are to achieve their required results. And when making institutional changes, the impact on frames should be considered. It may be the case that treating people as though they are extrinsically motivated will actually cause them to be so motivated, creating the need for incentives and rewards where none existed before. (Further discussion of the idea that designing institutions as if people are knaves causes them to behave as such can be found in FreyFootnote 95). Or, when we desire to change the culture of institutions, designers could consider what frames are in play and how to change them.

VI. Conclusion

Something that was present at the origins of political economy was later lost and is now being rediscovered: the importance of pro-social motivations, and how they interact with and can be a corrective to self-regard. I have argued that we should understand motivations as being prompted by different normative frames. This leads to a new research direction, investigating a broader range of frames, and a policy recommendation, to design institutions to support frames that we consider desirable and efficacious. The currency of reward needs to be appropriate to the motivation, and feedback from rewards to frames should be considered. But we need to do this in ways that do not promote the exploitation of those who are pro-social, whether because they are not adequately financially rewarded or because they are taken advantage of by a self-regarding partner in the interaction.

One question I have not addressed is why we should design institutions to support pro-sociality, rather than letting market incentives take their course. One answer is that sometimes pro-social outcomes are more effective. Another might speak of the type of society we want to live in. A third brings me back to Adam Smith. In this essay, I have spoken quite narrowly of self-regard. But people’s enlightened self-interest includes behaving pro-socially. Helping others is welfare increasing.Footnote 96 The idea that social well-being is a part of our self-interest would have been familiar to Smith and is another return to the origins of political economy.Footnote 97

References

1 See Montes, Leonidas, “Das Adam Smith Problem: Its Origins, the Stages of the Current Debate, and One Implication for Our Understanding of Sympathy,” Journal of the History of Economic Thought 25, no. 1 (2003): 63–90, for a survey of the current debate.

2 Mill, John Stuart, “On the Definition of Political Economy, and on the Method of Investigation Proper to It,” London and Westminster Review, October 1836. Reprinted in Essays on Some Unsettled Questions of Political Economy, 2nd ed. (London: Longmans, Green, Reader and Dyer, 1874), essay 5, paragraphs 38 and 48.

3 Mas-Colell, A., Whinston, M. D., and Green, J. R., Microeconomic Theory (New York: Oxford University Press, 1995), 3.

4 Batson, C. D., “Why Act for the Public Good? Four Answers,” Personality and Social Psychology Bulletin 20, no. 5 (1994): 603–610.

5 Parfit, Derek, Reasons and Persons (Oxford: Oxford University Press, 1984).

6 Grant, A. M., “Does Intrinsic Motivation Fuel the Prosocial Fire? Motivational Synergy in Predicting Persistence, Performance, and Productivity,” Journal of Applied Psychology 93, no. 1 (2008): 48.

7 Feinberg, Joel, “Psychological Egoism,” in Shafer-Landau, Russ and Feinberg, Joel, eds., Reason and Responsibility (Boston, MA: Wadsworth, 1978).

8 Butler argued long ago in his Sermons that it is not in one’s self-interest to be self-regarding (Joseph Butler, Fifteen Sermons Preached at the Rolls Chapel [Cambridge: Hilliard and Brown, 1726]); for a more recent argument against psychological egoism see E. Sober and D. S. Wilson, Unto Others (Cambridge, MA: Harvard University Press, 1998).

9 Sen, Amartya, “Rational Fools: A Critique of the Behavioral Foundations of Economic Theory,” Philosophy and Public Affairs 6 (1977): 317–44.

10 Batson, “Why Act for the Public Good?” 603–610.

11 Batson, C. D., Shaw, L. L., “Evidence for Altruism: Toward a Pluralism of Prosocial Motives,” Psychological Inquiry 2, no. 2 (1991): 14.Google Scholar

12 Batson, C. D., Altruism in Humans (New York: Oxford University Press, 2011); Batson, “Experimental Tests for the Existence of Altruism,” Proceedings of the Biennial Meeting of the Philosophy of Science Association (1992); Batson and Shaw, “Evidence for Altruism.”

13 Batson, Altruism in Humans; Batson, “Experimental Tests for the Existence of Altruism,” 224.

14 Camerer, Colin F., Behavioral Game Theory (Princeton, NJ: Princeton University Press, 2003).

15 Chen, Yan and Li, Sherry Xin, “Group Identity and Social Preferences,” American Economic Review 99, no. 1 (2009): 431–57.

16 Sugden, Robert, “Thinking as a Team: Towards an Explanation of Nonselfish Behavior,” Social Philosophy and Policy 10, no. 1 (1993): 69–89; M. Bacharach, Beyond Individual Choice: Teams and Frames in Game Theory (Princeton, NJ: Princeton University Press, 2006).

17 Becker, G. S., “Crime and Punishment: An Economic Approach,” in The Economic Dimensions of Crime, ed. Fielding, Nigel, Clarke, Alan, and Witt, Robert (London: Palgrave Macmillan, 1968), 13–68.

18 Baldry, Jonathan C., “Tax Evasion is Not a Gamble: A Report on Two Experiments,” Economics Letters 22, no. 4 (1986): 333–35; Jonathan C. Baldry, “Income Tax Evasion and the Tax Schedule: Some Experimental Results,” Public Finance 42, no. 3 (1987): 357–83.

19 Bosco, L. and Mittone, L., “Tax Evasion and Moral Constraints: Some Experimental Evidence,” Kyklos 50, no. 3 (1997): 297–324.

20 Gordon, J. P., “Individual Morality and Reputation Costs as Deterrents to Tax Evasion,” European Economic Review 33, no. 4 (1989): 797–805.

21 Batson, “Why Act for the Public Good?” 603–610.

22 Fehr, E. and Fischbacher, U., “Third-Party Punishment and Social Norms,” Evolution and Human Behavior 25, no. 2 (2004): 63–87; C. Bicchieri, The Grammar of Society: The Nature and Dynamics of Social Norms (New York: Cambridge University Press, 2005).

23 Batson, “Why Act for the Public Good?” 603–610.

24 Fehr, E. and Schmidt, K. M., “The Economics of Fairness, Reciprocity and Altruism: Experimental Evidence and New Theories,” Handbook of the Economics of Giving, Altruism and Reciprocity 1 (2006): 615–91.

25 Mellström, C. and Johannesson, M., “Crowding Out in Blood Donation: Was Titmuss Right?” Journal of the European Economic Association 6, no. 4 (2008): 845–63.

26 Gneezy, U. and Rustichini, A., “A Fine Is a Price,” The Journal of Legal Studies 29, no. 1 (2000): 1–17.

27 Frey, B. S. and Oberholzer-Gee, F., “The Cost of Price Incentives: An Empirical Analysis of Motivation Crowding-Out,” The American Economic Review 87, no. 4 (1997): 746–55.

28 Fehr and Fischbacher, “Third-Party Punishment and Social Norms,” 63–87.

29 E. Fehr and S. Gächter, “Do Incentive Contracts Undermine Voluntary Cooperation?” University of Zurich, Working Paper Series, 2002. Available at: https://www.researchgate.net/publication/228260609_Do_Incentive_Contracts_Undermine_Voluntary_Cooperation.

30 Frey, Bruno S., Not Just for the Money (London: Edward Elgar, 1997); Samuel Bowles, “Policies Designed for Self-Interested Citizens May Undermine the ‘Moral Sentiments’: Evidence from Economic Experiments,” Science 320, no. 5883 (2008): 1605–1609; Bowles, The Moral Economy: Why Good Incentives Are No Substitute for Good Citizens (New Haven, CT: Yale University Press, 2016).

31 Deci, E. L., Intrinsic Motivation (New York: Plenum Publishing Co., 1975), 175.

32 Deci, Intrinsic Motivation; E. L. Deci, “Effects of Externally Mediated Rewards on Intrinsic Motivation,” Journal of Personality and Social Psychology 18, no. 1 (1971): 105.

33 Deci, E. L. and Ryan, R. M., “The ‘What’ and ‘Why’ of Goal Pursuits: Human Needs and the Self-Determination of Behavior,” Psychological Inquiry 11, no. 4 (2000): 227–68; R. M. Ryan and E. L. Deci, “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being,” American Psychologist 55, no. 1 (2000): 68.

34 Ryan and Deci, “Self-Determination Theory,” 68.

35 Ryan and Deci, “Self-Determination Theory,” 56.

36 See also Grant, A. M., “Does Intrinsic Motivation Fuel the Prosocial Fire? Motivational Synergy in Predicting Persistence, Performance, and Productivity,” Journal of Applied Psychology 93, no. 1 (2008): 48–58.

37 Grant, “Does Intrinsic Motivation Fuel the Prosocial Fire?” 48–58.

38 In his paper, Grant refers to motivations as desires, so that intrinsic motivation is “the desire to expend effort based on interest in and enjoyment of the work itself” and pro-social motivation is “the desire to expend effort to benefit other people” (Grant, “Does Intrinsic Motivation Fuel the Prosocial Fire?” 49). I have not repeated this full definition because it seems to me incorrect to define a motivation as a desire; at the very least, the definition needs to be amended to the desire that is acted on, since we may have plenty of desires that are latent or never acted on.

39 It is not clear that motivations like fairness fit so neatly into this dichotomy, since fairness can be about following correct processes (procedural) as well as about fair outcomes.

40 We might note that it is not so clear whether this contention is true. Grant’s position (see “Does Intrinsic Motivation Fuel the Prosocial Fire?”) is consistent with that of other researchers who have hypothesized that self-control and cooperation (especially in prisoner’s dilemmas) both require the subjugation of short-term goals to long-term ones (Dewitte, S., and Cremer, D. D., “Self-Control and Cooperation: Different Concepts, Similar Decisions? A Question of the Right Perspective,” The Journal of Psychology 135, no. 2 [2001]: 133–53). However, there is evidence that cooperation in prisoner’s dilemmas is the spontaneous, intuitive response, which is reined in by reflective decision-making (D. G. Rand and M. A. Nowak, “Human Cooperation,” Trends in Cognitive Sciences 17, no. 8 [2013]: 413–25), which suggests that cooperation doesn’t require self-regulation so much as not thinking.

41 Grant, “Does Intrinsic Motivation Fuel the Prosocial Fire?” 49.

42 Grant, R. W., Strings Attached: Untangling the Ethics of Incentives (Princeton, NJ: Princeton University Press, 2011), takes this sort of line while discussing a slightly different question about the ethics of incentives.

43 Grant, “Does Intrinsic Motivation Fuel the Prosocial Fire?” 48–58.

44 Ryan and Deci, “Self-Determination Theory,” 68.

45 Bowles, “Policies Designed for Self-Interested Citizens,” 1605–1609.

46 Nieli, R., “Spheres of Intimacy and the Adam Smith Problem,” Journal of the History of Ideas 47, no. 4 (1986): 611–24; R. Roberts, How Adam Smith Can Change Your Life: An Unexpected Guide to Human Nature and Happiness (New York: Portfolio Press, 2015).

47 Ross, L. and Ward, A., “Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding,” in Reed, E., Turiel, E., and Brown, T., eds., Values and Knowledge (Mahwah, NJ: Lawrence Erlbaum, 1996), 103–135.

48 Gneezy and Rustichini, “A Fine Is a Price,” 1–17; Heyman and Ariely, “Effort For Payment: A Tale of Two Markets,” 787–793.

49 Lindenberg, S. and Frey, B. S., “Alternatives, Frames, and Relative Prices: A Broader View of Rational Choice Theory,” Acta Sociologica 36, no. 3 (1993): 191–205.

50 Bacharach, M., “Framing and Cognition: The Bad News and the Good,” in Dimitri, N., Basili, M., and Gilboa, I., eds., Proceedings of ISER Workshop XIV: Cognitive Processes in Economics (London: Routledge, 2003), 63–74.

51 A. Tversky and D. Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211, no. 4481 (1981): 453–58.

52 L. Ross and A. Ward, “Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding,” in Reed, Turiel, and Brown, Values and Knowledge, 103–135.

53 Messick, D. M. and McClintock, C. G., “Motivational Bases of Choice in Experimental Games,” Journal of Experimental Social Psychology 4, no. 1 (1968): 1–25; D. Pruitt, “Reward Structure and Cooperation: The Decomposed Prisoner’s Dilemma Game,” Journal of Personality and Social Psychology 7, no. 1 (1967): 21–27.

54 However, we cannot assume that a player would see any given decomposition from the parent game because, if a prisoner’s dilemma is decomposable, then there are an infinite number of possible decompositions (Messick and McClintock, “Motivational Bases of Choice in Experimental Games”).
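To make that infinitude concrete, here is a minimal worked sketch (my own illustration, using the standard payoff labels T > R > P > S and my own notation x_1, y_1, x_2, y_2; it is not drawn from Messick and McClintock). On one standard way of setting up a decomposition, choosing Cooperate allocates x_1 to oneself and y_1 to the other player, choosing Defect allocates x_2 and y_2, and each cell of the parent game is the sum of the two players’ allocations:

\begin{align*}
R = x_1 + y_1, \qquad T = x_2 + y_1, \qquad S = x_1 + y_2, \qquad P = x_2 + y_2,
\end{align*}

which forces the decomposability condition R + P = T + S. Whenever that condition holds, x_1 can be chosen freely and the remaining values follow as y_1 = R - x_1, x_2 = T - R + x_1, and y_2 = S - x_1, so a decomposable prisoner’s dilemma admits a one-parameter, and hence infinite, family of decompositions.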

55 Pruitt, “Reward Structure and Cooperation”; Komorita, S. S., “Cooperative Choice in Decomposed Social Dilemmas,” Personality and Social Psychology Bulletin 13, no. 1 (1987): 53–63; Cookson, R., “Framing Effects in Public Goods Experiments,” Experimental Economics 3, no. 1 (2000): 55–79.

56 Pruitt, D., “Motivational Processes in the Decomposed Prisoner’s Dilemma Game,” Journal of Personality and Social Psychology 14, no. 3 (1970): 227–38.

57 Pruitt, “Motivational Processes in the Decomposed Prisoner’s Dilemma Game,” 235.

58 Colman, A. M., Game Theory and Its Applications in the Social and Biological Sciences, 2nd ed. (Oxford: Butterworth-Heinemann, 1995).

59 Kahneman, D. and Tversky, A., “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47, no. 2 (1979): 263–91.

60 Dietrich, F. and List, C., “Reason-Based Choice and Context-Dependence: An Explanatory Framework,” Economics and Philosophy 32, no. 2 (2016): 175–229; Weirich, P., “Utility and Framing,” Synthese 176, no. 1 (2010): 83–103; Gold, Natalie and List, C., “Framing As Path Dependence,” Economics and Philosophy 20, no. 2 (2004): 253–77; Schick, F., Ambiguity and Logic (Cambridge: Cambridge University Press, 2003).

61 Schick, Ambiguity and Logic.

62 Slovic, P., Fischhoff, B., and Lichtenstein, S., “Response Mode, Framing, and Information-Processing Effects in Risk Assessment,” in Bell, D., Raiffa, H., and Tversky, A., eds., Decision Making: Descriptive, Normative and Prescriptive Interactions (Cambridge: Cambridge University Press, 1988), 152–66; B. Fischhoff, P. Slovic, and S. Lichtenstein, “Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Representation,” Journal of Experimental Psychology: Human Perception and Performance 4, no. 2 (1978): 330.

63 Montgomery, H., “Decision Rules and the Search for a Dominance Structure: Towards a Process Model of Decision Making,” Advances in Psychology 14 (1983): 343–69.

64 Gold and List, “Framing As Path Dependence.”

65 Shafir, E., Simonson, I., and Tversky, A., “Reason-Based Choice,” Cognition 49, nos. 1–2 (1993): 11–36. It is possible to translate between the “value-based” model given by Tversky and Kahneman and the “reason-based” tradition. Roughly, what Kahneman and Tversky describe as a change in curvature of the utility function becomes a difference in how people value the options. See also Gold and List, “Framing As Path Dependence.”

66 Liebrand, W. B., Jansen, R. W., Rijken, V. M., and Suhre, C. J., “Might Over Morality: Social Values and the Perception of Other Players in Experimental Games,” Journal of Experimental Social Psychology 22, no. 3 (1986): 203–215.

67 Pruitt, “Motivational Processes in the Decomposed Prisoner’s Dilemma Game.”

68 Colman, Game Theory and Its Applications in the Social and Biological Sciences.

69 Baranowski, T. A. and Summers, D. A., “Perception of Response Alternatives in a Prisoner’s Dilemma Game,” Journal of Personality and Social Psychology 21, no. 1 (1972): 35.

70 Liebrand, Jansen, Rijken, and Suhre, “Might Over Morality.”

71 Bruner, J., “On Perceptual Readiness,” Psychological Review 64, no. 2 (1957): 123–52; Camerer, C. F., Behavioral Game Theory: Experiments in Strategic Interaction (Princeton, NJ: Princeton University Press, 2011).

72 Herr, P. M., Sherman, S. J., and Fazio, R. H., “On the Consequences of Priming: Assimilation and Contrast Effects,” Journal of Experimental Social Psychology 19, no. 4 (1983): 323–40.

73 Deci, Intrinsic Motivation; Deci, “Effects of Externally Mediated Rewards on Intrinsic Motivation,” 105.

74 In further support of the framing theory of motivation crowding for the paradigm effects, we know that the salience of the reward affects motivation crowding. M. Ross (“Salience of Reward and Intrinsic Motivation,” Journal of Personality and Social Psychology 32, no. 2 [1975]: 245–54) has shown that a highly salient reward is more detrimental to intrinsic interest than the same reward when it is relatively non-salient. He also showed that reward is less detrimental when the subject’s attention is distracted from it.

75 Jones, E. E., Kanouse, D., Kelley, H., Nisbett, R., Valins, S., and Weiner, B., eds., Attribution: Perceiving the Causes of Behavior (Morristown, NJ: General Learning Press, 1972); Heider, F., The Psychology of Interpersonal Relations (New York: Wiley, 1958).

76 Ross, L., “The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process,” in Advances in Experimental Social Psychology, ed. Berkowitz, L. (New York: Academic Press, 1977), 173–220.

77 Interestingly, the Fundamental Attribution Error might also explain why people fail to predict motivation crowding effects in others, choosing to use incentives even when their effect is counterproductive (E. Fehr and J. A. List, “The Hidden Costs and Returns of Incentives: Trust and Trustworthiness Among CEOs,” Journal of the European Economic Association 2, no. 5 [2004]: 743–71).

78 Lindenberg and Frey, “Alternatives, Frames, and Relative Prices.”

79 Levin, I. P. and Gaeth, G. J., “How Consumers Are Affected by the Framing of Attribute Information before and after Consuming the Product,” Journal of Consumer Research 15, no. 3 (1988): 374–78.

80 McNeil, B. J., Pauker, S. G., Sox, H., and Tversky, A., “On the Elicitation of Preferences for Alternative Therapies,” New England Journal of Medicine 306, no. 2 (1982): 1259–62.

81 Skyrms, Brian, “Review of Frederick Schick’s Making Choices,” The Times Literary Supplement 949 (1998): 30.

82 Rabin, M., “Incorporating Fairness into Game Theory and Economics,” The American Economic Review 83, no. 5 (1993): 1281–1302.

83 Bicchieri, The Grammar of Society.

84 Cohn, A., Fehr, E., and Maréchal, M. A., “Business Culture and Dishonesty in the Banking Industry,” Nature 516, no. 7529 (2014): 86–89.

85 Bicchieri, The Grammar of Society.

86 List, J. A., “On the Interpretation of Giving in Dictator Games,” Journal of Political Economy 115, no. 3 (2007): 84.

87 G. Lakoff (The All New Don’t Think of An Elephant!: Know Your Values and Frame the Debate [White River Junction, VT: Chelsea Green Publishing, 2014]) has already been doing this sort of thing, not in the context of incentives, but in the context of political persuasion.

88 Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., et al., “In Search of Homo-Economicus: Behavioral Experiments in 15 Small-Scale Societies,” American Economic Review 91 (2001): 73–78.

89 Al-Ubaydli, O., Houser, D., Nye, J., Paganelli, M. P., and Pan, X. S., “The Causal Effect of Market Priming on Trust: An Experimental Investigation Using Randomized Control,” PloS One 8, no. 3 (2013): e55968.

90 Natalie Gold, “Trustworthiness and Motivations,” in D. Vines and N. Morris, eds., Capital Failure: Rebuilding Trust in Financial Services (Oxford: Oxford University Press, 2014), 129–53.

91 Otteson, James, “Adam Smith’s Marketplace of Morals,” Archiv für Geschichte der Philosophie 84, no. 2 (2002): 190–211; Smith, V. L., “The Two Faces of Adam Smith,” Southern Economic Journal 65, no. 1 (1998): 2–19.

92 V. L. Smith, “The Two Faces of Adam Smith.”

93 Zelizer, V. A. R., The Social Meaning of Money (Princeton, NJ: Princeton University Press, 1997).

94 Anderson, E. S., “Is Women’s Labor a Commodity?” Philosophy and Public Affairs 19, no. 1 (1990): 71–92.

95 Frey, B. S. and Oberholzer-Gee, F., “The Cost of Price Incentives: An Empirical Analysis of Motivation Crowding-Out,” The American Economic Review 87, no. 4 (1997): 746–55.

96 Grant, A. M., Give and Take: A Revolutionary Approach to Success (Westminster: Penguin, 2013).

97 Schmidtz, David, “Adam Smith on Freedom,” in Hanley, Ryan Patrick, ed., Adam Smith: His Life, Thought, and Legacy (Princeton, NJ: Princeton University Press, 2016), 208–227; Paganelli, Maria Pia, “The Adam Smith Problem in Reverse,” History of Political Economy 40, no. 2 (2008): 365–82.

Figure 1. Alternative presentations of the prisoner’s dilemma.