The target article by Baumard et al. argues that an intrinsic motive for fairness has been socially selected and has thus evolved as one of the “mental and social mechanisms that produce moral judgments and interactions” (Abstract). Alternatively (it seems), the authors suggest that people may feel like selfishly free-riding, but are restrained by “a prudence which . . . is built into our evolved moral disposition” (sect. 2.2.2, para. 3). Either way, an innate moral preference is said to account for three otherwise anomalous kinds of self-depriving behavior: where a subject (1) helps strangers without expectation of return, (2) cooperates in anonymous, one-shot games, and (3) pays to punish others for their moves in public goods games (sect. 2.2). The argument for social selection is well thought out. However, before we add either special motive to the long list of elementary needs, drives, and other incentives that have been discerned in human choice (e.g., Atkinson & Raynor 1975), we should examine whether known properties of reward might not explain a preference for fairness, or for the very similar traits of inequity aversion (Frohlich et al. 2004) and game-theoretic choreography (Gintis 2009, pp. 41–44).
Much of the target article discusses how people arrive at cognitive judgments of fairness, but the tough problem is motivational. It may be that “competition among cooperative partners leads to the selection of a disposition to be intrinsically motivated to be fair” (sect. 2.2.1, para. 12), but people continue to have a disposition to be selfish as well, and perhaps also a disposition to be altruistic and leave themselves open to exploitation. Among these dispositions, morality does not compete like just another taste, but leads people to “behave as if they had passed a contract” (sect. 3.2.2, para. 1, italics in the original; see also sects. 1 and 2.2.2). The article's central problem is, “since [people] didn't, why should it be so?” (sect. 1, para. 2). The authors' proposal of an innate moral preference to solve this “puzzle of the missing contract” (sect. 1, para. 3) just names the phenomenon, rather than supplying a proximate mechanism for the contract-like faculty.
Rather, we should look at the purpose of the contract. The payoffs for selfish choices are almost always faster than the payoffs for moral ones. If I fake fairness like an intelligent sociopath, I may eventually be found out, but I will reap rewards in the short run; and the likelihood that I will get away with any given deception increases my temptation to try it. Thus, even if I realize that fairness serves my own long-term interests, I face ongoing pressure from my short-term interests to cheat. There is still controversy over whether people overvalue imminent rewards generally (hyperbolic discounting; see Ainslie 2010; 2012) or only when we are emotionally aroused (hyperboloid discounting; see McClure et al. 2007), but in either case I will often have the impulse to cheat when it is against my long-term interest. Since faking my motives is an entirely intrapsychic process, the only way I can commit myself not to do it is to interpret my current choice as a test case for how I am apt to choose in the future: “If I am hypocritical [or biased, or selfish . . .] this time, why wouldn't I expect to be next time?” Thus bundled together, a series of impulses loses leverage against a series of better, later alternatives – greatly if the discounting is hyperbolic, less so, but still possibly, if it is hyperboloid (Ainslie 2012). Then, to the extent that I am aware of my temptation problem, I will have an incentive to make personal rules against deciding unfairly – that is, to interpret each choice where I might be unfair as a test case of whether I can expect to resist this kind of temptation in the future. I draw the line between fair and unfair by the kind of reasoning that Baumard et al. describe, and then face reward contingencies similar to those of a repeated Prisoner's Dilemma. Whatever my reputation is with other people, I will have a reputation with myself that is at stake in each choice, and which, like my social reputation, is disproportionately vulnerable to lapses (Monterosso et al. 2002).
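To make the bundling arithmetic concrete, here is a minimal numerical sketch (mine, not from the target article or the commentary literature). It assumes Mazur's hyperbolic discount function, V = A/(1 + kD), and the amounts, delays, and rate k are chosen purely for illustration. Evaluated singly at the moment of choice, the smaller-sooner payoff wins; summed as a bundle of five similar future choices, the larger-later series wins.

```python
# A minimal sketch of choice bundling under hyperbolic discounting.
# Assumes Mazur's formula V = A / (1 + k*D); all numbers are illustrative.

def hyperbolic_value(amount, delay_days, k=1.0):
    """Present value of `amount` received after `delay_days` days."""
    return amount / (1.0 + k * delay_days)

SS = (50, 1)    # smaller-sooner reward: 50 units, 1 day away (e.g., cheating)
LL = (100, 4)   # larger-later reward: 100 units, 4 days away (e.g., fairness)

# Faced one at a time, the imminent reward dominates at the moment of choice:
print(hyperbolic_value(*SS))   # 25.0
print(hyperbolic_value(*LL))   # 20.0 -> the impulse wins

# The same choice interpreted as a test case for five similar choices,
# spaced 10 days apart, stakes the whole series on the current move:
spacing, n = 10, 5
bundle_ss = sum(hyperbolic_value(50, 1 + i * spacing) for i in range(n))
bundle_ll = sum(hyperbolic_value(100, 4 + i * spacing) for i in range(n))
print(round(bundle_ss, 2))     # 34.19
print(round(bundle_ll, 2))     # 35.75 -> the bundled later rewards win
```

This is the sense in which a personal rule changes the contingencies: by treating the current choice as predictive of all similar future choices, it pits the summed value of the later, larger rewards against the summed value of the impulses, rather than one impulse against one distant reward.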
This dynamic can account for two of the three phenomena that the authors highlight as seeming anomalies for mutualism:
1. Although helping strangers without expectation of return can be rewarding in its own right, I may also help them because of a personal rule for fairness at times when I would rather cheat and could do so without social consequences. Then I do behave as if I had made a social contract. The contract is real, but exists between my present self and my expected future selves. Like the oral contracts among traders that Baumard et al. list (sect. 2.1.3, para. 1), my contract is self-enforcing. I may still get away with cheating, by means of the casuistry with personal rules called rationalization; or I may instead become hyper-moral, if I am especially fearful of giving myself an unfavorable self-signal (Bodner & Prelec 2001). Either deviation moves me away from optimal social desirability, but my central anchor is just where Baumard et al. say it should be.
2. To the extent that my reputation with myself feels vulnerable, I may reject an experimenter's instruction to maximize my personal payoff in a one-shot Prisoner's Dilemma or Dictator game, and instead regard the game as another test case of my character (Ainslie 2005). Such an interpretation makes it “not that easy . . . to shed one's intuitive social and moral dispositions when participating in such a game” (sect. 3.3.2, para. 1).
3. No further explanation seems necessary for the punishment phenomenon. It is not remarkable that subjects become angry at either cheating or moralizing stances by other subjects, and pay to indulge this anger. As with phenomenon (2), the seeming anomaly arises from experimenters' assumptions that the reward contingencies they set up for a game are the only ones in subjects' minds.
As for the cognitive criteria of partners' value: talent and effort probably do not exhaust the qualities that are rationally weighed in social choice. Wealth or status conveyed by inheritance or the happenstance of history have always been factors, and transparency itself – how easy a person is to evaluate – must be another. But the authors' proposal of social selection will work perfectly well with other criteria for estimation. The hard part of their goal (“to contribute . . . proximate and ultimate explanations of human morality”; target article, Abstract) has been to explain the semblance of bargaining when counterparties are apparently absent. This can be accomplished by the logic of internal intertemporal bargaining, without positing a specially evolved motive.
ACKNOWLEDGMENT
This material is the result of work supported with resources and the use of facilities at the Department of Veterans Affairs Medical Center, Coatesville, PA. The opinions expressed are not those of the Department of Veterans Affairs or of the US Government.