R1. Introduction
The most important and influential contributions to the study of human cooperation and morality in the past thirty years have focused on group selection and altruistic morality. Formal models, experimental studies using economic games, and cross-cultural investigations have remarkably enriched the evolutionary study of morality. We are grateful for these contributions, and we share the sense of intellectual challenge and excitement they have created. Our theoretical approach is, however, a different one. As we explained in the target article, we see the evolution of morality as resulting from the individual-level selection of a moral sense of fairness that enhances one's chances of doing well in the competition to be chosen as a partner in cooperative ventures. Our article focused on the presentation and defense of this mutualistic view and on arguing that it has deep and wide relevance to the study of morality. In particular, we argued that the mutualistic approach provides an attractive interpretation of results of economic games experiments that have been heralded as strong evidence for group selection, and moreover that it explains some subtle features of these results that have been relatively ignored.
We chose to focus on economic games because they are the methodology most used by evolutionary-minded behavioral scientists and because they provide a way to study a range of moral behaviors (distributive justice, mutual aid, retributive justice) with the exact same methodology (thus avoiding the risk of cherry-picking the one convenient experiment that happens to fit one's predictions). However, we agree with Clark & Boothby, Binmore, Dunfield & Kuhlmeier, and Graham that economic games, whatever their merits, have serious limitations in the study of morality. In the target article, for reasons of space, we could not do more than allude to a variety of other sources of evidence: economic anthropology, legal anthropology, behavioral economics (other than economic games), econometrics, experimental psychology, and developmental psychology (but see Baumard [2010a] for a comprehensive review). Other fields, such as social psychology and in particular equity theory – here we agree with Binmore – are also of great relevance.
Our discussion of the economic games literature was not meant to offer a knockdown argument for the mutualistic approach and against the group selection altruistic approach (which need not be seen as mutually exclusive). Rather, it was meant to present an array of challenges to uncritical reliance on group selection in explaining human morality by highlighting, on many specific issues, alternative mutualistic explanations. These specific challenges were not taken up in the commentaries, and, in consequence, our response mostly focuses on the internal challenges of the mutualistic approach rather than on a comparison with its altruistic counterpart.
The first part of this response, section R2, is focused on the evolutionary level and on issues raised by our account of the evolution of morality in terms of partner-choice mutualism. The second part, section R3, is focused on the cognitive level and on the characterization and workings of fairness. In the third part, section R4, we discuss the extent to which our fairness-based approach to morality extends to norms that are commonly considered moral even though they are distinct from fairness. We are very grateful to all our commentators for thoughtful, insightful, and constructive comments!
R2. Partner choice
The core notion of the theory put forward in the target article is that of partner choice, a notion which is perhaps best understood when contrasted with that of partner control. In partner-control models, such as the iterated Prisoner's Dilemma, individuals cannot choose their partners: They are stuck with a given partner, and they can only either cooperate with this partner or defect, thereby losing all the benefits of the interaction. We argued that fairness is unlikely to have evolved in such a constrained environment, since the least powerful partner in the interaction has no choice but to accept offers, even the most unfair ones. By contrast, in partner-choice models, individuals can choose their partners. The least powerful partner therefore always has the option to refuse being exploited and to look for more generous partners. In the end, since individuals have equal outside options, the evolutionarily stable strategy is to share the benefits of cooperation impartially.
R2.1. Can partner control be as effective as partner choice?
DeScioli mentions several ways in which even splits might occur in specific games and under specific conditions without outside options. However, he does not show how these special cases might realistically generalize to the evolution of fairness. True, the Nowak et al. (2000) article, mentioned by DeScioli in his commentary, claims to show that reputation does allow the evolution of fairness in the absence of partner choice. André and Baumard (2011b) argued, however, that this result is an artifact of a restriction of the parameter space, without which Nowak et al.'s model would not yield fairness. DeScioli suggests that the well-known Nash Bargaining Solution provides another possible explanation of fairness (a solution explored by Gauthier 1986). However, while it does indeed sometimes correspond to fairness, the Nash Bargaining Solution is not a strategic equilibrium of "standard" (i.e., non-cooperative) game theory; it is chosen, rather, on the basis of a priori axioms (including Pareto optimality). Hence, it cannot be seen as a way to explain the existence of fairness in nature.
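For readers less familiar with it, a standard textbook statement of the Nash Bargaining Solution (added here only for clarity; it is not specific to DeScioli's commentary) may help make the point. For two bargainers with a set S of feasible utility pairs and a disagreement point (d_1, d_2), the solution selects

\[
(x^{*}, y^{*}) \;=\; \arg\max_{(x,y)\in S,\; x\ge d_1,\; y\ge d_2} \,(x-d_1)(y-d_2),
\]

which, for a transferable pie of size 1 and equal disagreement payoffs, yields the even split x* = y* = 1/2. The solution is singled out by axioms (Pareto optimality, symmetry, invariance to affine rescaling of utilities, and independence of irrelevant alternatives), not by any account of how selection, learning, or strategic play would lead bargainers to it, which is why, as we note above, it cannot by itself explain why fairness exists in nature.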
DeScioli also mentions Thomas Schelling's well-known idea of salient coordination points. Could fairness, as DeScioli suggests, simply emerge as such a salient point in a game of coordination? Equality can be a salient point for simple cognitive reasons (a pair of equal quantities stands out among various pairs of unequal quantities). Not all fair distributions, however, are equal; when contributions are unequal, so are fair distributions. Then, either the relevant salient point is equality, and unequal fair distributions are not explained in terms of salient points; or else fair splits are always salient – but, if so, presumably they are salient because they are fair and people care for fairness, rather than the other way around. This, of course, leaves wholly unexplained the existence, evolution, and role of fairness. It is partner choice, we have argued, that explains why humans tend to coordinate on fair splits, leaving open the possibility that saliency plays a role in explaining the how.
R2.2. Is partner choice as unconstrained as partner control?
The multiplicity of equilibria (a consequence of the so-called folk theorem) stems from the fact that there are many different ways to cooperate that are more profitable to both partners than not cooperating at all (see, e.g., Aumann & Shapley 1974). Classic mutualistic models, in the form of partner-control models, typically fail to determine a single distribution of the benefits of cooperation, let alone a fair one. Alvard, Binmore, and Fessler & Holbrook argue that our approach suffers from the same weakness. A proper discussion would require a formal development, but it is worth explaining informally here why we disagree.
In contrast to partner-control models, partner-choice models are characterized by individuals having richer outside options than just forsaking cooperation altogether. This strongly restricts the range of distributions that can be mutually agreeable. Fewer outcomes are acceptable when one can also cooperate elsewhere, or differently, than when one is trapped with a single partner. More precisely, the richness of outside options in partner-choice models has two relevant effects: First, it has an effect on the fairness of cooperation (the distribution of the benefits). Second, it has an effect on the amount of cooperation (the amount of benefit). The first is the only one we have formalized so far. If two individuals involved in an interaction can each play the other's role with third parties, this prevents biased outcomes and secures fairness (André & Baumard 2011b).
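To make this logic concrete, here is a minimal toy simulation in Python (an illustrative sketch written for this response, not the analytical model of André & Baumard 2011b; the population size, search cost, and mutation rate are arbitrary). Proposers evolve the share of a unit benefit that they offer a partner; a responder accepts only if the offer is at least as good as her outside option, namely what she could expect by leaving and playing the proposer role with a third party, minus a small search cost. Because exploited partners can always switch roles elsewhere, offers are pushed toward an even split.

import random

POP, GENS, SEARCH_COST, MUT = 300, 1000, 0.05, 0.01  # arbitrary illustrative values

def run(seed=0):
    random.seed(seed)
    # Heritable trait: the share of the unit benefit offered to the partner.
    offer = [random.random() for _ in range(POP)]
    for _ in range(GENS):
        mean_offer = sum(offer) / POP
        # Responder's outside option: the average payoff of proposers elsewhere, minus a search cost.
        outside = max(0.0, (1.0 - mean_offer) - SEARCH_COST)
        # A proposer keeps its share only if the responder has no better option than accepting.
        payoff = [1.0 - o if o >= outside else 0.0 for o in offer]
        weights = payoff if sum(payoff) > 0 else [1.0] * POP
        # Fitness-proportional reproduction with small mutations on the offer trait.
        offer = [min(1.0, max(0.0, random.choices(offer, weights=weights)[0]
                              + random.gauss(0.0, MUT)))
                 for _ in range(POP)]
    return sum(offer) / POP

if __name__ == "__main__":
    print(f"mean offer after {GENS} generations: {run():.2f}")
    # Typically settles near (1 - SEARCH_COST) / 2, i.e. close to an even split.

The point of the sketch is only that the endogenous outside option, not any built-in preference for equality, does the work: the acceptance threshold is computed from the population itself.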
The effect of partner choice on the amount of cooperation is less straightforward. If everyone in a population cooperates at exactly a given intensity h, whatever the value of h, partner choice cannot move the population away from this state. If, for instance, everyone in a population hunts stag, or everyone hunts rabbit, either state is an equilibrium, even with partner choice. Partner choice therefore does not automatically eliminate the diversity of equilibria with regard to the amount of cooperation. In reality, however, populations are never entirely monomorphic for a single level of cooperation (as exemplified by the experiments of Gill, Packer, & van Bavel [Gill et al.]). There are always natural sources of variation, such that it pays to compare potential partners and to choose the most cooperative, yielding a selective pressure in favor of ever more cooperative individuals. A toy sketch of this second effect follows below. So, we would argue, both the fairness and the amount of cooperation are constrained in partner-choice models in a way in which they are not in partner-control models.
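Again as a purely illustrative sketch (our construction with arbitrary parameter values, not a published model): if the population is strictly monomorphic, inspecting candidates changes nothing, but even modest standing variation in cooperativeness means that comparing two candidates and keeping the more cooperative one steadily raises the population mean.

import random

POP, GENS = 200, 200  # arbitrary illustrative values

def evolve(initial_sd, seed=0):
    random.seed(seed)
    # Heritable cooperation level; no mutation, so any change comes from partner choice alone.
    coop = [max(0.0, random.gauss(0.5, initial_sd)) for _ in range(POP)]
    for _ in range(GENS):
        next_gen = []
        for _ in range(POP):
            a, b = random.sample(coop, 2)   # inspect two candidate partners...
            next_gen.append(max(a, b))      # ...and the more cooperative one is chosen (and copied)
        coop = next_gen
    return sum(coop) / POP

if __name__ == "__main__":
    print(f"monomorphic population : mean cooperation {evolve(0.00):.2f}")  # stays at 0.50
    print(f"with standing variation: mean cooperation {evolve(0.05):.2f}")  # rises toward the upper tail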
It is worth noting here – and this is the point at which to answer Roberts' important remarks – that the second consequence of partner choice (its effect on the amount of cooperation) is historically the first to have been considered by evolutionary biologists, in particular in Roberts' own seminal work (Roberts 1998; see also, more recently, Aktipis 2004; 2011; McNamara et al. 2008; for discussions of the importance of variability in social behavior, see McNamara & Leimar 2010). We want to underscore the key role that these predecessors have played in helping the scientific community, including ourselves, understand the importance of partner choice for cooperation. Our own contribution is original, compared with these earlier models, in that we are primarily interested in the first effect of partner choice – its effect on the fairness of cooperative interactions rather than on the amount of cooperation.
R2.3. Does fairness boil down to bargaining power?
We see partner choice, and hence market-like phenomena, as the key factor in the evolution of human fairness. However, DeScioli is right to underscore the relationship that our model bears to bargaining theory. Even more than to bargaining theory in general, our approach to fairness is specifically related to the study of bargaining in markets, as pioneered by Rubinstein (1982).
This, however, leads to an apparent paradox, well highlighted by DeScioli and by Guala and also suggested by Fessler & Holbrook. If fairness is a consequence of bargaining with outside options, then fairness should be nothing but a translation into moral norms of the relative bargaining power of individuals, or, even worse, fairness should simply be a form of bargaining. DeScioli and Guala rightly remark that this would run counter to our current understanding of fairness.
If fairness is a direct translation of bargaining power, then why, for instance, should we be outraged by hotels in New York increasing their prices after the 9/11 attacks? Why do we find it unfair to raise the price of snow shovels after a snowstorm? Why, more generally, do we often find free-market outcomes unfair? Why, in other words, is our moral compass more stable and constant than the caprices and vagaries of bargaining power? The answer is that individuals, in their social interactions, look for good partners, and good partners do not behave in accordance with their strategic options at each and every instant. Let us explain.
In essence, the problem of cheating and the problem of partiality are similar. Evolutionary approaches have focused on the problem of cheating, but cheating can be described as an extreme case of partiality consisting in taking the benefits without paying any cost. Cheating and partiality are versions of the same problem of commitment and have the same solution, namely, reputation. Just as it is not advantageous for an individual to choose a partner who is likely to cheat when in a position to do so (e.g., when his partner will have no other option but to accept his decision, as in the prisoner's dilemma), it is not advantageous to choose a partner who is likely to be partial when he is in a position to be so (i.e., when his partner will have no better option than to accept his offer).
The reason individuals do not cheat on prices in a focal interaction, for instance, when they sell snow shovels after a snowstorm (Kahneman et al. 1986a), is that their reputation depends on their being committed to being reliably fair over time, rather than on each time getting the best they are in a position to bargain for. In the long-term interaction between customers and the hardware storekeeper, there will be circumstances in which either the customers (after a dry winter) or the storekeeper (after a snowstorm) would be in a position to extract a bigger share of the benefits, but doing so would precisely compromise the mutual commitment to fairness that is beneficial to both. The "fair" price is thus the price that corresponds not to each and every local bargaining situation, but to the longer-term relationship that renders the interaction between customers and shopkeeper mutually advantageous. Of course, this price takes into account the costs and benefits of each partner (the production cost of snow shovels, the transportation costs, etc.), but these costs and benefits are assessed with long-term considerations in mind.
R2.4. On the relationship between partner choice and group selection
In the three preceding subsections, we have answered objections to our claim that, among mutualistic approaches, partner-choice models provide a better explanation of morality than do partner-control models. There are, of course, altogether different approaches to morality. In particular, as we noted, a well-developed and highly influential approach (or family of approaches) sees the evolution of altruism through group selection as key to explaining morality. Several commentators (Atran; Binmore; Rachlin, Locey, & Safin [Rachlin et al.]; and Gintis implicitly) suggest that group selection may provide a better account of at least some aspects of morality than does the mutualistic approach.
Herbert Gintis is, together with Christopher Boehm, Sam Bowles, Rob Boyd, Ernst Fehr, Joe Henrich, and Pete Richerson, one of the developers of the most comprehensive and influential group selection (or multi-level selection) approach to human cooperation, now called the Beliefs, Preferences, and Constraints (BPC) model (see references in Gintis's commentary), an approach that has greatly contributed to making the field an intellectually exciting one. We were therefore both gratified and surprised to see Gintis stating that "we are in broad agreement" and that "all of the human behaviors affirmed by [us] fit nicely into the BPC model, and are in no way in conflict with [their] stress on altruistic cooperation and punishment." After all, we argue that partner-choice mutualism evolved on the basis of individual-level selection, and we give no role to group selection in our approach. We claim that, among humans, partner choice created selective pressure for the evolution of a moral sense of fairness. We argue that this moral sense provides a better explanation of evidence from economic games than does an altruistic disposition resulting from group selection. It is true that multi-level selection has no problem giving a relatively minor role, in its global picture of cooperation, to the individual-level selection of mutualistic dispositions. The stress on altruistic cooperation and punishment that Gintis mentions implies, however, giving the main role in the evolution of cooperation and morality to group-level selection, and, on this, we beg to disagree.
What then is the relationship of our approach to group selection? There are two ways to see this relationship. The two approaches may be complementary (as Alvard argues and Rachlin et al. suggest), or they could be alternatives (as Binmore suggests). Let us consider these two possibilities in turn.
Is group selection needed as a complement to partner choice? Are we, in fact, proposing a mere amendment to a general paradigm in which group selection would remain a central component? Rachlin et al. implicitly raise this question in their commentary. Alvard explicitly argues that group selection does remain indispensable to explain human cooperation, even with partner choice. Here, we disagree. In our framework, group selection is not necessary to explain the existence of human morality. Indeed, group selection, at least in its latest form (see Boyd et al. [2011] for a recent review), is presented as a mechanism to select among the multiple equilibria entailed by the folk theorem. As we have argued in section R2.2 above, partner choice can select among equilibria just as well.
Partner choice and group selection do therefore offer alternative accounts of human cooperation (as Binmore suggests). The two theories entail different evolutionary processes and predict partly different patterns of cooperative interaction. In principle, group selection should lead to utilitarian forms of social behavior, whereby individuals behave so as to maximize the total welfare of their group. In contrast, partner choice, as we have argued, leads to a fair form of cooperation, because no one need accept an outcome in which she gains less than she could gain with other partners. Therefore, each time there is a tension between the utilitarian outcome (maximizing global welfare) and the fair outcome, the two theories make different predictions. As we have argued in the target article, most empirical observations show that humans prefer fair, not utilitarian, arrangements, thereby contradicting the predictions derived from group selection and supporting the predictions derived from partner choice.
R2.5. Morality among nonhuman animals
In their comments, Bshary & Raihani and Warneken raise the important question of the species-specificity of the sense of fairness. After all, partner choice occurs not only among humans but also among many nonhuman species.
As long as there are mutualistic interactions between individuals, choosing and being chosen do partly determine one's reproductive success. This may be the case among great apes, where some (though not many) mutualistic interactions seem to take place (Muller & Mitani 2005). It could also occur in mutualistic interactions between species (i.e., the standard ecological sense of "mutualism," as Bshary & Raihani rightly remind us), such as in the cleaner fish–client fish mutualism (Bshary & Schäffer 2002), or in the interaction between terrestrial plants and their symbiotic fungi (see, e.g., Kiers et al. 2011). In principle, this could be the case in all species in which individuals cooperate to hunt or to raise young (Burkart et al. 2009; Scheel & Packer 1991). In every case, the distribution of the benefits of the interactions is open to a conflict of interests that could be resolved through partner choice.
What, then, is fundamentally different in the human case? In Warneken's words:
If nonhuman primates also engage in mutually beneficial interactions and seek out other good cooperators, why does this not scale up to a “full-fledged moral sense” (…) characterizing humans? … [T]his suggests that some other factors are necessary to explain human-like morality beyond mutualism and partner choice.
Note that what defines morality is not fairness as a property of interactions, but that these interactions are guided and evaluated on the basis of a sense of fairness, a property of the social cognitive capacities of the individuals interacting. Consider a species involved in just one type of mutualistic interaction; say, the collective hunting of one kind of prey. The distribution of the benefits of this activity may be determined by partner choice and may result in a fair distribution. The members of that species, however, don't have to choose their partners on the basis of fairness, but only on the basis of their behavior in hunting and sharing these prey. To be chosen, individuals must have in this respect, and in this respect only, a disposition to behave in a quite specific way that we, the external observers, might judge to be fair, but that is sufficiently and more economically defined by its behavioral properties.
In contrast, humans have a wide, diverse, and quite open range of forms of interaction that may yield mutual benefits and where choosing the right partner and being chosen matter. In such conditions, effective partner choice involves inferring general psychological dispositions from a wide variety of evidence – not only observation of behavior but also communicative interaction with potential partners and communication with third parties about candidates' reputations. The general psychological disposition that is desirable in a potential partner is, we claim, a disposition to act fairly across situations, as we discussed in the target article. This then creates a social selective pressure for the development of a true sense of fairness.
In the current state of our knowledge, we believe that the much narrower and relatively fixed range of mutually beneficial interactions occurring in nonhuman species (see Tomasello & Moll [2010] for a discussion of cooperation among great apes) does not result in the social selection of a general, and hence properly moral, sense of fairness. It is conceivable, however, that we might be underestimating the richness of nonhuman cooperation. For instance, the diversity and complexity of mutual aid in dolphins are extremely impressive (see Connor 2007; Connor & Norris 1982), leaving open the possibility that these animals might be endowed with the ability to evaluate the fairness of their partners in a way similar to our own.
R3. The sense of fairness
Before discussing specific aspects of fairness, we must correct three misunderstandings. Some earlier mutualistic approaches, for instance Gauthier's, could be understood as portraying mutualists as rational maximizers of their own interest. In an evolutionary perspective, however, the distinction between the evolutionary level and the cognitive level allows combining selfishness (at the evolutionary level) with genuine morality (at the psychological level). We may not have been clear enough on this, since Shaw & Knobe have based their discussion on an understanding of mutualism as mere self-interested reciprocity, whereas we understood it as fairness – and we stressed the distinction in our article. Therefore, we do not see their examples and interesting experimental evidence as weighing against our approach, but rather quite the opposite. Ramlakhan & Brook similarly have based their discussion on the incorrect idea that self-interest may motivate one to behave fairly, whereas what we discuss is the evolution of an intuitive sense of, and preference for, fairness that is genuinely moral. Finally, it is important to distinguish between people's moral intuitions and the rationalizations and folk theories they build on these intuitions (Haidt 2001). Hence, while we agree with Machery & Stich that it may well be the case that "some cultures do not distinguish moral from non-moral norms," we do not agree that, if so, then "the moral domain fails to be a psychological universal whose evolution calls for explanation." Moral intuitions and folk theories of morality are two very different things.
R3.1. An evolved sense of fairness?
Several commentators express broad skepticism towards the central role we give to evolution in our approach to morality: Rochat & Robbins "smell circularity" in our appeal to evolution; according to Ainslie, our "proposal of an innate moral preference … just names the phenomenon, rather than supplying a proximate mechanism," and what we set out to do "can be accomplished … without positing a specially evolved motive." Still – to move to issues that are more specific and more open to fruitful discussion – everyone agrees that there must be evolved abilities without which humans would not be a moral species, that there is a developmental story to be told that is crucial to explaining individual and cultural differences, and that the proximate mechanisms of morality must be described and explained. For our part, we certainly do not believe, contrary to what Rachlin et al. attribute to us, that the acquisition of a moral sense occurs "solely as an evolutionary process occurring over the history of the species." What we do believe is that the individual development of moral capacities – and the cultural evolution of morality on which Rachlin et al. rightfully insist – is made possible by a domain-specific adaptation, a biologically evolved moral sense. Some of our critics, on the other hand, think that the relevant evolved dispositions are not specific to morality.
Guala writes, "Humans may have evolved a much more general capacity to normativize behaviour." And, according to Rochat & Robbins, "The product of natural selection would be conformists rather than moralists. In this account, moral values would derive from conventions, and this is evident by looking at children in their development." While we do agree that "infants are born with … a sensitivity for how things appear to be done in their social surroundings" (Rochat & Robbins), what we find missing is an explanation of why, and in what sense, the norms this sensitivity would help stabilize across generations should be moral norms.
For Ainslie, the psychological basis of moral choice is a more general ability to adopt personal rules so as to resist the lure of short-term rewards and pursue more valuable long-term interests, since, he writes, "The payoffs for selfish choices are almost always faster than the payoffs for moral ones." Similarly, Ainsworth & Baumeister point out that "fairness impulses must compete in the psyche against selfish impulses" and argue that self-regulation – that is, "the executive capacity to adjudicate among competing motivations, especially in favor of socially and culturally valued ones" – must play "a decisive role in social cooperation." Rachlin et al. also underscore the role of self-control in morality. We agree that being fair typically requires forsaking immediate gratification, that a sense of fairness does not by itself provide the ability to do so, and that therefore, for morality to be possible at all, there must indeed be an ability to give precedence to long-term goals (whatever the exact workings of this ability). Such an ability, however, is relevant not just to moral behavior but also to any form of long-term enterprise, from the raising of cattle to the waging of war. So, at best, the ability to pursue long-term goals, together with a good understanding of the role of reputation in cooperation, might cause rational individuals to decide to be systematically fair (as suggested by Gauthier 1986), which is quite different from having an intuitive moral sense.
At this point, evidence about the development of morality becomes particularly relevant. As recalled by Warneken, classical studies in developmental psychology (Damon 1975; Piaget 1932) suggested that a sense of equity does not develop before the age of 6 or even later. They seemed to indicate that judgments of justice develop slowly and follow a stage-like progression, starting off with simple rules (e.g., equality) and only later evolving into more complex ones (e.g., equity). This picture has since changed considerably, and several of our commentators (Dunfield & Kuhlmeier, Rochat & Robbins, and Warneken) have contributed to this updated understanding of moral development. As Dunfield & Kuhlmeier summarize: "Taken together, recent research supports the idea that, under certain circumstances (e.g., instrumental need as opposed to material desire), early prosocial behaviours conform to the predictions of the presented mutualistic approach to morality."
Studies have shown that children as young as 12 months of age react to an unequal distribution (Geraci & Surian 2011; Schmidt & Sommerville 2011; Sloane et al. 2012). Baumard et al. (2011) show that children as young as age 3 are able to take merit into account and to give more to a character who contributed more to the production of a common good. This developmental pattern is found cross-culturally. Children living in Asian societies, who are often thought to be more collectivistic (Markus & Kitayama 1991; Triandis 1989), also show an early development of justice (Baumard et al., submitted). In the same way, despite culturalist theories postulating that justice and merit are linked to Western development (capitalist markets, state institutions, world religions; e.g., Henrich et al. 2010), children living among the Turkana in northern Kenya find it equally intuitive to give more to the character who contributed more to the common good (Liénard et al., submitted). We see this early and universal pattern as strongly suggesting that true morality is not a sophisticated, late, and non-universal intellectual achievement (as Kohlberg 1981 implied) but is based rather on an evolved sense of fairness.
R3.2. Morality and the emotions
Several of our commentators bring up the important topic of the relationship between morality and the emotions. Humans are endowed with a wide range of emotions: fear, disgust, anger, envy, shame, guilt, sympathy, pride, joy, and so on. Some of these emotions are moral in the sense that their proper function is to motivate individuals to behave morally or to react appropriately to the moral or immoral behavior of others. Most human emotions, however, are non-moral, and have other functions: managing one's reputation, deterring future aggressions, motivating self-interested behavior, helping one's close associates, and the like.
As Cova, Deonna, & Sander (Cova et al.) observe, if the mutualistic theory is true, then moral emotions should conform to the mutualistic logic of impartiality while others need not do so. This gives us a principled way to contrast moral and non-moral emotions. Consider sympathy (or empathy), mentioned by Dunfield & Kuhlmeier, Gintis, Rochat & Robbins, and Warneken, and often considered moral because of its prosocial character. Of course, sympathy often plays a role in motivating moral behavior. Still, it is not always in line with morality: It may lead us to be partial, for instance when we unduly favor our friends at the expense of others or when, in order to protect those we love, we put others at risk. This suggests that sympathy is not an intrinsically moral emotion; its function is not to cause us to be fair, but to help individuals – friends, spouses, children – whose welfare matters to us.
Shame is not intrinsically moral either. We can be ashamed of our physical appearance, of our ignorance, or of our relatives. When we are ashamed of our wrongdoings, we hide them rather than repair them, and we flee from our victims rather than confront them (Tangney & Dearing 2002). The function of shame, indeed, is not to make us moral but to manage our reputation (Fessler & Haley 2003), which explains why it may lead us to hide our crime rather than to do our duty. By contrast, guilt has been described as a purely moral emotion, and, in line with the mutualistic theory, it follows quite neatly the logic of fairness: It motivates us to repair our misdeeds, to compensate the victims, and, if that is not possible, to inflict some costs on ourselves so that we feel even with the people we have harmed (Tangney & Dearing 2002; Trivers 1971).
Similarly, anger (discussed by Cova et al.) – as opposed to outrage – may contradict morality. This is the case, for instance, when people are angry at infants for crying too much or at animals for being dirty. Anger is an ancient psychological mechanism, present in many nonhuman species (Clutton-Brock & Parker 1995), that does not aim at being fair but, mainly, at deterring future aggressions (McCullough et al. 2010) and at using physical force to coerce or to obtain a better bargaining position (Sell et al. 2009).
Of course, moral and non-moral emotions are often at play at the same time. For instance, when someone harms our interests, we feel both angry and outraged simultaneously. Our anger comes from our wanting to retaliate and defend ourselves, while our outrage arises from the injustice that was inflicted on us. This, as we note in the target article, explains why humans seem to be motivated to altruistically punish wrongdoers while, we suggest, they are just defending their interests by inflicting a cost on someone who is likely to attack again if not deterred. The reason why we think they are punishing others is that their retaliation is not blind (as it would be if they were solely motivated by considerations of deterrence): It is limited by considerations of fairness and is proportionate to the cost originally inflicted on us. We can thus distinguish between retaliation, a non-moral behavior motivated only by anger, and revenge, an act of anger aimed at inflicting a cost on the other party without going beyond what justice prescribes (Baumard 2010b). In line with this distinction, people discuss whether someone's retaliation is fair (proportionate) or unfair (selfish).
Similarly, disgust and outrage are sometimes triggered by the same events. If someone farts during a meal, we may feel both disgusted and outraged. Does this mean that disgust is a moral emotion, as claimed by Ramlakhan & Brook? We would argue that what can be seen as unfair and immoral is the causing of disgust. Similarly, wantonly causing physical pain or disappointment, and more generally inflicting any kind of cost on others (unless this cost is unavoidable or imposed as a price justly paid for a benefit), are commonly seen as immoral. Disgust in itself is no more intrinsically moral than pain or disappointment; it is the unfair causing of such negative emotions that is morally objectionable. We can agree, therefore, with Ramlakhan & Brook that "inflicting harm without justification" is immoral, but this is, we would suggest, not because of a distinct harm-based moral principle, but because doing so is grossly unfair.
It is worth noting (see Graham) that disgust can also bias moral judgment. We suggest that such a bias occurs not because disgust is at the basis of moral judgment but more simply because disgust biases the evaluation of the costs inflicted upon others. When a judge is tired or hungry, for instance, she may be more irritated or exasperated by criminal behavior and consequently inflict harsher punishment on the criminal (e.g., Danziger et al. 2011). Non-moral feelings can thus influence moral judgments.
R3.3. Mutualistic versus utilitarian and deontological views of human morality
One way to test a theory is by spelling out some of its consequences. Bonnefon, Girotto, Heimann, & Legrenzi's [Bonnefon et al.'s] commentary contributes to such an endeavor by highlighting a possible case of conflict between fairness and reputation. They describe the dilemma of an individual who "obtains an unfair benefit and faces the dilemma of hiding it (to avoid being excluded from future interactions) or disclosing it (to avoid being discovered as a deceiver)." They argue convincingly that an individual guided by a fairness morality should solve this dilemma in a principled way and disclose the unfair benefit. We appreciate the suggestion and agree. It would be very valuable to have experimental confirmation of this prediction, which would support our claim that, while the biological function of fairness morality is to enhance one's reputation, the psychological mechanism is that of a genuine moral preference.
Another way to test the theory empirically is by comparing it to its rivals. In moral philosophy, the standard theory is utilitarianism, the doctrine according to which morality aims at maximizing the welfare of the greatest number of people. In the last ten years, a range of studies has consistently shown that humans are not utilitarian (a point noted by Atran and by Kirkby, Hinzen, & Mikhail [Kirkby et al.]): They prefer a society that is less efficient and poorer but that treats everyone in a fairer way (Mitchell et al. 1993); they refuse to sacrifice one life to save many (Cushman et al. 2010; Mikhail 2007); they refuse harsh punishment even if it provides benefits (Baron & Ritov 1993; Carlsmith et al. 2002; Sunstein et al. 2000); and they don't see themselves as having the duty to share part of their resources with others in need even when this would benefit society (Singer 1972; Unger 1996). Of course, it is possible that human morality, although utilitarian at its base, often fails to follow the utilitarian doctrine (Baron 1994; Cushman et al. 2010; Sunstein 2005). A more parsimonious way to explain this consistent departure from consequentialism, though, is to abandon the idea that morality is about maximizing the welfare of society in favor of the view that it is about the impartial distribution of the benefits of cooperation.
In their comments, Kirkby et al. suggest another way to account for the non-utilitarian structure of the moral sense: the idea that morality is deontological. According to this view, the maximization of welfare would be constrained by a set of principles such as the prohibition of intentional battery and the principle of double effect (Mikhail 2007). Though we believe that these principles are to a large extent descriptively valid, we do not consider them "ultimate moral facts" but rather moral regularities that, at a deeper level, can be better explained in terms of fairness (Baumard 2010a).
R3.4. Is the sense of fairness universal?
A universal sense of fairness can combine with different beliefs (linked to the social context and to the information available) and yield quite different judgments or decisions. Is this sufficient to explain why, as Cappelen & Tungodden note, even in the well-controlled environment of the lab, "there appears … to be considerable disagreement about what are legitimate sources of inequality in distributive situations"? Participants with very similar backgrounds have, they observe, "three distinct fairness views: egalitarians (who always find it fair to distribute equally), meritocrats (who find it fair to distribute in proportion to production), and libertarians (who find it fair to distribute in proportion to earnings)." How can we account for such a diversity of opinions?
Fairness, we argued, is based on mutual advantage. There are always several ways to consider what might be mutually advantageous. Consider this example given by Gerald Cohen (2009). We usually see a camping trip as a communal enterprise:
There is no hierarchy among us… . We have facilities with which to carry out our enterprise: we have, for example, pots and pans, oil, coffee, … . And, as usual on camping trips, we avail ourselves of those facilities collectively… . Somebody fishes, somebody else prepares the food, and another person cooks it. People who hate cooking but enjoy washing up may do all the washing up, and so on. (Cohen 2009, pp. 3–4)
As Cohen notes, we could also imagine a very different camping trip where:
everyone asserts her rights over the pieces of equipment, and the talents that she brings, and where bargaining proceeds with respect to who is going to pay to whom to be allowed, for example, to use a knife to peel the potatoes and how much he is going to charge others for those now-peeled potatoes that he bought in an unpeeled condition from another camper, and so on. (p. 6)
Of course, this kind of organization would destroy what makes a camping trip fun (besides being quite time-consuming and inefficient), and most people would hate it. Cohen's example shows that, for any kind of cooperative interaction, there are many ways to organize both the contributions of the cooperators and the distribution of the resources. Similarly, there are many ways to interpret an economic game. Moreover, the very artificiality of economic games means that they have no conventional interpretation. Neither the situation nor the cultural background provides participants with clear and univocal guidance as to the kind of cooperative interaction they are having with one another: Is it more mutually advantageous to consider the game as a communal interaction (and be egalitarian), as a joint venture (and be meritocratic), or as a market exchange (and be libertarian)?
Public goods games, as Gill et al.'s commentary suggests, raise similar questions: One may or may not contribute to the common good, depending on whether one considers that participants' mutual interest lies in cooperating and earning money together, or that the configuration of the game (its anonymity, its artificial character) makes the sole pursuit of profit the only reasonable option (because one cannot trust other participants, or because it is windfall money). Moreover, the "consistent contributors" identified by Gill et al. may be systematically obeying what Ainslie calls a "personal rule" independently of the particulars of the situation, with, in the long run, reputational gains that offset the failure to take advantage of possible short-term gains, and also, as they show, a beneficial influence on other cooperators.
Ultimately, the mutualistic approach considers that all moral decisions should be grounded in considerations of mutual advantage. Tummolini, Scorolli, & Borghi [Tummolini et al.] may be right in arguing that there is an evolved sense of ownership, found also in other species and independent of fairness. We see that as a reason to claim that mere ownership, in the sense of possession, is not a moral fact. What transforms possession into property – that is, a right – is the consideration of mutual advantage. People acknowledge that it is mutually advantageous to recognize one another's property rights, allowing everyone to feel secure, make transactions, invest, and so on (De Soto 2000; North 1990). However, the same considerations limit property rights: Expropriation in the public interest is considered legitimate; owners of architectural landmarks or recognized works of art are not free to destroy or transform them at will; and so on. The reason for these limits is that a wholly unbounded property right would be less mutually beneficial.
Given the diversity of situations where issues of fairness arise and the fact that quite often they can be interpreted in more than one way, a universal fairness morality does not imply that across cultures or even within a culture there should be unanimity as to what is fair or not fair. So, when Cappelen & Tungodden say that "it seems that a truly mutualistic process should make us all libertarians," or when DeScioli says that our model "seems to predict that humans will perceive free-market capitalism as maximally fair," we do not agree. Yes, people who defend libertarianism or free-market capitalism may do so in the name of fairness, but a fairness-based critique of libertarianism or capitalism is also possible and in fact common. These opposed views, we suggest, are based on different interpretations of the arrangements to which the same fairness criterion is being applied. The fact that people disagree about what is fair no more entails that they have different conceptions of fairness than the fact that people disagree about what is true entails that they have different conceptions of truth.
The obvious fact that people commonly depart from fairness in their behavior is even less of an argument against the idea of a universal sense of fairness. We agree with Fessler & Holbrook that "most people appear somewhat flexible in their moral behavior in general, and in their mutualistic behavior in particular. True, many people behave in what is locally construed as a moral manner much of the time, but this is not the same as being invariantly moral or invariantly fair." We do not see this, however, as an objection to our account. The sense of fairness is only one of the psychological factors at work in decision making, for obvious evolutionary reasons: Achieving and maintaining a good moral reputation is not the sole priority of individuals. They also have to secure other kinds of goods (food, safety, sexual partners, etc.) and to make trade-offs between these goods and their moral reputation (for a review of life-history trade-offs, see Stearns 1992). Hence, we agree with Ainslie that "people continue to have a disposition to be selfish as well." This is no evidence against the claim that a sense of fairness is a human adaptation.
R4. Extending the mutualistic framework
For a long time, scholars of morality, from moral philosophers (Gauthier 1986; Rawls 1971) to evolutionary biologists (Alexander 1987; Trivers 1971) and developmental psychologists (Kohlberg 1981; Turiel 2002), have focused almost exclusively on the sharing of jointly produced resources and the prevention of harm. In this context, conceiving of morality in terms of fairness seemed, if not mandatory, at least quite reasonable. In the last two decades, though, following in particular the impetus given by Richard Shweder (cf. Shweder et al. 1987) and Jonathan Haidt (cf. Haidt et al. 1993), scholars of morality have enlarged their inquiry to a much wider range of normative issues, such as care for the needs of others, coalitional behavior, hierarchical relationships, and issues of purity and impurity.
Many commentators (Atran; Graham; Machery & Stich; Ramlakhan & Brook; Rochat & Robbins; Sachdeva, Iliev, & Medin [Sachdeva et al.]; Warneken), accepting this broadening of the moral domain, have questioned the scope of an account of morality in terms of fairness: Can it explain morality in general, or is it relevant to just a subset of moral phenomena? For several of these commentators, a mutualistic theory offers a plausible evolutionary and psychological account of interactions clearly governed by considerations of fairness, but its relevance beyond this is questionable. As Graham writes: “This is a good first step; the theory's predictions should now be tested in other domains, using other methods, to determine how well mutualism can explain the moral sense in all its instances.”
There are, from a mutualistic point of view, two main ways to approach this challenge, one involving a broader and the other a narrower understanding of morality. On the first, one could argue that the mutualistic framework, properly understood, readily extends to all these normative systems, providing a way to unify morality understood fairly broadly. This position does not deny that humans are equipped with a variety of other dispositions, such as kin altruism, coalitionary psychology, or disgust, that evolved to solve other evolutionary challenges (such as raising offspring or avoiding pathogens). It claims that insofar as these behaviors are moralized, they are so because they are regulated by considerations of fairness. The argument is similar to the case of anger discussed previously (see sect. R3.2). Anger did not evolve to motivate individuals to behave morally, and indeed it often leads individuals to be immoral. Sometimes, however, it is regulated and constrained by moral considerations, for instance, when individuals accept not going too far in their retaliation. In these cases, anger appears to be regulated by considerations of fairness (proportionality between the tort inflicted on the victim and the harm inflicted on the attacker). Thus, according to this view, fairness does not give rise to sexual or maternal behaviors, but regulates their expression in mutually advantageous situations.
According to a second, narrower approach to morality, many norms, including some norms associated with a sense of rights and duties, are not moral norms. Not only are parental care and in-group versus out-group preferences largely governed by domain-specific dispositions such as kin altruism or group solidarity, but these dispositions are also typically given normative expression in thought and in communication (with important cultural variability). This, however, is not enough to make these norms moral norms. This approach does not deny that considerations of fairness are relevant to behavior in these domains and that fairness-based, hence truly moral, norms may also apply. It is often difficult, moreover, to pry apart norms that are truly based on the moral sense from norms that are based on other dispositions, and some norms may be either ambiguous or mixed in this respect. In some cultural contexts, moreover, all these norms, whatever their evolved basis, are thought of as part of a single system (often with a strong religious tenor). As we have argued earlier, the existence of broad cultural views of morality is compatible with a narrower scientific view of morality proper as interacting with, but not encompassing, all systems of rights and duties.
These two approaches – arguing that the fairness approach readily extends to morality broadly construed, or doubting that the fairness approach can be sufficiently extended to account for all the relevant norms and arguing that some of these norms, however strong and respected, are not in fact moral norms – are both compatible with the theory presented in the target article. We, the authors of that article, do not agree among ourselves as to which of these two approaches might be the best: Jean-Baptiste André and Nicolas Baumard are keener to explore the broad approach, and Dan Sperber the narrow one (while we all three entertain the possibility that a position more fine-grained than we have been able to develop so far would cause us to converge on a compromise approach). In answering our commentators on the issue of the extension of moral systems, we briefly outline, therefore, not one but two possible answers, both of which are compatible with the mutualistic theory and either one of which, we believe, would address their legitimate concerns.
R4.1. Need-based morality
In the target article, we stressed proportionality, merit, and rights. But, as Clark & Boothby, Warneken, and Sachdeva et al. observe, not all interactions are based on these considerations. "Communal interactions," in particular with friends, are based on needs. We help our friends when they need us, without expecting from them a strict compensation for our help (see Clark & Jordan [2002], Deutsch [1975], and Fiske [1992] in social psychology, as well as the literature on care [Gilligan 1982] in developmental psychology). Does this mean that humans do not "have just one general moral strategy" (Clark & Boothby)? Or that, in some situations, morality does not rely on fairness but rather on empathy (Warneken)? Or again, that while some moralities are based on rights and reciprocity, others are based on duties and needs (Sachdeva et al.)?
As we pointed out (sects. 2.2.2 and 2.3.2 of the target article), cooperation is not restricted to exchange and collective actions; it also takes the form of mutual help. Individuals are members of formal or informal mutual insurance networks in which they help those in need and expect to be helped when in need themselves. Morality in mutual help (or “communal relationships” to use Clark & Boothby's term), however, may follow the same “general moral strategy” as in collective actions (or “exchange relationships”).
Consider, for instance, the duty to help our friends. A priori, it seems to be based only on the notion of need, and there is no bookkeeping of who brings what to the relationship: One friend can help the other more than she is helped. And yet, impartiality is everywhere: "I spent a week at the hospital, and she never visited me. Yet, it was just a thirty-minute drive!"; "Do you think that I can ask her to come every day to water my plants while I am away? I mean, she has her children and it is quite far away"; "I know that he does not understand anything about computers, but this is the third time this week he's asked me to come over to his home and help him with his new software!". In each case, the cost of helping needs to be proportionate to the benefits of being helped, just as the cost of buying insurance needs to be in proportion to the benefit provided by the insurance. Here, being partial would mean paying less than what mutual insurance requires (not visiting one's friend when the journey is quite short) or asking others to pay more than what the same mutual insurance requires (being helped each and every time one has a computer problem, regardless of other people's priorities).
In this perspective, "right-based moralities" and "duty-based moralities" (to use Sachdeva et al.'s terms) may in fact be two sides of the same coin. Duties are the counterpart of rights, and emphasis on independence or interdependence can be a matter of contextual constraints and opportunities. In societies where individuals depend heavily on one another, it makes sense to emphasize interdependence, collective goals, and duties toward others, because failing in one's duty is the most obvious way of harming others' interests. In contrast, in societies where individuals rely less on solidarity and mutual help, individual goals, rights, and freedom are more salient. In both cases, social interactions follow the logic of fairness. As ethnographic studies show, members of traditional societies where duties dominate are nevertheless quite capable of recognizing and defending their rights (Abu-Lughod 1986; Neff 2003; Turiel 2002).
To what extent can the argument be extended to the case of help among kin? In the course of a normal life, people are in turn helpless children with strong needs, parents with greater capacities to help, and elders with limited capacities and greater needs. Given this plurality of individual positions, it makes sense to consider the duties of adults, in particular of parents towards needy children and of adult children towards needy elderly parents, as a matter of help that is not reciprocal but nevertheless mutual over time, governed by considerations of fairness.
According to the narrow approach to morality, some of the main norms governing the care of one's children and other close relatives are grounded in an evolved disposition to favor carriers of one's own genes (Hamilton Reference Hamilton1964a; Reference Hamilton1964b). There are cases of conflict between these and fairness-grounded norms; for instance, in the treatment of biological children versus stepchildren. In such cases, not only do people often behave unfairly toward children who are part of their household but are not their biological children, they commonly consider that they are entitled to do so. For them, the right thing to do in these cases is not the fair thing to do. Still, among humans, fairness considerations do play an important role in care for relatives (arguably a decisive role when people too old to help with the family chores are nevertheless being cared for). On this narrow view of morality, then, care for relatives involves both moral and non-moral norms (independently of how the people themselves think of morality).
R4.2. Group-based morality
Atran, drawing on his own work on “sacred values” (Atran Reference Atran2010), questions whether a mutualistic account of everyday moral interactions throws light on what Choi and Bowles (Reference Choi and Bowles2007) call “parochial altruism,” which they define as the combination of altruism towards fellow group members and hostility towards members of other groups (see also Bernhard et al. Reference Bernhard, Fischbacher and Fehr2006). To answer, we first note that mutualism does not at all imply that an individual should have the same duties and expectations toward everyone. On the contrary, mutual moral commitments depend on social relationships and the opportunities they offer for mutually beneficial interactions. If being moral is having the qualities and behaving in a way that makes you a good partner, then it stands to reason that moral duties and rights differ between, say, spouses who spend their lives together and people who occasionally greet one another at the bus stop, or between members of the same soccer team and soccer players in general.
The logic of mutual advantage thus explains why people's sense of moral obligation is modulated according to closeness, distance, or absence of social relationships. In particular, since helping other members of one's group and benefiting in turn from their help is precisely what makes belonging to the group advantageous, treating everyone in the same way independently of affiliation would undermine the value that we accord to our stronger relationships. Hence, group solidarity is a direct consequence of mutualistic relationships. As mutualists, people recognize each other's right to have special commitments to members of their groups and networks. This right entails its own limits because it is normal and rightful to belong to several nested and overlapping groups, each of which is a source of legitimate rights and duties. If we want to enjoy the benefit of groups, we need to favor in-group members just as we expect them to favor us – not in every respect, but in those respects that make the group beneficial to its members. Thus, when David Kaczynski denounced his brother Theodore (a.k.a. the Unabomber), he felt that his duty to help his brother did not include being an accomplice in his serial bombings, whereas his duty as a citizen included helping to free others of the threat of such bombings, given that he was in a unique position to do so. In both cases, his duties were mutualistically calibrated to what he assumed he was entitled to expect from others, as a brother and as a citizen.
In this perspective, mutual interest may even command individual heroism. In special circumstances where the interests of individuals become identified with those of a group, as in the case of a military squad in an ambush or of citizens in an insurrection against a dictatorship, the self-sacrifice of some group members may be necessary for the group to achieve its goal or simply to survive. So, it is arguable that the “heroism, martyrdom, and other forms of self-sacrifice for the group” mentioned by Atran, while appearing to go well beyond fairness, are in fact a marginal but striking application of mutualism in extraordinary circumstances.
An alternative, narrower approach to morality would give a greater role in explaining parochial morality to the hypothesis that in-group solidarity and out-group hostility have evolved as autonomous human dispositions. They may have evolved as a biological adaptation, as has been argued with regard to other primate species (e.g., Wilson & Wrangham Reference Wilson and Wrangham2003) and as developed by John Tooby and Leda Cosmides (see Tooby & Cosmides Reference Tooby, Cosmides and Høgh-Olesen2010). And/or they may have evolved culturally, in relation to religion, as argued in particular by Atran and Henrich (Reference Atran and Henrich2010). Either way, humans would be endowed with motivations to act for their own group and against other groups. Such motivations are distinct from fairness-based moral motivations. They nevertheless give rise to a sense of rights and duties. In the name of one's religion, for instance, one may feel entitled to kill members of another religion, including children. One may see this as one's sacred duty without necessarily seeing it as fair to one's victims: it is just that one's sacred duty takes precedence over fairness considerations. Again, this narrow approach to what is truly moral does not involve denying the role of fairness considerations in attitudes and behavior toward in-group and out-group. What it involves is denying that all or even most of the relevant norms are ultimately grounded in such considerations.
R4.3. The morality of social hierarchies
According to the mutualistic theory, human morality is about impartially sharing the costs and benefits of social interactions. At first blush, this seems to lead naturally to the idea that, by default, resources should be shared equally. There are exceptions, of course: Fairness departs from equality when partners make unequal contributions to the common good. As Guala and Sachdeva et al. note, the mutualistic theory thus seems to apply well to egalitarian societies such as the hunter-gatherer groups in which our ancestors evolved or, to some extent, the modern capitalist societies where equality of rights is at least affirmed. By contrast, it seems at odds with traditional hierarchical societies.
How can the mutualistic theory account for the acceptance of rigid inequalities? How is it possible that humans, despite their taste for fairness, condone the enduring privileges of a minority? It may help, in addressing this question, to remember that in modern societies, which are in principle egalitarian, inequalities in resources are actually far greater than they have ever been in most traditional hierarchical societies. One could say of modern societies exactly what Sachdeva et al. say of traditional societies: “those at the bottom give considerably more to those on the top without reaping the reward of their contribution.” Still, leaving aside very high incomes (like those of finance managers or rock stars), which are quite often seen as unfair, most people in modern societies do not find unfair a ratio of, say, 1 to 10 in income (between a cashier and a lawyer or a surgeon, for instance). They commonly consider that professionals deserve to earn more because they bring more to society than unskilled workers do (Piketty Reference Piketty1999). The other main source of inequality, inheritance, is also commonly accepted as legitimate. People think that parents should be allowed to pass their wealth to their children, and that forbidding this would unfairly deprive people of the product of their lifetime's work. This shows that accepting high inequalities is, for many people, quite compatible with the view that the distribution of resources should be fair.
In traditional societies too, inheritance and market exchanges are major sources of inequality in resources that may, to that extent, also be seen as fair. Still, there is more to social hierarchies than differences in skills and inherited capital. Being an aristocrat or a slave, or a member of a given caste, with all the differences of rights these entail, is hardly ever thought of as a matter of fairness. Nevertheless, even in such birthright hierarchies, it can be argued that social interactions retain a clear mutualistic character. Shweder et al. (Reference Shweder, Much, Mahapatra, Park, Brandt and Rozin1997; cf. Shweder et al. [1987] mentioned by Sachdeva et al.) observe, for instance, that in India:
The person in the hierarchical position is obligated to protect and satisfy the wants of the subordinate person in specified ways. The subordinate person is also obligated to look after the interests and “well-being” of the superordinate person. (Shweder et al. Reference Shweder, Much, Mahapatra, Park, Brandt and Rozin1997, p. 145)
Hierarchies are considered, it seems, as something given, part of the natural or god-given order of things. Fairness-based morality may be prevalent within this given order.
When we look at such arrangements from outside, we do not consider each interaction in particular or ask whether it is fair. What we consider is the “basic structure” (Rawls Reference Rawls1971), and we typically judge it unfair. When we live inside a society, however, we rarely if ever focus on its basic structure. We take it for granted and evaluate social interactions within it. The individual behavior of aristocrats, slave owners, or members of a high caste is judged more or less fair, rather than automatically considered unfair.
Still, there are circumstances when people look at their own institutions with a fresh eye, as did the American, the French, and the Russian revolutionaries, and then they often question their fairness. Moreover, a range of empirical work suggests that, even in the daily life of traditional societies, women do occasionally revolt against men, the poor against the wealthy, and the young against the elders (e.g., Abu-Lughod Reference Abu-Lughod1986; Neff Reference Neff1997; Turiel Reference Turiel2002). In other words, hierarchies are to some extent protected from moral evaluation. When, however, they are so evaluated, and when moral issues arise within hierarchical societies, the morality involved is grounded in considerations of fairness.
An alternative approach to the norms that regulate behavior in hierarchies is to claim that, to a large extent, they are grounded not in a sense of fairness but in an evolved sense of hierarchy (that has counterparts among other primates). Even if hierarchy is not something that all humans accept, it is something that they all intuitively understand from infancy (Mascaro & Csibra Reference Mascaro and Csibra2012) and that, when they accept it, they view it as a source of authority and legitimacy in its own right. Hence, there are rights and duties that follow from hierarchical relationships and are not grounded in fairness. In some societies, they permeate all of social life. They often take precedence over the consideration of fairness. On a narrower view of fairness-based morality, this means that these norms of hierarchy are not intrinsically moral.
R4.4. The morality of purity
In a famous study, Haidt et al. (Reference Haidt, Koller and Dias1993) showed that a majority of participants in Brazil and the United States found objectionable the behavior of a man who buys a dead chicken in the supermarket and has sexual intercourse with it before cooking and eating it, even though they agreed that no one was harmed by this behavior.
At first blush, as many commentaries suggest (Ramlakhan & Brook; Graham; Rochat & Robbins; Machery & Stich), sex with the dead chicken seems to refute the idea that, to be morally condemned, an action should inflict harm on someone. However, as Weeden et al. (Reference Weeden, Cohen and Kenrick2008) note, sexual practices and sexual proximity actually do inflict a cost on individuals, men and women, involved in a committed relationship:
For men pursuing these strategies, the basic bargain is that they are agreeing to high levels of investment in wives and children while foregoing extra-pair mating opportunities. In return, they receive increased paternity assurance and increased within-pair fertility. Given that these men are making high levels of familial investment, their central risk is cuckoldry.
For women pursuing these strategies, the basic bargain is that they are agreeing to provide increased paternity assurance and within-pair fertility while foregoing opportunities to obtain sexier genes for their children. In return, they receive increased male investment. Their central risk is male abandonment, especially when they have higher numbers of young children. (Weeden et al. Reference Weeden, Cohen and Kenrick2008, p. 328)
In this context, those who do not restrain their sexual activities and freely pursue their desires inflict a cost on others. In promoting sexual promiscuity, they render marriage more difficult and threaten this arrangement and the very possibility of monogamous families. In line with this observation, Weeden et al. (Reference Weeden, Cohen and Kenrick2008) show that people's own mating strategies are a very good predictor of their moral opinion not only about sexual issues, but about a range of other practices indirectly related to monogamy, such as: “pornography, divorce, cohabitation, homosexuality, drinking and drug usage (which are transparently associated with promiscuity), and abortion and birth control (which reduce the costs of promiscuity and enhance the ability of small-family strategists to produce well-funded children)” (p. 329; see also Kurzban et al. Reference Kurzban, Dukes and Weeden2010). Again, fairness defines and regulates puritan morality: If one considers strong relationships as mutually advantageous, then promoting sexual promiscuity amounts to enjoying the benefit of living in a well-regulated society (with strong commitment, secure children, etc.) without paying its cost (i.e., restraining one's sexual behavior).
This is, of course, just an example of the way in which the broad approach to morality would seek to show that moral norms that seem unrelated to fairness are, on closer analysis, based on it.
The narrow approach to morality, while not denying that fairness considerations may in some cases play a role in norms of purity, would not assume – in fact, would be skeptical – that it is systematically so. It would be ready to discover that many or most of these norms owe their cultural evolution and their psychological robustness to evolved dispositions linked to disgust and a “prophylactic” function.
R5. Conclusion
Let us, in conclusion, again express our gratitude to all the commentators. Our goal in writing this particular target article was (1) to give a synthesized and detailed outline of the mutualistic approach to morality that we have developed in various places (André & Baumard Reference André and Baumard2011a; Reference André and Baumard2011b; Baumard Reference Baumard2010a; Reference Baumard2011; Baumard et al. 2011; Baumard & Sperber, Reference Baumard, Sperber and Fassin2012; Sperber & Baumard Reference Sperber and Baumard2012), and (2), in so doing, to put right what we see as an imbalance in the current discussion of the evolution of morality where altruistic group selection approaches (the obvious importance of which we of course acknowledge) are often the only locus or only focus of the debate. The present discussion has shown, we hope, that a mutualistic approach can truly contribute to our understanding of morality and enrich the debate.