
Lab support for strong reciprocity is weak: Punishing for reputation rather than cooperation

Published online by Cambridge University Press:  31 January 2012

Alex Shaw
Affiliation: Department of Psychology, Yale University, New Haven, CT 06511. Alex.Shaw@yale.edu https://sites.google.com/site/alexshawyale/

Laurie Santos
Affiliation: Department of Psychology, Yale University, New Haven, CT 06511. Laurie.Santos@yale.edu

Abstract

Strong reciprocity is not the only account that can explain costly punishment in the lab; it can also be explained by reputation-based accounts. We discuss these two accounts and suggest what kinds of evidence would support the two different alternatives. We conclude that the current evidence favors a reputation-based account of costly punishment.

Type: Open Peer Commentary

Copyright © Cambridge University Press 2012

Guala reviews the anthropological literature on costly punishment and convincingly argues that little evidence supports the notion that costly punishment is responsible for maintaining cooperation in small-scale societies. We agree with Guala's argument, but feel that it does not go far enough. Indeed, we think Guala could expand this critique to findings from the laboratory as well. In particular, we argue that costly punishment observed in the lab may not support a model of strong reciprocity either.

As the target article nicely reviews, proponents of strong reciprocity often use examples of laboratory-based costly punishment as evidence that cooperation evolved through strong reciprocity. Unfortunately, strong reciprocity is not the only account that can explain costly punishment in these laboratory settings. Another class of views that could account for the laboratory evidence is that of reputation-based models, in which costly punishment is favored by virtue of reputational gains from punishing (Price 2008; Santos et al. 2011). Under this view, individuals punish in order to signal some non-observable underlying quality, such as an understanding of social norms (Fessler & Haley 2003).

Given that both strong reciprocity and reputation-based accounts predict punishment in laboratory economic games, how can we distinguish between these two alternatives empirically? One method is to explore the nuanced predictions that each specific model makes. The major claim of strong reciprocity is that individuals punish in order to increase cooperation, and this claim makes two behavioral predictions. First, people should be especially likely to punish non-cooperators relative to other norm violators. Punishment should thus be directed more often at non-cooperation than at other immoral actions (e.g., infidelity, incest) that are unrelated to cooperation. If individuals punish those who violate other sorts of norms just as severely as those who violate cooperation norms, this would suggest that punishment may not have evolved to promote cooperation specifically. The second prediction of strong reciprocity accounts is that cooperation should be more influenced by punishment performed by human agents than by other types of punishment. To date, many experiments have shown that people increase cooperation when punishment is allowed (Fehr & Gächter 2002), but no studies have shown that people respond more to punishment performed by agents than to any other negative contingency (Thorndike 1927). To test this prediction, researchers would need to set up an experiment in which the probability of having one's payments reduced for defection is held constant, but the punishment comes either from a person or from a computer algorithm. Under strong reciprocity, punishment by human agents should be more likely to increase cooperation than other types of punishment, and this differential influence of punisher type should be more pronounced for punishment of non-cooperation than of other violations.

Reputation-based accounts also make specific predictions that would not be expected under strong reciprocity accounts. Specifically, these accounts predict that people should be especially likely to punish if doing so can improve their reputation with others and should be sensitive to cues related to being observed. Much evidence in the laboratory has confirmed these predictions. There is evidence that people give more to punish defections when their decision will be known by other participants than when their decision to punish will remain anonymous (Kurzban et al. 2007; Piazza & Bering 2008b). Additionally, individuals appear to improve their reputations by appropriately punishing non-cooperators; punishers are seen as more trustworthy and deserving of respect and are actually rewarded monetarily (Barclay 2006; Nelissen 2008). These pieces of evidence favor a reputation-based account.

There is, however, one piece of evidence that at first glance might appear to go against reputation-based accounts: People do still punish when anonymous (Henrich & Fehr 2003). Nonetheless, there is at least one way to explain punishment in anonymous one-shot interactions – people may have mechanisms for punishing others to improve their reputation, and this psychology may misfire in economic games, causing anonymous participants to still punish at low rates (Price 2008). The target article dismisses this misfiring account of punishment because people still take costs to punish even when they self-report that they understand the one-shot nature of the interaction, give less in one-shot dilemmas than they do in sequential situations, and continue to give even after repeated trials.

This is a point where we disagree with the target article, as we feel that a misfiring explanation can account for participants' behavior. To better understand this view, consider an analogous argument in a different domain, that of mating strategies. Teenage boys often take costs to buy pornography, even though they would surely self-report that they understand that they cannot reproduce with the attractive centerfold (Hagen & Hammerstein 2006). This mating "misfiring" phenomenon is analogous to the performance of participants who punish at cost even though they report understanding the relevant aspects of anonymity. In the same way that pornography tricks men's well-designed mating psychology by providing images of attractive women that could provide a great mating opportunity in the real world, punishment studies may trick people's well-designed reputation psychology by providing clear norm violations that could provide a great reputation-building opportunity if they happened in the real world. In both cases, people can tell the difference between the artificial (one-shot interaction/pornography) and the real thing (repeated interactions/real women), yet they still respond to the artificial stimulus and do so even after repeated trials. In the case of mating "misfiring" we don't demand a new psychological explanation (e.g., strong eroticism; Tooby et al. 2009), so it isn't clear that we should in the punishment case either.

A true understanding of the mechanisms underlying punishment in laboratory studies of cooperation will involve moving away from a focus on whether individuals take costs to punish others and toward investigating what cues influence one's willingness to punish. This focus on specific predictions in the lab, together with the target article's focus on more naturalistic investigation, will move the field toward a better understanding of how punitive sentiments function in people's minds and in human society as a whole.

References

Barclay, P. (2006) Reputational benefits for altruistic punishment. Evolution and Human Behavior 27:325–44.
Fehr, E. & Gächter, S. (2002) Altruistic punishment in humans. Nature 415(6868):137–40. Available at: http://www.nature.com/nature/journal/v415/n6868/abs/415137.
Fessler, D. M. T. & Haley, K. (2003) The strategy of affect: Emotions in human cooperation. In: Genetic and cultural evolution of cooperation, ed. Hammerstein, P., pp. 7–36. MIT Press.
Hagen, E. H. & Hammerstein, P. (2006) Game theory and human evolution: A critique of some recent interpretations of experimental games. Theoretical Population Biology 69:339–48. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0040580905001668.
Henrich, J. & Fehr, E. (2003) Is strong reciprocity a maladaptation? On the evolutionary foundations of human altruism. In: Genetic and cultural evolution of cooperation, ed. Hammerstein, P., pp. 55–82. MIT Press.
Kurzban, R., DeScioli, P. & O'Brien, E. (2007) Audience effects on moralistic punishment. Evolution and Human Behavior 28:75–84.
Nelissen, R. M. A. (2008) The price you pay: Cost-dependent reputation effects of altruistic punishment. Evolution and Human Behavior 29:242–48.
Piazza, J. & Bering, J. M. (2008b) The effects of perceived anonymity on altruistic punishment. Evolutionary Psychology 6:487–501.
Price, M. E. (2008) The resurrection of group selection as a theory of human cooperation. Social Justice Research 21:228–40.
Santos, M. D., Rankin, D. J. & Wedekind, C. (2011) The evolution of punishment through reputation. Proceedings of the Royal Society B: Biological Sciences 278:371–77.
Thorndike, E. L. (1927) The law of effect. American Journal of Psychology 39:212–22.
Tooby, J., Krasnow, M., Delton, A. & Cosmides, L. (2009) I will only know that our interaction was one-shot if I kill you: A cue theoretic approach to the architecture of cooperation. Talk presented at the Human Behavior and Evolution Society Meeting, California State University, Fullerton, CA, May 2009.