
Why would we expect the mind to work that way? The fitness costs to inaccurate beliefs

Published online by Cambridge University Press:  22 March 2017

Jesse Marczyk*
Affiliation:
Department of Psychology, New Mexico State University, Las Cruces, NM 88003. jmarczyk@nmsu.edu http://Popsych.org

Abstract

An adaptationist analysis of beliefs yields the prediction that the cognitive systems which generate them – stereotypes or otherwise – ought, for the most part, to be accurate. There are, however, some limited situations in which advertising inaccurate beliefs to others might be adaptive.

Open Peer Commentary

Copyright © Cambridge University Press 2017

It is an unfortunate state of affairs in psychology that adaptationist analyses have been, and still are, so seldom undertaken (Tinbergen 1963), as evolutionary theory can provide a framework for guiding and understanding empirical research (Tooby & Cosmides 1992). Indeed, no such analysis is presented in Jussim's (2012) recent book, suggesting to me that such analyses are also likely missing from the empirical literature on stereotypes more generally. Though such an analysis will inevitably be only cursory, I will evaluate the main premise of the book – that beliefs about groups tend to be more accurate than error-prone – in an adaptive light.

The first point to consider is whether we should expect the mind to contain cognitive mechanisms adapted for conforming to others' beliefs about us – mechanisms which take the beliefs others hold about you as inputs and transform them into corresponding behavioral outputs. That is, if you believe I am aggressive, will I become more aggressive in turn? The answer should be an unequivocal "no." We should not expect the beliefs of others per se to affect our behavior because the believer and the target do not share the same set of adaptive interests. If it is not in my interest to behave aggressively – perhaps owing to an inability to win physical conflicts – allowing the beliefs of others to push my behavior in that direction would be maladaptive for me. Similarly, if others believe I am unintelligent, shy, untrustworthy, or emotionally cold, it might not be in my interest to conform to their beliefs and adopt the associated behaviors. Indeed, if I were so manipulable, others would likely adopt a host of strategically unflattering beliefs about me so as to remove me from direct competition in the social world.

Conversely, if I had the potential to be more intelligent, friendly, or outgoing, and it were adaptive to possess such traits, it would be strange indeed were I to wait for someone else's belief before realizing that potential. To see why such explanations fail, one can try applying them to any non-human species and quickly find that they sound silly (e.g., "the deer showed evidence of poor long-term recall because the other deer did not believe him to be intelligent"). On that basic level, then, we should not expect the beliefs of others per se to shape our behavior in such a fashion.

Given that we should not expect other people's beliefs about us to have much effect on our behavior, we arrive at the second point: there are often costs to holding inaccurate beliefs. If you believe me to be aggressive when I am not, you have made an error in perception, likely missing out on the potential benefits of cooperation or enduring the costs of needless aggression against me. Conversely, if you believe me to be a valuable and kind social asset with untapped potential when I am actually unable to live up to those lofty expectations, you will likely make poor social investment decisions, spending more effort on a friendship with me than would otherwise be adaptive. As social budgets are limited, cognitive systems which make such mistakes should be selected against over time in the presence of more accurate alternatives.

We should also expect that our stereotypes about groups will not be applied inflexibly to individual members, just as our beliefs about individuals should not be held inflexibly over time. If I believe you to be kind when you actually are not, stubbornly refusing to update that belief in the face of your many unkind behaviors towards me would, again, be costly: I would be pursuing less profitable relationships than I otherwise might. Cognitive mechanisms that fail to respond to new information would be at a selective disadvantage, relative to ones that did, all else being equal. The same logic applies at the level of beliefs about groups. If I believe members of a group to be strong fighters on average, I may back down from fights with members of that group that I could otherwise win.

It goes without saying that people will never achieve perfect accuracy in their beliefs about others; accuracy takes time and effort to obtain, and at some point there will likely be diminishing returns on learning more about someone relative to the effort it takes. Nevertheless, while errors in perception are expected because of such constraints, the cognitive systems themselves should be ones that attempt to deliver accurate estimations of other people's probable behavior (Kurzban 2012). In short, we should expect mechanisms which, for the most part, produce accurate stereotypes. According to the data presented by Jussim (2012), this is precisely what we find.

As a brief aside, it is also worth noting that not all stereotypes which are globally inaccurate need to be locally inaccurate. If, for instance, I believed that members of group X are hostile and they actually are consistently hostile to me, I would not necessarily be displaying any kind of adaptively meaningful inaccuracy in my belief even if group X is typically kind to others. My belief, while incorrect on the whole, is still as meaningfully accurate as it could be, given the information I possess. As such, not all evidence of stereotype inaccuracy need necessarily reflect error-prone cognitive mechanisms.

While we should expect accuracy in stereotypes (and beliefs more generally) when it comes to guiding one's own behavior, there are some limited cases in which we might expect people to hold, or at least voice, incorrect beliefs: the realm of persuasion (Mercier & Sperber 2011). If I am interested in getting you to like me, for example, that goal might be better served if you believe me to be kinder, more intelligent, and so on, than I actually am. It might therefore serve my interests to strategically misrepresent certain features of myself or my future prospects to others. However, because it is maladaptive for others to believe incorrect things about me, the scope of such persuasion is likely to be relatively meager (for an example of this dynamic, see Perilloux et al. 2015).

A second case in which inaccurate beliefs might be adaptive is when the beliefs serve a signaling function (Zahavi 1975). Indeed, Jussim (2012) makes a similar point in his opening chapter on stereotypes, noting that social benefits can accrue to people who hold certain sets of beliefs. In such contexts, the truth of a belief does not stop mattering, but there are benefits one might reap on grounds other than accuracy, such as signaling one's value as an associate to others.

As a relevant example, academic faculty at most universities exist in rather politically liberal and progressive environments, and in such environments certain social beliefs are more acceptable to hold than others. Accordingly, while it might be costly in general to hold inaccurate beliefs, there are particular scenarios in which holding an accurate belief can be even worse (such as when one is morally condemned for believing in biological differences between men and women, or between ethnic groups). That said, given the modular view of the mind (Tooby & Cosmides 1992), it is perfectly possible for both beliefs to be present in the same brain: one cognitive mechanism generating as accurate a representation of reality as it can, which guides one's own behavior, while another generates an inaccurate representation to broadcast to others. To put this logic in a concrete example, it might behoove academics to publicly profess that gender and racial stereotypes are untrue while nevertheless guiding their own behavior by those stereotypes. This may be why researchers find the counterintuitive result that academics appear to display the same patterns of discrimination regardless of their own race or sex (Milkman et al. 2015; Moss-Racusin et al. 2012).

Accuracy in perception – or at least as close as we can approximate it with limited time and energy – should be the expected order of the day in psychology. While there are some scenarios in which other people believing inaccurate things can be adaptively useful, perceiving the world accurately yourself is valuable for guiding your own behavior and avoiding the associated costs of inaccuracy.

References

Jussim, L. (2012) Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. Oxford University Press.
Kurzban, R. (2012) On the advantages of being wrong. BioScience 62:516–17.
Mercier, H. & Sperber, D. (2011) Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34(2):57–111.
Milkman, K., Akinola, M. & Chugh, D. (2015) What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations. Journal of Applied Psychology 100:1678–712.
Moss-Racusin, C., Dovidio, J., Brescoll, V., Graham, M. & Handelsman, J. (2012) Science faculty's subtle gender biases favor male students. Proceedings of the National Academy of Sciences USA 109:16474–79.
Perilloux, C., Munoz-Reyes, J., Turiegano, E., Kurzban, R. & Pita, M. (2015) Do (non-American) men overestimate women's sexual intentions? Evolutionary Psychological Science 1:150–54.
Tinbergen, N. (1963) On aims and methods of ethology. Zeitschrift für Tierpsychologie 20:410–33.
Tooby, J. & Cosmides, L. (1992) The psychological foundations of culture. In: The adapted mind: Evolutionary psychology and the generation of culture, ed. Barkow, J., Cosmides, L. & Tooby, J., pp. 19–136. Oxford University Press.
Zahavi, A. (1975) Mate selection – A selection for a handicap. Journal of Theoretical Biology 53(1):205–14.