
The expressive rationality of inaccurate perceptions

Published online by Cambridge University Press: 22 March 2017

Dan M. Kahan*
Affiliation:
Yale Law School, P.O. Box 208215, New Haven, CT 06510. dan.kahan@yale.edu; www.culturalcognition.net/kahan

Abstract

This commentary uses the dynamic of identity-protective cognition to pose a friendly challenge to Jussim (2012). Like other forms of information processing, this one is too readily characterized as a bias. It is no mistake, however, to view identity-protective cognition as generating inaccurate perceptions. The “bounded rationality” paradigm incorrectly equates rationality with forming accurate beliefs. But so does Jussim's critique.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

Introduction

My aims in this commentary are twofold. One is to express my gratitude to Jussim by attempting to add some value to the enriching scholarly discussion he has initiated in his book (Jussim 2012). The other is to entice him, if possible, into addressing a body of research that seems quite relevant to his topic but that he unfortunately neglects.

That research examines identity-protective cognition (Sherman & Cohen 2006). Jussim disclaims interest in “political beliefs and ideologies,” because those, in his view, reflect “moral and philosophical issues,” not matters of “objective social reality” on which “issues of accuracy” in perception arise (p. 9). But what the study of identity-protective cognition shows is that “political beliefs,” “ideologies,” “cultural worldviews,” and so forth are themselves sources of inaccurate perceptions of “objective” facts. Group attachments, according to this work, distort all manner of information processing – from logical inferences to assessments of expertise; from recollection of events to brute sense impressions. These dynamics inform myriad factual conflicts – over the contribution of human activity to global warming, the deterrent efficacy of the death penalty, and the impact of the HPV vaccine on teenage promiscuity, among others (Kahan 2010).

I'll elaborate on why I think this research supplies such fertile ground for engaging Jussim's concerns. Indeed, the prevailing characterization – I'd say mischaracterization – of identity-protective cognition can be used to buttress the charges Jussim makes against the “bounded rationality” paradigm (my words for his target) that animates contemporary decision science. The denigration of reason the field is guilty of here, however, doesn't reflect a mistake about the antagonism between identity-protective cognition and “accuracy.” Instead, it derives from the assumption that forming “accurate perceptions” is the only thing people use their reason for – an offense for which Jussim himself might justly be indicted as a co-conspirator. (Remember, I'm trying to lure him in!)

Identity-protective cognition and accuracy

Identity-protective cognition is a form of motivated reasoning – an unconscious tendency to conform information processing to some goal collateral to accuracy (Kunda 1990). In the case of identity-protective cognition, that goal is protection of one's status within an affinity group whose members share defining cultural commitments. Sometimes (for reasons more likely to originate in misadventure than in conscious design) positions on a disputed societal risk become conspicuously identified with membership in competing groups of this sort. In those circumstances, individuals can be expected to attend to information in a manner that promotes beliefs that signal their commitment to the position associated with their group (Kahan 2015b; Sherman & Cohen 2006).

We can sharpen understanding of identity-protective reasoning by relating this style of information processing to a nuts-and-bolts Bayesian one. Bayes's Theorem instructs individuals to revise the strength of their current beliefs (“priors”) by a factor that reflects how much more consistent the new evidence is with that belief being true than with it being false. Conceptually, that factor – the likelihood ratio – is the weight the new information is due. Many cognitive biases (e.g., base rate neglect, which involves ignoring the information in one's “priors”) can be understood to reflect some recurring failure in people's capacity to assess information in this way.
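
In odds form (a standard formulation, included here only to make the likelihood ratio's role explicit), the theorem reads:

```latex
% Odds form of Bayes's Theorem: the likelihood ratio is the factor by
% which new evidence E shifts the odds on hypothesis H.
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
    \times
    \underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
```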

That's not quite what's going on, though, with identity-protective cognition. The signature of this dynamic isn't so much the failure of people to “update” their priors based on new information, but rather the role that protecting their identities plays in fixing the likelihood ratio they assign to new information. In effect, when they display identity-protective reasoning, individuals unconsciously adjust the weight they assign to evidence based on its congruency with their group's position (Kahan 2015a). If, for example, they encounter a highly credentialed scientist, they will deem him an “expert” worthy of deference on a particular issue – but only if he is depicted as endorsing the factual claims on which their group's position rests (Fig. 1) (Kahan et al. 2011). Likewise, when shown a video of a political protest, people will report observing violence warranting the demonstrators' arrest if the demonstrators' cause was one their group opposes (restricting abortion rights; permitting gays and lesbians to join the military) – but not otherwise (Kahan et al. 2012a).
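
A minimal computational sketch may help fix ideas; the function names and the congruence_weight parameter below are illustrative assumptions of mine, not quantities estimated in any of the studies cited:

```python
# Illustrative sketch of identity-protective Bayesian updating.
# All names and parameter values are hypothetical, for exposition only.

def likelihood_ratio(evidence_supports_group: bool,
                     congruence_weight: float = 3.0) -> float:
    """Weight assigned to new evidence. A truth-seeking agent would derive
    this from the diagnosticity of the evidence alone; an identity-protective
    agent inflates it for group-congruent evidence and deflates it otherwise."""
    return congruence_weight if evidence_supports_group else 1.0 / congruence_weight

def update(prior: float, evidence_supports_group: bool) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * likelihood_ratio(evidence_supports_group)
    return odds / (1.0 + odds)

# Two agents with identical 50/50 priors see the *same* evidence but assign
# it opposite weights, because congruence, not diagnosticity, fixes the LR.
print(update(0.5, evidence_supports_group=True))   # -> 0.75
print(update(0.5, evidence_supports_group=False))  # -> ~0.25
```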

Figure 1. Identity-protective cognition of scientific expertise. Perceptions of highly credentialed scientists' expertise across various disputed issues were highly conditional on the congruence between the position attributed to the scientists and the subjects' political outlooks. Colored bars reflect 0.95 confidence intervals (N = 1336). Adapted from Kahan et al. (2011).

In fact, Bayes's Theorem doesn't say how to determine the likelihood ratio – only what to do with the resulting factor: multiply one's prior odds by it. But in order for Bayesian information processing to promote accurate beliefs, the criteria used to determine the weight of new information must themselves be calibrated to truth-seeking. What those criteria are might be open to dispute in some instances. But clearly, whose position the evidence supports – ours or theirs? – is never one of them.

The most persuasive demonstrations of identity-protective cognition show that individuals opportunistically alter the weight they assign one and the same piece of evidence based on experimental manipulation of its congruence with their identities. This design is meant to rule out the possibility that disparate priors or pre-treatment exposure to evidence is what's blocking convergence when opposing groups evaluate the same information (Druckman 2012).

But if this is how people assess information outside the lab, then opposing groups will never converge, much less converge on the truth, no matter how much or how compelling the evidence they receive. Or at least they won't so long as the conventional association of positions with loyalty to opposing identity-defining groups remains part of their “objective social reality.”
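
A toy simulation, under the same illustrative assumptions as the sketch above (the parameter values are hypothetical, chosen only to dramatize the argument), makes the point concrete: two groups receive an identical stream of evidence that genuinely favors a hypothesis H, yet congruence-fixed likelihood ratios drive them to opposite certainties, while a truth-calibrated agent converges:

```python
# Toy non-convergence simulation; all parameter values are hypothetical.

def update(prior: float, lr: float) -> float:
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

TRUE_LR = 2.0       # each signal genuinely favors H two to one
CONGRUENT_LR = 3.0  # weight assigned when evidence fits the group's position

belief_pro = belief_con = truth_seeker = 0.5
for _ in range(20):  # twenty identical, H-favoring pieces of evidence
    belief_pro = update(belief_pro, CONGRUENT_LR)        # credited
    belief_con = update(belief_con, 1.0 / CONGRUENT_LR)  # explained away
    truth_seeker = update(truth_seeker, TRUE_LR)         # truth-calibrated

# The truth-calibrated agent approaches certainty in H; the identity-
# protective groups end up maximally polarized on exactly the same data.
print(round(belief_pro, 4), round(belief_con, 4), round(truth_seeker, 4))
# -> 1.0 0.0 1.0
```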

Bounded rationality?

Frustration of truth-convergent Bayesian information processing is the thread that binds together the diverse collection of cognitive biases of the bounded-rationality paradigm. Identity-protective cognition, we have seen, frustrates truth-convergent Bayesian information processing. Thus, assimilation of identity-protective reasoning into the paradigm – as has occurred within both behavioral economics (e.g., Sunstein 2006; 2007) and political science (e.g., Lodge & Taber 2013) – seems perfectly understandable.

Understandable, but wrong!

The bounded-rationality paradigm rests on a particular conception of dual-process reasoning. This account distinguishes between an affect-driven, “heuristic” form of information processing and a conscious, “analytical” one. Both styles – typically referred to as System 1 and System 2, respectively – contribute to successful decision making. But it is the limited capacity of human beings to summon System 2 to override errant System 1 intuitions that generates the grotesque assortment of mental miscues – the “availability effect,” “hindsight bias,” the “conjunction fallacy,” “denominator neglect,” “confirmation bias” – on display in decision science's benighted picture of human reason (Kahneman & Frederick 2005).

It stands to reason, then, that if identity-protective cognition is properly viewed as a member of the bounded-rationality menagerie of biases, it, too, should be most pronounced among people (the great mass of the population) disposed to rely on System 1 information processing. This assumption is commonplace in the work reflecting the bounded-rationality paradigm (e.g., Lilienfeld et al. 2009; Westen et al. 2006).

But actual data are to the contrary. Observational studies consistently find that individuals who score highest on the Cognitive Reflection Test (CRT) and other reliable measures of System 2 reasoning are not less polarized but more so on facts relating to divisive political issues (e.g., Kahan et al. 2012b). Experimental data support the inference that these individuals use their distinctive analytic proficiencies to form identity-congruent assessments of evidence. When assessing quantitative data that predictably trip up those who rely on System 1 processing, individuals disposed to use System 2 are much less likely to miss information that supports their group's position. When the evidence contravenes their group's position, these same individuals are better able to explain it away (Kahan et al. 2013).

Indeed, one study that fits this account addresses a matter that Jussim does touch on in passing: the tendency of partisans to form negative impressions of their opposite numbers (Fig. 2). In the study, subjects selectively credited or dismissed evidence of the validity of the CRT as an “open-mindedness” test depending on whether they were told that individuals who held their political group's position on climate change had scored higher or lower than those who held the opposing view. Already large among individuals of low to modest cognitive reflection, this effect was substantially more pronounced among those who scored highest on the CRT (Kahan 2013).

Figure 2. “System 2” identity-protective cognition. Subjects' assessment of the evidence of the validity of the Cognitive Reflection Test (CRT) as an “open-mindedness” test was conditional on the congruence of experimentally manipulated information on who scored higher – “climate-change skeptics” or “believers” – with subjects' political identities. This effect was most pronounced among subjects scoring higher on the CRT itself. Derived from multivariate regression; predictors for “low” and “high” CRT set at 0 and 2, respectively. CIs reflect 0.95 level of confidence (N = 1750). From Kahan (2013).

The tragic conflict of expressive rationality

As indicated, identity-protective reasoning is routinely included in the roster of cognitive mechanisms that evince bounded rationality. But where an information-processing dynamic is consistently shown to be magnified, not constrained, by exactly the types of reasoning proficiencies that counteract the mental pratfalls associated with heuristic information processing, one should presumably update one's classification of that dynamic as a “cognitive bias.”

In fact, the antagonism between identity-protective cognition and perceptual accuracy is a consequence not of too little rationality but of too much. Nothing an ordinary member of the public does as a consumer, as a voter, or as a participant in public discourse will have any effect on the risk that climate change poses to her or anyone else. The same goes for gun control, fracking, and nuclear waste disposal: her actions just don't matter enough to influence collective behavior or policymaking. But given what positions on these issues signify about the sort of person she is, adopting a mistaken stance on one of them in her everyday interactions with other ordinary people could expose her to devastating consequences, both material and psychic. It is perfectly rational under these circumstances to process information in a manner that promotes formation of the beliefs on these issues that express her group allegiances, and to bring all her cognitive resources to bear in doing so.
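
Schematically (the notation here is mine, offered only to make the trade-off explicit, not drawn from Kahan's account):

```latex
% Schematic payoff comparison; notation is illustrative.
% p: probability that one person's stance alters the collective outcome
% \Delta W: her welfare stake in that outcome
% U_{id}: expressive/status payoff of holding the group-congruent belief
EU(\text{group-congruent belief}) - EU(\text{accurate belief})
  \;\approx\; U_{\mathrm{id}} - p\,\Delta W \;>\; 0
  \qquad \text{because } p \approx 0
```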

This account roots identity-protective cognition in the theory of “expressive rationality,” a rival to both the rational-actor model in conventional economics and the bounded-rationality paradigm (Anderson 1993). The basic tenet of this account is that individuals derive “expressive utility,” intrinsic and instrumental, from actions that, against the background of social norms, convey their defining group commitments (Akerlof & Kranton 2000). Actions of this sort – like pretty much any other (Peirce 1877) – are reliably enabled by appropriate beliefs. Identity-protective cognition is the style of reasoning for rationally engaging information that is relevant to identity-expressive beliefs, particularly when that information has no other real relevance to an individual's life.

Of course, when everyone uses their reason this way at once, collective welfare suffers. In that case, culturally diverse democratic citizens won't converge, or won't converge as quickly, on the significance of valid evidence on how to manage societal risks. But that doesn't change the social incentives that make it rational for any individual – and hence every individual – to engage information in this way. Only some collective intervention – one that effectively dispels the conflict between the individual's interest in forming identity-expressive risk perceptions and society's interest in the formation of accurate ones – could do so (Kahan et al. 2012b; Lessig 1995).

Rationality ≠ accuracy (necessarily)

Like the scholarship Jussim criticizes, the standard view of identity-protective cognition force fits a species of human perception into the bounded-rationality template. But unlike the larger intellectual project that Jussim attacks, the mistake that doing so involves here does not reflect the field's commitment to denigrating perceptual “accuracy.”

Obviously, it isn't possible to assess the “rationality” of any pattern of information processing unless one understands what the agent processing the information is trying to accomplish. Because forming accurate “factual perceptions” is not the only thing people use information for, a paradigm that motivates empirical researchers to appraise cognition exclusively in relation to that objective will indeed end up painting a distorted picture of human thinking.

But worse, the picture will simply be wrong. The body of science this paradigm generates will fail, in particular, to supply a pluralistic democratic society with the information it needs to manage the forces that pit citizens' stake in using their reason to know what's known against their stake in using it to be who they are as members of diverse cultural groups (Kahan 2015b).

The dominance of the bounded-rationality paradigm creates this risk. But a counterprogram that seeks to vindicate human rationality by relentlessly defending the “accuracy” of “perceptions” without addressing how individuals use reason to protect their group identities won't remedy the former's defects.

References

Akerlof, G. A. & Kranton, R. E. (2000) Economics and identity. Quarterly Journal of Economics 115(3):715–53.
Anderson, E. (1993) Value in ethics and economics. Harvard University Press.
Druckman, J. N. (2012) The politics of motivation. Critical Review 24(2):199–216.
Jussim, L. (2012) Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. Oxford University Press.
Kahan, D. M. (2010) Fixing the communications failure. Nature 463:296–97.
Kahan, D. M. (2013) Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8:407–24.
Kahan, D. M. (2015a) Laws of cognition and the cognition of law. Cognition 135:56–60.
Kahan, D. M. (2015b) What is the “science of science communication”? Journal of Science Communication 14(3):1–12.
Kahan, D. M., Hoffman, D. A., Braman, D., Evans, D. & Rachlinski, J. J. (2012a) They saw a protest: Cognitive illiberalism and the speech-conduct distinction. Stanford Law Review 64:851–906.
Kahan, D. M., Jenkins-Smith, H. & Braman, D. (2011) Cultural cognition of scientific consensus. Journal of Risk Research 14:147–74.
Kahan, D. M., Peters, E., Dawson, E. & Slovic, P. (2013) Motivated numeracy and enlightened self-government. Cultural Cognition Project Working Paper No. 116. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D. & Mandel, G. (2012b) The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2:732–35.
Kahneman, D. & Frederick, S. (2005) A model of heuristic judgment. In: The Cambridge handbook of thinking and reasoning, ed. Holyoak, K. J. & Morrison, R. G., pp. 267–93. Cambridge University Press.
Kunda, Z. (1990) The case for motivated reasoning. Psychological Bulletin 108:480–98.
Lessig, L. (1995) The regulation of social meaning. The University of Chicago Law Review 62:943–1045.
Lilienfeld, S. O., Ammirati, R. & Landfield, K. (2009) Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science 4(4):390–98.
Lodge, M. & Taber, C. S. (2013) The rationalizing voter. Cambridge University Press.
Peirce, C. S. (1877) The fixation of belief. Popular Science Monthly 12:1–15.
Sherman, D. K. & Cohen, G. L. (2006) The psychology of self-defense: Self-affirmation theory. In: Advances in experimental social psychology, vol. 38, ed. Zanna, M. P., pp. 183–242. Academic Press.
Sunstein, C. R. (2006) Misfearing: A reply. Harvard Law Review 119(4):1110–25.
Sunstein, C. R. (2007) On the divergent American reactions to terrorism and climate change. Columbia Law Review 107:503–57.
Westen, D., Blagov, P. S., Harenski, K., Kilts, C. & Hamann, S. (2006) Neural bases of motivated reasoning: An fMRI study of emotional constraints on partisan political judgment in the 2004 U.S. Presidential election. Journal of Cognitive Neuroscience 18(11):1947–58.