Introduction
My aims in this commentary are twofold. One is to express my gratitude to Jussim by attempting to add some value to the enriching scholarly discussion he has initiated in his book (Jussim 2012). The other is to entice him, if possible, into addressing a body of research that seems quite relevant to his topic but that he unfortunately neglects.
That research examines identity-protective cognition (Sherman & Cohen 2006). Jussim disclaims interest in “political beliefs and ideologies,” because those, in his view, reflect “moral and philosophical issues,” not matters of “objective social reality” on which “issues of accuracy” in perception arise (p. 9). But what the study of identity-protective cognition shows is that “political beliefs,” “ideologies,” “cultural worldviews,” and so forth are themselves sources of inaccurate perceptions of “objective” facts. Group attachments, according to this work, distort all manner of information processing – from logical inferences to assessments of expertise; from recollection of events to brute sense impressions. These dynamics inform myriad factual conflicts – over the contribution of human activity to global warming, the deterrent efficacy of the death penalty, and the impact of the HPV vaccine on teenage promiscuity, among others (Kahan 2010).
I'll elaborate on why I think this research supplies such fertile ground for engaging Jussim's concerns. Indeed, the prevailing characterization – I'd say mischaracterization – of identity-protective cognition can be used to buttress the charges Jussim makes against the “bounded rationality” paradigm (my words for his target) that animates contemporary decision science. The denigration of reason the field is guilty of here, however, doesn't reflect a mistake about the antagonism between identity-protective cognition and “accuracy.” Instead, it derives from the assumption that forming “accurate perceptions” is the only thing people use their reason for – an offense for which Jussim himself might justly be indicted as a co-conspirator. (Remember, I'm trying to lure him in!)
Identity-protective cognition and accuracy
Identity-protective cognition is a form of motivated reasoning – an unconscious tendency to conform information processing to some goal collateral to accuracy (Kunda 1990). In the case of identity-protective cognition, that goal is protection of one's status within an affinity group whose members share defining cultural commitments. Sometimes (for reasons more likely to originate in misadventure than conscious design) positions on a disputed societal risk become conspicuously identified with membership in competing groups of this sort. In those circumstances, individuals can be expected to attend to information in a manner that promotes beliefs that signal their commitment to the position associated with their group (Kahan 2015b; Sherman & Cohen 2006).
We can sharpen understanding of identity-protective reasoning by relating this style of information processing to a nuts-and-bolts Bayesian one. Bayes's Theorem instructs individuals to revise the strength of their current beliefs (“priors”) by a factor that reflects how much more consistent the new evidence is with that belief being true than with it being false. Conceptually, that factor – the likelihood ratio – is the weight the new information is due. Many cognitive biases (e.g., base rate neglect, which involves ignoring the information in one's “priors”) can be understood to reflect some recurring failure in people's capacity to assess information in this way.
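The updating rule just described is standard Bayes's Theorem in odds form; written out explicitly (the notation is mine, not from the original):

```latex
\underbrace{\frac{\Pr(H \mid E)}{\Pr(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(H)}{\Pr(\lnot H)}}_{\text{prior odds}}
\;\times\;
\underbrace{\frac{\Pr(E \mid H)}{\Pr(E \mid \lnot H)}}_{\text{likelihood ratio}}
```

The likelihood ratio is the “weight” referred to in the text: how much more probable the new evidence E is if the hypothesis H is true than if it is false.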
That's not quite what's going on, though, with identity-protective cognition. The signature of this dynamic isn't so much the failure of people to “update” their priors based on new information, but rather, the role that protecting their identities plays in fixing the likelihood ratio they assign to new information. In effect, when they display identity-protective reasoning, individuals unconsciously adjust the weight they assign to evidence based on its congruency with their group's position (Kahan 2015a). If, for example, they encounter a highly credentialed scientist, they will deem him an “expert” worthy of deference on a particular issue – but only if he is depicted as endorsing the factual claims on which their group's position rests (Fig. 1) (Kahan et al. 2011). Likewise, when shown a video of a political protest, people will report observing violence warranting the demonstrators’ arrest if the demonstrators’ cause was one their group opposes (restricting abortion rights; permitting gays and lesbians to join the military) – but not otherwise (Kahan et al. 2012a).
Figure 1. Identity-protective cognition of scientific expertise. Perceptions of highly credentialed scientists' expertise across various disputed issues were highly conditional on the congruence between the position attributed to the scientists and the subjects' political outlooks. Colored bars reflect 0.95 confidence intervals (N = 1336). Adapted from Kahan et al. (2011).
In fact, Bayes's Theorem doesn't say how to determine the likelihood ratio – only what to do with the resulting factor: multiply one's prior odds by it. But in order for Bayesian information processing to promote accurate beliefs, the criteria used to determine the weight of new information must themselves be calibrated to truth-seeking. What those criteria are might be open to dispute in some instances. But clearly, whose position the evidence supports – ours or theirs? – is never one of them.
The most persuasive demonstrations of identity-protective cognition show that individuals opportunistically alter the weight they assign one and the same piece of evidence based on experimental manipulation of its congruence with their identities. This design is meant to rule out the possibility that disparate priors or pre-treatment exposure to evidence is what's blocking convergence when opposing groups evaluate the same information (Druckman 2012).
But if this is how people assess information outside the lab, then opposing groups will never converge, much less converge on the truth, no matter how much or how compelling the evidence they receive. Or at least they won't so long as the conventional association of positions with loyalty to opposing identity-defining groups remains part of their “objective social reality.”
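This non-convergence claim can be illustrated with a small simulation (an illustrative sketch of my own; the function names, likelihood ratios, and parameter values are assumptions, not taken from the original studies). Two Bayesian agents see the same evidence stream, which objectively favors a hypothesis H. One weights each item by its objective likelihood ratio; the other shrinks the likelihood ratio of identity-incongruent (pro-H) evidence toward 1, exactly the move described in the text:

```python
import math
import random

def simulate(skew, n=200, seed=7):
    """Two agents update on the SAME evidence stream (70% of items
    support H, likelihood ratio 2; the rest oppose it, ratio 1/2).

    skew in [0, 1] controls identity-protective weighting of the
    uncongenial (pro-H) items: 0 = truth-calibrated, 1 = pro-H
    evidence is given no weight at all (its ratio is forced to 1).
    Updating is done in log-odds to avoid floating-point overflow;
    both agents start from a prior of 0.5 (log-odds 0)."""
    rng = random.Random(seed)
    calibrated = protective = 0.0  # log-odds of the shared prior
    for _ in range(n):
        lr = 2.0 if rng.random() < 0.7 else 0.5
        calibrated += math.log(lr)
        # identity-protective move: shrink only the likelihood ratio
        # of uncongenial evidence; congenial items keep full weight
        protective += math.log(lr ** (1 - skew) if lr > 1 else lr)

    def to_prob(log_odds):
        return 1 / (1 + math.exp(-log_odds))

    return to_prob(calibrated), to_prob(protective)

cal, prot = simulate(skew=1.0)  # fully identity-protective weighting
```

With full skew the two agents end up at opposite ends of the probability scale despite identical evidence; with skew set to 0 their beliefs coincide exactly, matching the point that it is the identity-dependent likelihood ratio, not the priors, that blocks convergence.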
Bounded rationality?
Frustration of truth-convergent Bayesian information processing is the thread that binds together the diverse collection of cognitive biases of the bounded-rationality paradigm. Identity-protective cognition, we have seen, frustrates truth-convergent Bayesian information processing. Thus, assimilation of identity-protective reasoning into the paradigm – as has occurred within both behavioral economics (e.g., Sunstein 2006; 2007) and political science (e.g., Lodge & Taber 2013) – seems perfectly understandable.
Understandable, but wrong!
The bounded-rationality paradigm rests on a particular conception of dual-process reasoning. This account distinguishes between an affect-driven, “heuristic” form of information processing, and a conscious, “analytical” one. Both styles – typically referred to as System 1 and System 2, respectively – contribute to successful decision making. But it is the limited capacity of human beings to summon System 2 to override errant System 1 intuitions that generates the grotesque assortment of mental miscues – the “availability effect,” “hindsight bias,” the “conjunction fallacy,” “denominator neglect,” “confirmation bias” – on display in decision science's benighted picture of human reason (Kahneman & Frederick 2005).
It stands to reason, then, that if identity-protective cognition is properly viewed as a member of the bounded-rationality menagerie of biases, it, too, should be most pronounced among people (the great mass of the population) disposed to rely on System 1 information processing. This assumption is commonplace in the work reflecting the bounded-rationality paradigm (e.g., Lilienfeld et al. 2009; Westen et al. 2006).
But actual data are to the contrary. Observational studies consistently find that individuals who score highest on the Cognitive Reflection Test (CRT) and other reliable measures of System 2 reasoning are not less polarized but more so on facts relating to divisive political issues (e.g., Kahan et al. 2012b). Experimental data support the inference that these individuals use their distinctive analytic proficiencies to form identity-congruent assessments of evidence. When assessing quantitative data that predictably trips up those who rely on System 1 processing, individuals disposed to use System 2 are much less likely to miss information that supports their groups’ position. When the evidence contravenes their group's position, these same individuals are better able to explain it away (Kahan et al. 2013).
Indeed, one study that fits this account addresses a matter that Jussim does touch on in passing: the tendency of partisans to form negative impressions of their opposing number (Fig. 2). In the study, subjects selectively credited or dismissed evidence of the validity of the CRT as an “open-mindedness” test depending on whether the subjects were told that individuals who held their political group's position on climate change had scored higher or lower than those who held the opposing view. Already large among individuals of low to modest cognitive reflection, this effect was substantially more pronounced among those who scored the highest on the CRT (Kahan 2013).
Figure 2. “System 2” identity-protective cognition. Subjects' assessment of the evidence of the validity of the Cognitive Reflection Test (CRT) as an “open-mindedness” test was conditional on the congruence of experimentally manipulated information on who scored higher (“climate-change skeptics” or “believers”) with subjects' political identities. This effect was most pronounced among subjects scoring higher on the CRT itself. Derived from multivariate regression. Predictors for “low” and “high” CRT set at 0 and 2, respectively. CIs reflect 0.95 level of confidence (N = 1750). From Kahan (2013).
The tragic conflict of expressive rationality
As indicated, identity-protective reasoning is routinely included in the roster of cognitive mechanisms that evince bounded rationality. But where an information-processing dynamic is consistently shown to be magnified, not constrained, by exactly the types of reasoning proficiencies that counteract the mental pratfalls associated with heuristic information processing, then one should presumably update one's classification of that dynamic as a “cognitive bias.”
In fact, the antagonism between identity-protective cognition and perceptual accuracy is a consequence not of too little rationality but of too much. Nothing an ordinary member of the public does as a consumer, a voter, or a participant in public discourse will have any effect on the risk that climate change poses to her or anyone else. Same for gun control, fracking, and nuclear waste disposal: her actions just don't matter enough to influence collective behavior or policymaking. But given what positions on these issues signify about the sort of person she is, adopting a mistaken stance on one of them in her everyday interactions with other ordinary people could expose her to devastating consequences, both material and psychic. It is perfectly rational under these circumstances to process information in a manner that promotes formation of the beliefs on these issues that express her group allegiances, and to bring all her cognitive resources to bear in doing so.
This account roots identity-protective cognition in the theory of “expressive rationality,” a rival to both the rational actor model in conventional economics and the bounded-rationality paradigm (Anderson 1993). The basic tenet of this account is that individuals derive “expressive utility,” intrinsic and instrumental, from actions that, against the background of social norms, convey their defining group commitments (Akerlof & Kranton 2000). Actions of this sort – like pretty much any other (Peirce 1877) – are reliably enabled by appropriate beliefs. Identity-protective cognition is the style of reasoning for rationally engaging information that is relevant to identity-expressive beliefs, particularly when that information has no other real relevance to an individual's life.
Of course, when everyone uses their reason this way at once, collective welfare suffers. In that case, culturally diverse democratic citizens won't converge, or converge as quickly, on the significance of valid evidence on how to manage societal risks. But that doesn't change the social incentives that make it rational for any individual – and hence every individual – to engage information in this way. Only some collective intervention – one that effectively dispels the conflict between the individual's interest in forming identity-expressive risk perceptions and society's interest in the formation of accurate ones – could (Kahan et al. 2012b; Lessig 1995).
Rationality ≠ accuracy (necessarily)
Like the scholarship Jussim criticizes, the standard view of identity-protective cognition force-fits a species of human perception into the bounded-rationality template. But unlike the larger intellectual project that Jussim attacks, the mistake involved in doing so here does not reflect the field's commitment to denigrating perceptual “accuracy.”
Obviously, it isn't possible to assess the “rationality” of any pattern of information processing unless one understands what the agent processing the information is trying to accomplish. Because forming accurate “factual perceptions” is not the only thing people use information for, a paradigm that motivates empirical researchers to appraise cognition exclusively in relation to that objective will indeed end up painting a distorted picture of human thinking.
But worse, the picture will simply be wrong. The body of science this paradigm generates will fail, in particular, to supply us with the information a pluralistic democratic society needs to manage the forces that pit citizens’ stake in using their reason to know what's known against their stake in using it to be who they are as members of diverse cultural groups (Kahan 2015b).
The dominance of the bounded-rationality paradigm creates this risk. But a counterprogram that seeks to vindicate human rationality by relentlessly defending the “accuracy” of “perceptions” without addressing how individuals use reason to protect their group identities won't remedy the former's defects.