1.
Empirical work on motivated reasoning suggests that our judgments are influenced to a surprising extent by our wants, desires, and preferences (Kahan 2016; Lord, Ross, and Lepper 1979; Molden and Higgins 2012; Taber and Lodge 2006). In this article, we address the import of motivated reasoning for the epistemic statuses of beliefs that are impacted by it. Our central questions are: Are such beliefs epistemically justified? Are they candidates for knowledge? In liberal democracies, these questions are increasingly controversial as well as politically timely (Beebe et al. 2018; Lynch forthcoming, 2018; Slothuus and de Vreese 2010). And yet, the epistemological significance of motivated reasoning has been almost entirely ignored by those working in mainstream epistemology.[1] We aim to rectify this oversight.
Our general aim is to provide some reasons for thinking that politically motivated reasoning has negative import for the epistemic statuses of beliefs formed in part through such reasoning. We submit that there is some initial pressure to think that beliefs formed through politically motivated reasoning are (epistemically) unjustified. If our assessments of arguments and evidence are partly dictated by our political convictions, then how could these assessments be epistemically justified? Our aim in this paper is to make this initial thought more precise, and to argue that it is more than just initially plausible. In fact, as we’ll show, the widespread presence of motivated reasoning in the political domain should lead us to be much less confident in our political beliefs, and for reasons that have not been adequately appreciated by epistemologists.
Of course, one very natural strategy would be to look at whether politically motivated reasoning is a reliable way of going about forming beliefs. There is empirical evidence which suggests that it isn’t, in that it leads many of us to form beliefs about scientific topics that conflict with the scientific consensus (Kahan, Jenkins-Smith, and Braman 2011). If this is right, then empirical work on politically motivated reasoning poses a direct skeptical challenge: knowledge requires reliable belief formation, so if beliefs formed via politically motivated reasoning are unreliable, they can’t be known.[2]
While this line of argument represents one straightforward way in which politically motivated reasoning might give us pause, we want to explore another: politically motivated reasoning should give us pause even if it doesn’t lead us astray, but rather fortuitously “guides” us toward the truth. Maybe some beliefs formed through politically motivated reasoning are unjustified even if they generally tend to be true. We are going to argue that we should take this possibility seriously.[3]
We will start with an overview of work on motivated reasoning. We will then develop three skeptical challenges that this work poses for certain of our beliefs. Finally, we will taxonomize these skeptical challenges according to their “level” of skeptical power, and show how motivated reasoning has some important ramifications for the way we think about the demands of intellectual humility.
2.
Manifestations of motivated reasoning can be distinguished in terms of the motivations they serve (Molden and Higgins 2012). There is a large body of work that documents the ways in which our desire to maintain a positive self-conception leads us to give more credence to information that confirms our perception of ourselves as kind, competent, and healthy than to information that challenges these perceptions. For example, Kunda (1987) reports a study in which subjects read an article which stated that caffeine consumption causes serious health problems in women but not in men. Women who were heavy caffeine consumers found the article less convincing than women who were light caffeine consumers.[4]
We are here particularly interested in the impact that our political beliefs and convictions have on our assessment of arguments that pertain to those beliefs and convictions. Call this politically motivated reasoning. For example, Lord, Ross, and Lepper (1979) report a study in which subjects were given the same set of arguments for and against capital punishment, but their assessments of the strength of these arguments correlated with their existing views about the rights and wrongs of capital punishment. Put simply, subjects who were predisposed to object to capital punishment found anti-capital punishment arguments more convincing, whereas those who were predisposed to accept it found pro-capital punishment arguments more convincing. Dan Kahan and his collaborators present a series of studies which show that our political beliefs also impact our assessment of arguments and evidence that pertain to scientific issues with clear political relevance, such as global warming and the safety of nuclear power (Kahan, Jenkins-Smith, and Braman 2011; Kahan 2016, 2014).[5]
Three aspects of work on politically motivated reasoning are worth emphasizing from the outset. First, while we have been talking of politically motivated reasoning in terms of the influence of our background political beliefs on our cognitive processes, it is important to note that the operative notion of “political belief” is very broad indeed. This is perhaps clearest in Dan Kahan and his collaborators’ work on cultural cognition. Kahan calls attention to the fact that politically motivated reasoning is as much driven by our broader cultural backgrounds as by our political beliefs, narrowly construed. The thought is that the motivation or goal that is served by politically motivated reasoning is, broadly speaking, the goal of identity protection—that is, the goal of forming beliefs that protect and maintain our status within a group that defines our identity and whose members are united by a shared set of values. As Kahan puts it:
When positions on some risk or other policy relevant fact have come to assume a widely recognised social meaning as a marker of membership within identity-defining groups, members of those groups can be expected to conform their assessment of all manner of information—from persuasive advocacy to reports of expert opinion; from empirical data to their own brute sense impressions—to the position associated with the respective groups. (Kahan 2016, 1)
While it will do no real harm to simplify things and talk in more narrowly political terms (“right wing” vs. “left wing”), it should be borne in mind that this is a simplification.
Second, the empirical evidence does not suggest that politically motivated reasoning is less prevalent in “politically sophisticated subjects.” In fact, the opposite appears to be true. Taber and Lodge (2006) found that the more politically sophisticated a subject was, the more likely they were to conform their assessment of the quality of the arguments and information they consider to their political beliefs.[6] Taber and Lodge hypothesize that this is because politically motivated reasoning involves two biases: we tend to spend more time and resources finding flaws with arguments that challenge deeply held political beliefs than we spend finding flaws with arguments that support these beliefs, and, when we are free to choose which information to expose ourselves to, we tend to seek out arguments that will confirm deeply held political beliefs rather than arguments that will challenge them. Put simply, politically sophisticated subjects have more information at their disposal, and the more information you have at your disposal, the better you will be at finding flaws in arguments with conclusions you don’t like, and the better you will be at seeking out information that confirms rather than challenges your existing views.[7]
Third, politically motivated reasoning is something that many of us engage in, at least some of the time.[8] This is particularly clear in Kahan and his collaborators’ work. They look at the impact of our political beliefs on the following topics, among others:
Attitudes about global warming
Attitudes about the safety of “burying” nuclear waste underground
Attitudes about the efficacy of “concealed-carry” laws
One would expect laypersons to look for the existence of scientific consensus on these sorts of topics. Kahan, Jenkins-Smith, and Braman (2011) report the following findings:
Seventy-eight percent of those on the “political left” think that most scientists agree that global temperatures are rising, whereas only 19% of those on the “political right” think that most scientists agree. (Fifty-six percent of those on the right think that most scientists are divided and 25% think that most disagree.)
Sixty-eight percent of those on the left think that most scientists agree that humans are causing global warming, whereas only 12% of those on the right think that most scientists agree. (Fifty-five percent of those on the right think that most scientists are divided and 32% think that most disagree.)
Thirty-seven percent of those on the right think that most scientists agree that there are safe methods of geologically isolating (“burying”) nuclear waste, whereas only 20% of those on the left think that most scientists agree. (Thirty-five percent of those on the left think that most scientists disagree and 45% on both sides of the political spectrum think that scientists are divided.)
Forty-seven percent of those on the right think that most scientists agree that permitting adults without criminal records or histories of mental illness to carry concealed handguns in public decreases violent crime, whereas only 10% of those on the left think that most scientists agree. (Forty-seven percent of those on the left think that most scientists disagree and much the same percentage on both sides of the political spectrum think that scientists are divided.)
What does this tell us? We want to highlight one crucial point. It is often claimed that there are important cognitive asymmetries between “liberals” (in the US sense) and “conservatives” (Hodson and Busseri 2012; Kanazawa 2010). It may well be that there are some asymmetries. But these findings suggest that there are also some important symmetries.[9] Both those on the right and those on the left form beliefs about scientific topics in ways that accord with their political convictions. In doing so, they end up at odds with expert scientific opinion, at least as it is represented in “expert consensus reports” from the US National Academy of Sciences (NAS). The NAS has issued reports with the following conclusions:
There is scientific consensus that global warming is real, and human activity is a central cause of it (National Research Council Committee on Analysis of Global Change Assessments 2007).
There is scientific consensus that there are safe methods of geologically isolating (or “burying”) nuclear waste (National Research Council Board on Radioactive Waste Management 1990).
The available evidence does not permit forming a conclusion on the efficacy of concealed-carry laws (National Research Council Committee to Improve Research Information and Data on Firearms 2004).
This suggests that those on the right are out of step with the scientific consensus on global warming, whereas those on the left aren’t. But those on the left are out of step with the scientific consensus on the safety of nuclear waste disposal. And both sides overestimate the extent to which the evidence supports their preferred position on concealed-carry laws. Now, it may be claimed that these findings still support the conclusion that those on the right are more out of step with the scientific consensus than those on the left. That is, politically motivated reasoning is more pronounced (or at least more damaging) amongst those on the right than amongst those on the left. This might be true, but it hardly shows that those on the left don’t also engage in politically motivated reasoning, or that, in doing so, they don’t regularly end up at odds with expert scientific opinion.[10]
This completes our brief overview of work on politically motivated reasoning. We will now turn to its epistemological implications and, in particular, its skeptical import.
3.
Politically motivated reasoning is reasoning that serves a motive other than a desire to arrive at true beliefs (Kahan 2016). If a subject engages in politically motivated reasoning when assessing some evidence or argument, their assessment of that evidence or argument is nontrivially influenced by their background political beliefs.[11] If the evidence or argument causes trouble for those beliefs, they try to reject it, explain it away, or minimize its importance; if the evidence or argument supports those beliefs, they enthusiastically endorse it and exaggerate its importance. The evidence outlined above suggests that when subjects engage in politically motivated reasoning, they are often led astray: they end up forming false beliefs. Consider, for instance, the attitude of many people on the political right toward global warming. Sometimes, though, cultural cognition does not lead us astray, but rather fortuitously “guides” us toward the truth. In such cases, are our beliefs thereby in the clear, epistemically speaking? Perhaps not.[12]
Consider the following pair of cases:
Naïve Student-1: Tim is an impressionable student, one who is inclined to blindly trust socially approved opinions with little to no critical scrutiny. Fortunately for Tim, he is raised in a household where exposure to (and social praise of) sound climate science is abundant, and climate change “denial” is discussed only in a negative light. Tim (like most around him) believes that global temperatures are rising, and that human activity is the cause.
Naïve Student-2: Tim* is an impressionable student, one who is inclined to blindly trust socially approved opinions with little to no critical scrutiny. Unfortunately for Tim*, he is raised in a household where exposure to (and social praise of) sound climate science is rare, and “climate change” is discussed only in a negative light. Tim* (like most around him) believes that it’s not the case that global temperatures are rising.
With the literature on politically motivated reasoning in hand, we can give a plausible psychological explanation of what is going on in both of these cases. Both Tim and Tim* assess any information they receive pertaining to climate science in light of their background political beliefs and broader cultural backgrounds.[13] But, at least in part because these background beliefs are very different, these assessments lead them in opposite directions. Tim ends up believing that global temperatures are rising, and that human activity is the cause. Tim* ends up denying this. We would submit that neither Tim nor Tim* is particularly unusual. While our descriptions of both are undeniably simplistic, many (though of course not all) individuals are, in most essential respects, like Tim or Tim*.
Tim*’s belief in Naïve Student-2 is clearly epistemically defective.[14] But what about Tim’s belief? Interestingly, it might be argued that there is also something amiss with Tim’s belief, and this is so even though: (i) the target belief is true and (ii) Tim possesses excellent evidence for his belief (i.e., the evidence that we may suppose he is exposed to, and which is socially approved in his epistemically friendly environment).
We can start with the distinction, familiar in mainstream epistemology, between propositional and doxastic justification. Roughly, this is the distinction between having good reasons for one’s belief (propositional justification) and properly basing one’s belief on the good reasons one possesses (doxastic justification).[15] In Naïve Student-1, Tim has good reasons for his belief, but it is not at all clear that he is properly basing his belief on the good reasons that he has. Rather, it seems, to the extent that he is basing his beliefs on the good reasons he has, he is doing this only because these beliefs and reasons have the epistemically irrelevant property of being socially approved. And if that’s right, then—as this line of thought goes—Tim, despite possessing propositional justification for his belief that global temperatures are rising, lacks doxastic justification (and ipso facto propositional knowledge) for believing that global temperatures are rising.[16]
If the above line of thought is correct, then an interesting and rather troubling result is that politically motivated reasoning can have epistemically deleterious effects even when it guides one toward the truth, and thus regardless of whether politically motivated reasoning is reliable.[17] But is it correct? In the next two sections we consider two objections to it.
4.
We have argued that Tim’s belief that global temperatures are rising is (doxastically) unjustified because, to the extent that he bases this belief on the good reasons he has, he does this because those reasons have the epistemically irrelevant property of being socially approved. That is, he believes on the basis of these reasons because those around him regard these reasons as being good reasons. One might object that this property is not epistemically irrelevant at all. Tim is utilizing a generally effective strategy for deciding what to believe (and why to believe it): defer to socially recognized authorities. Deferring to recognized authorities is not a bad epistemic strategy. It isn’t going to work in all environments, of course—recognized authorities can be wrong (see Tim*)—but this is just an instance of the more general point that good inferential strategies won’t work well in all possible environments (see Gigerenzer 2000).
We agree that there need not be anything wrong with deferring to recognized authorities, and that good inferential strategies need not work well in all possible environments, but we don’t think this spells trouble for our argument. Let’s start with the first concession (that there need not be anything wrong with deferring to recognized authorities). The crucial question is: What is this deference based on? It is one thing to defer to recognized authorities because you have reason to think they are likely to have the right answers (or, at least, more likely than you). It is another to defer to them because you recognize that you share certain core values or belong to the same social group. We would submit that while the former makes perfect sense from the epistemic point of view, the latter is problematic. The mere fact that you share certain core values with someone surely can’t be a good reason to defer to them, at least on factual matters like global warming.[18]
The literature on politically motivated reasoning suggests that, when we engage in it, our patterns of deference are best explained in terms of a preference for listening to those with whom we share certain core values. Kahan, Jenkins-Smith, and Braman (2011) look at what drives subjects’ assessments of the expertise of (fictional) putative authorities on issues like global warming. They find that these assessments are driven by fit between the views of a putative authority and one’s ideological predispositions. Thus, many conservatives will regard scientists who accept global warming as less authoritative than scientists who deny it, and liberals will regard scientists who accept that there are safe nuclear waste disposal methods as less authoritative than scientists who deny this.
What about the second concession—that good inferential strategies need not work well in all environments? The problem with the strategy of deferring to those with whom you share certain core values isn’t that there are possible environments in which it isn’t very effective in producing true beliefs. The problem is rather that, at least for most of us, it isn’t very effective in the world in which we find ourselves. There may, of course, be people for whom it is effective. But work on politically motivated reasoning suggests that there are a lot of people for whom it isn’t. Take again Dan Kahan’s work on cultural cognition (discussed above). He finds that while liberals tend to have views about global warming that align with the scientific consensus, they also tend to have views about the safety of nuclear power that don’t. This, we think, demonstrates the shortcomings of the strategy of deferring to those with whom you share core values.
5.
The second objection is as follows: In Naïve-Student-1, Tim does properly (enough) base his belief on the reasons he has. To see why, compare Tim with an individual who clearly does not properly base his belief on the reasons he has, Jennifer Lackey’s “racist juror”:
Racist Juror: Martin is a racist juror. Over the course of the trial, Martin receives compelling testimony that the defendant is guilty. Martin, however, bases his belief that the accused is guilty not on the good reasons he has for thinking so, but on his racist belief that individuals of the defendant’s ethnicity are likely to commit criminal acts.[19]
Both Tim and Martin are propositionally justified in their respective beliefs: there are good reasons for them to believe the relevant propositions. But, so far as doxastic justification is concerned, there seems to be a crucial difference between Tim and Martin. While Tim bases his belief that scientists agree that global temperatures are rising on the good reasons that he has, Martin bases his belief that the defendant is guilty on bad, racist reasons. Granted, Tim bases his belief on these good reasons because they are socially approved in his environment. But, nonetheless, he still bases his belief on these reasons. So, a fortiori, we should maintain that, unlike Martin in Racist Juror, Tim in Naïve Student-1 is not only propositionally justified but also doxastically justified.
We think this objection fails for two principal reasons. The first is a general point about the relationship, thus far not made explicit, between basing and doxastic justification. Suppose basing sufficed for doxastic justification in the following sense: if a subject S bases her belief that p on a reason R, and R is a good reason for believing p, then S is doxastically justified in believing that p. If that were so, we concede, it would follow that Tim’s situation in Naïve Student-1 isn’t defective in the way we’ve indicated.
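Stated schematically (the formalization is ours, introduced only to make the logical form of this sufficiency claim explicit), the principle is:

$$\big(\mathrm{Bases}(S, p, R) \wedge \mathrm{Good}(R, p)\big) \rightarrow \mathrm{DoxJust}(S, p)$$

where Bases(S, p, R) says that S bases her belief that p on reason R, Good(R, p) says that R is a good reason for believing p, and DoxJust(S, p) says that S is doxastically justified in believing that p.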
But, crucially, basing is not sufficient for doxastic justification in this way, even if basing is necessary for doxastic justification. This point does not rely on a commitment to any particular substantive view of basing.
For example, on a doxastic account of the basing relation (e.g., Audi 1982; Ginet 1985), basing a belief on a reason is a matter of possessing a meta-belief to the effect that that reason is a good reason to hold that belief. If that meta-belief is itself a matter of, say, wishful thinking, the target belief is surely not doxastically justified even if (on the doxastic theory) it counts as being based on a good reason; otherwise, wishful thinking could convert an unjustified belief into a justified belief.
The same point can be made on an inferential account of basing (e.g., Bondy and Carter 2020), according to which inference from believed premises to a believed conclusion is sufficient for basing belief in the conclusion on belief in the premises. Ordinarily, on an inferential account of basing, basing will issue in doxastically justified belief. However, inference (like any mental activity) can itself be competent or incompetent (as Ryle [1945] famously noted). Incompetent inference (e.g., one that transitions through a bad step) can result in basing, on an inferential account of basing, even if it does not result in a doxastically justified belief.
Finally, consider a causal account of basing. On such a view, basing is a matter of causation in the sense that a subject S’s belief that p is based on R iff S’s belief that p is causally sustained by R.[20] Even if we assume that R is a good reason for believing p, it is a mistake to think that the mere presence of such a causal relation will suffice for S’s belief that p to be doxastically justified. Here it is important to distinguish causation simpliciter from competent causation, where the latter requires not just that R cause S’s belief that p nondeviantly (see, e.g., Plantinga 1993, 69n8) but also that R cause it via the exercise of a reliably truth-conducive disposition of the subject.
In short, the idea is that doxastic justification requires not just basing as such, but good or proper basing—where proper basing requires having a good reason as one’s basis, but is not secured by simply having a good reason as one’s basis.
Here is not the place to offer a full account of proper basing (though offering such an account remains a live project in contemporary epistemology).[21] Rather, we want to emphasize that cases like Naïve Student-1 aren’t going to be insulated from the kind of criticism we raise simply because basing on a good reason is present in a way that it is not in cases like Racist Juror.
Of course—and this brings us to the second line of resistance to the objection—it doesn’t follow from the fact that basing on a good reason doesn’t suffice for doxastic justification that Tim’s belief is not in fact both based on a good reason and, additionally, doxastically justified. Why, exactly, should we think that features of Tim’s situation should lead us to deny that his belief is doxastically justified even though it is ex hypothesi based on a good reason?
The answer, in short, has to do with an epistemically problematic feature of the way in which the target belief is based. To sharpen what’s concerning about the way Tim bases his belief in Naïve Student-1, consider now the following case:
Francophile Cartographer: Rae (irrationally) believes that French cartographers are the only reliable sources of cartographical information. Rae’s Francophilia so strongly influences her assessment of the reliability of maps that she distrusts all information written by non-French authors. Rae stumbles upon several pieces of evidence (E1, E2, and E3) for believing cartographical claim X. But, simply because E1, E2, and E3 were written by Italian cartographer Giacomo Gastaldi, Rae disregards this evidence and so does not come to believe claim X on the basis of it. Later that day, Rae encounters the very same pieces of evidence (E1, E2, and E3) for cartographical claim X, but this time they were written by French cartographer Pierre Desceliers. Because—and only because—Desceliers is French, she accepts this evidence, and comes to believe X on the basis of it.
In Francophile Cartographer, although Rae bases her belief on what we may assume are the good reasons she has for believing claim X—viz., E1, E2, and E3—she does not properly base her belief on these reasons, viz., in a way that would plausibly be required for doxastic justification. When Rae encounters these pieces of evidence in the work of Gastaldi, she disregards them; when she encounters them in the work of Desceliers, she accepts them. Rae’s belief isn’t properly based on these pieces of evidence if whether she believes on the basis of them depends primarily on the nationality of the putative expert (an epistemically irrelevant feature). She is basing her belief on good reasons not because they are good reasons but, rather, primarily because of something epistemically irrelevant. And a result is that the process by which she is basing her belief on a good reason is problematic as a candidate for issuing doxastic justification on any of the three accounts of basing considered.
With reference to the causal and inferential accounts, it’s worth noting that Rae’s basing process is an incompetent one, even when it reliably issues in a good reason as a basis.[22] (After all, the disposition manifested here is a generally unreliable one, even if all the French communicators Rae is in contact with happen to speak truthfully.)[23] Because incompetent basing is justification-undermining on a causal account as well as on an inferential account, Rae’s belief falls short of doxastic justification on either of these accounts, despite being based on a good reason.
Likewise, with reference to the doxastic account, it should be noted that, while we may grant that Rae has a meta-belief to the effect that E1, E2, and E3 are good reasons for believing X (and so bases her belief that X on these good reasons), this meta-belief is itself unjustified: it is a result of irrational prejudice. Given that irrational prejudice surely can’t convert an otherwise doxastically unjustified belief into a doxastically justified belief, it follows that Rae’s belief falls short of doxastic justification on a doxastic account of basing, despite being based on a good reason.
But notice that once we accept this line of thinking in Francophile Cartographer, then, by parity of reasoning, we should also accept that Tim in Naïve Student-1 isn’t doxastically justified in believing that global temperatures are rising. The cases are structurally on a par; in both, the subject bases her belief on good reasons not because they are good reasons but, rather, primarily because of something epistemically irrelevant. And a result, in both cases, is that the process by which the subject is basing her belief on a good reason is problematic as a candidate for issuing doxastic justification regardless of what view of basing one subscribes to.
So our argument is as follows. There are (at least) two ways in which one’s belief can fail to be doxastically justified. One may be like Martin in Racist Juror, believing some proposition for which there are good reasons, but not on the basis of those good reasons. Or one may be like Tim in Naïve Student-1 or Rae in Francophile Cartographer. That is, one might believe some proposition for which there are good reasons on the basis of those good reasons not because they are good reasons, but for some other epistemically irrelevant (or plain bad) reason, so as to render the basing epistemically incompetent or otherwise defective. If this is right, then our skeptical worry remains: politically motivated reasoning can have epistemically deleterious effects even when it guides one toward the truth.
As we have already indicated, we think that Tim is not particularly unusual, so what we say about Tim’s belief that global temperatures are rising is going to hold for many people who believe that global temperatures are rising. In what follows we will do two things. First, as promised (footnote 9), we will indicate how our argument relates to the literature on irrelevant influences on belief. Second, we will explore a further skeptical worry that we think might arise in even the “best” sort of case.[24]
6.
We have been looking at one way in which our beliefs can be influenced by factors that have nothing to do with the truth of their contents: they can be formed via politically motivated reasoning. Such factors are called “epistemically irrelevant” because they are irrelevant to the truth of the beliefs in question. The literature on epistemically irrelevant influences on belief looks at the general question of what to say about beliefs that are influenced by epistemically irrelevant factors.[25] While we are primarily concerned with politically motivated reasoning in particular, rather than irrelevant factors in general, the reader might want to know how our argument relates to this literature.
There are two (related) respects in which the argument above differs from much of the literature on epistemically irrelevant influences. First, there is a divide between those who think that irrelevant factors pose a problem for (some of) our beliefs (e.g., Avnur and Scott-Kakures 2015; Vavova 2018), and those who are more skeptical (e.g., Mogensen 2016; Srinivasan 2015; White 2010). Those who think that irrelevant factors can pose a problem tend to focus on the idea that learning that your beliefs have been influenced by irrelevant factors gives you a defeater for those beliefs. For instance, Katia Vavova defends this principle:
Good Independent Reason Principle: To the extent that you have good independent reason to think that you are mistaken with respect to p, you must revise your confidence in p accordingly—insofar as you can. (Vavova 2018, 145)
Imagine that my political beliefs have been shaped by my upbringing, and I come to recognize this. Vavova’s thought is that this would occasion a skeptical worry about my political beliefs if it gave me good reason to think my political beliefs might be mistaken.
Setting aside the implications of Vavova’s proposal, the important point for our purposes is that, for Vavova, irrelevant factors are important insofar as being aware of them may give us reasons to think certain beliefs are mistaken.[26] The idea is that recognizing the influence of irrelevant factors on our beliefs may give us a defeater for these beliefs. Our argument above was that politically motivated reasoning constitutes a problem for some of our beliefs because beliefs formed through politically motivated reasoning are not held on the right sort of basis (or not held on the right basis in the right way). Our argument therefore differs from Vavova’s in that we are interested in the consequences of politically motivated reasoning for the grounds on which our beliefs are held, rather than in whether the recognition that our beliefs have been influenced by irrelevant factors can yield a defeater for our beliefs.
This difference is the consequence of a second and more fundamental difference. Vavova, like many of the authors who’ve written on irrelevant influences, is primarily interested in the consequences of the recognition that our beliefs have been influenced by irrelevant factors for the epistemic statuses of our beliefs. Her question is: Under what conditions should this recognition lead us to revise our confidence? But our argument has not focused on the consequences of the recognition that some of our beliefs have been influenced by politically motivated reasoning. Rather, we have focused on the epistemic status of beliefs formed through politically motivated reasoning, whether the subjects are aware of the origins of their beliefs or not. In our discussions of Tim (and Rae) above, we didn’t specify whether Tim (or Rae) is aware of how they formed their respective beliefs. This doesn’t preclude us from thinking of the issue in terms of defeaters. One might think there are defeaters for one’s beliefs of which one is not aware, and that the existence of such defeaters has consequences for the epistemic statuses of one’s beliefs. So we are interested in a different issue than Vavova and others are, though there may be some interesting connections between our projects.
7.
We have shown that our argument is importantly distinct from Vavova’s (and from much of the literature on irrelevant influences on belief) because it doesn’t focus on the consequences of recognizing that our beliefs have been (or might have been) influenced by irrelevant factors. That said, we want to finish by describing a case in which a subject does suspect that their beliefs have been influenced by politically motivated reasoning:
Enlightened Student (Part I): Diane is like Tim in the following respects: (i) Diane is raised in a household where exposure to (and praise of) sound climate science is abundant, and climate change “denial” is discussed only in a negative light; (ii) Diane (like most around her) believes that scientists agree that global temperatures are rising. Diane is unlike Tim in the following respect: (iii) Diane is not influenced or impressed (as Tim is) by which opinions are the socially approved ones; the matter of whether a given opinion is socially approved (or disapproved) has a negligible effect on whether Diane accepts it, even though Diane ends up accepting many claims (including the claim that <scientists agree that global temperatures are rising>) that are the accepted ones in her epistemically friendly environment.
It seems clear that Diane in Enlightened Student (Part I) is not just propositionally justified but also doxastically justified in believing that global temperatures are rising, and that human activity is the cause. Unlike Tim, Diane seems to properly base this belief on the good reasons that she has for holding it. She doesn’t believe on the basis of these reasons because they are socially approved but, rather, because they strike her as good reasons. We take this to show that there is nothing wrong with Diane so far as doxastic justification and the basis of her beliefs is concerned.
But even Diane may face some trouble. Imagine Enlightened Student continues:
Enlightened Student (Part II): Diane starts to get interested in empirical research on the social causes of belief formation. She reads the literature cited in this paper, paying special attention to Kahan’s work on politically motivated reasoning. She realizes that politically motivated reasoning is ubiquitous, and that even sophisticated subjects like her are susceptible to it. Now, when she thinks about the reasons for which she holds her beliefs about things like climate change, it doesn’t seem to her like she believes on the basis of these reasons because they happen to vindicate her broader political and cultural beliefs. But she has also read the literature on the (un)reliability of introspection[27] and “hindsight bias”[28]: she recognizes that she is not necessarily a good judge of why she thinks as she does, and that, like all of us, she tends to engage in a lot of ex post facto rationalization. She therefore regards it as very possible—if not likely—that she believes on the basis of the reasons she does not because they strike her as good reasons, but because they fit with her broader political and cultural beliefs.
What are we to say about Diane in Enlightened Student (Part II)? There is an error possibility that she regards as “live,” and it is such that, were it to obtain, she would (we have argued in sections 3 and 4) not be doxastically justified in her belief that global temperatures are rising. We can view Diane’s unease as a response (albeit an unconscious one) to this situation.
But is there any problem with Diane’s epistemic situation? While we wouldn’t want to say for sure that there is, it is worth noting that there are some influential views in contemporary epistemology on which Diane’s belief may well be unjustified. Take, for instance, Michael Bergmann’s (2006) influential account of epistemic defeat, on which (put roughly) a condition defeats S’s belief if and only if S takes that condition to defeat the belief (e.g., by counting against the truth of the belief or against the reliability of its formation). On this account, if Diane takes her inability to rule out this error possibility to constitute a defeater for her belief that global temperatures are rising, then her belief is defeated, and she is no longer justified in believing that global temperatures are rising. So if you are attracted to a Bergmann-style account of epistemic defeat, there is room for a worry about whether Diane’s belief is justified.
A similar point can be made with reference to Lewis’s (1996) account of relevant alternatives and their relationship to knowledge attributions. For Lewis, whether it is true to say that some S knows that some p is true depends on whether S can rule out all relevant alternatives, and various things can lead an alternative to be relevant in some conversational context. One thing that is sufficient (but not necessary) for an alternative to be relevant in a context is that it be taken seriously in that context. So, for instance, if a Cartesian skeptical scenario is taken seriously in some context, then it is relevant in that context. A Lewisian contextualist may hold that, as Diane moves from Enlightened Student (Part I) to Enlightened Student (Part II), the possibility that she has been engaging in motivated reasoning becomes relevant, and because she can’t rule it out, she can no longer truly say that she knows that global temperatures are rising. It is, of course, a further step to hold that she can no longer truly say that she is justified in believing that global temperatures are rising. But perhaps this further step can be made and, in any case, it would be unsettling enough if Diane couldn’t truly say that she knows global temperatures are rising.[29]
8.
So what is the skeptical import of motivated reasoning? We want to finish by clarifying what we have been trying to do in this paper and by commenting on what all this might teach us about intellectual virtue.
With reference to the preceding discussion, three ascending levels of skeptical import can be identified:
[Table: three ascending levels of skeptical import. Level 1: skeptical import for beliefs formed through politically motivated reasoning when it leads us astray. Level 2: skeptical import for beliefs formed through politically motivated reasoning even when it fortuitously “guides” us toward the truth. Level 3: skeptical import for beliefs not formed through politically motivated reasoning at all, whose subjects cannot rule out that they were.]
Level 3 is serious indeed. As the Enlightened Student pair of cases indicates, the kind of skeptical threat generated by the prevalence of motivated reasoning might plausibly imperil the epistemic status of our beliefs even when these beliefs are not formed through such reasoning at all. And in this respect, even what look like perfectly “good cases” might be ones where we turn out to be much worse off epistemically than it would initially seem. This should certainly give the epistemologist cause for concern.
To be clear, the above table distinguishes levels of skeptical import—not skeptical conclusions. Compare: it is uncontentious that the skeptical import of Descartes’ arguments in Meditation 1 included at least all empirical propositions. And yet, almost no one accepts the corresponding conclusions of these arguments. Of course, any skeptical argument—no matter the scope of skepticism threatened—might be defused by a suitable anti-skeptical response. In the present case, there is, as we see it, no suitable response to the kind of (minimal) skeptical import raised at Level 1; after all, these are beliefs that we would normally take to be unjustified.
What about Levels 2 and 3? There are two forms of anti-skeptical response one might try. The first form would be to try to find a weakness in the details of the skeptical arguments. So far as the Level 2 argument is concerned, one could challenge the empirical data from section 2, or the epistemological assumptions about propositional and doxastic justification relied on in sections 3–4. As for the Level 3 argument, one could challenge either of these things, or the Bergmann-style account of defeat (or the Lewisian semantics for knowledge attributions) from section 7.
The second form would be to target our overall strategy in this paper.[30] We have argued for skeptical conclusions about a class of our beliefs (beliefs about scientific issues that have become politically contentious, like global warming) on the basis of highly simplified toy cases (Naïve Student and Enlightened Student). We supposed that the causes of Tim’s and Diane’s beliefs can be captured in very simple terms (they defer to people recognized as experts in their community), and we have said very little about the reasons underlying these patterns of deference. One might argue that the causes and reasons underlying the beliefs that actual people have about issues like global warming are far more complex than this and that, due to these complexities, the skeptical conclusions we have derived about Tim and Diane may not transfer over to our own beliefs about these issues. We grant this point, but our aim has not been to argue conclusively for either a Level 2 or Level 3 skeptical conclusion. More modestly, we hope to have shown that the phenomenon of motivated reasoning raises insidious skeptical problems—and, accordingly, that the epistemological ramifications of motivated reasoning are much more serious than one may initially think (see, e.g., Kelly 2008).
We want to conclude by noting how the above ascending cascade of skeptical threats posed by motivated reasoning might teach us something instructive about intellectual virtue.
One of the most popular mantras in contemporary social epistemology is that intellectual humility is valuable to its possessors as well as valuable more broadly in liberal democracies predicated upon a toleration of differing viewpoints.[31] One prominent account of the nature of intellectual humility, defended by Dennis Whitcomb, Heather Battaly, Jason Baehr, and Daniel Howard-Snyder (2017), maintains that the virtue centrally involves an owning of our epistemic limitations.[32] Intellectual humility, so understood, is characterized by a dispositional profile that includes cognitive, behavioral, motivational, and affective responses to the various ways in which individuals are epistemically limited. But what are these limitations exactly? As Whitcomb et al. write:
We’re all familiar with what we gather under the rubric of “intellectual limitations”: gaps in knowledge (e.g. ignorance of current affairs), cognitive mistakes (e.g. forgetting an appointment), unreliable processes (e.g. bad vision or memory), deficits in learnable skills (e.g. being bad at math), intellectual character flaws (e.g. a tendency to draw hasty inferences) … (2017, 516)[33]
One feature that unifies the examples that Whitcomb et al. point to is that they are all, in an important respect, individual-level limitations. Individual-level limitations, however, should only be part of the story. Some of our intellectual limitations arise simply in virtue of our social situatedness—viz., our membership in epistemic communities that antecedently value the things that they do. This, at any rate, seems to be an important moral to draw from sections 2–7, and this is especially so for those who join us in taking seriously the idea that the skeptical import of the kind of motivated reasoning that features in cultural cognition rises to (at least) Level 2.
One lesson to draw from canvassing the skeptical import of motivated reasoning is that there is a distinctively social way that the virtuously intellectually humble person may own her epistemic limitations, apart from the more traditional ways of owning individual-level limitations (e.g., owning knowledge gaps, skills deficits, etc.).[34] That is, she may own the kinds of epistemic limitations one inherits specifically by embedding oneself in (and reaping other epistemic benefits from) a social community, limitations that—if our argument is sound—stand to threaten us epistemically on potentially three levels of ascending epistemic concern.
J. Adam Carter is a lecturer in philosophy at the University of Glasgow, where he directs the COGITO Epistemology Research Centre. He works mostly in epistemology, and his recent projects have included work on know-how, virtue epistemology, extended epistemology, social epistemology, and relativism.
Robin McKenna is a lecturer in philosophy at the University of Liverpool. Most of his work is in epistemology, but he is also interested in philosophy of language, philosophy of science, and ethics. Current topics of interest include the epistemology of persuasion, the epistemology of climate change denial, epistemic injustice, and social constructivism.