1. Introduction
A frequently expressed concern in the context of COVID-19 in the United States is that we will not reach herd immunity quickly enough to stave off further waves of infection—perhaps allowing new, more dangerous variants of the virus to evolve and become endemic. This is despite the existence of vaccines and public health measures that might have gotten us to this point months ago. But the reality is that a large proportion of Americans are distrustful of the vaccines’ safety and efficacy and quick to reject other relevant measures (such as masking). A similar pattern obtains for climate change. Though it is conceivable that society might have taken scientists’ warnings seriously decades ago and mitigated the harmful effects of anthropogenic emissions in a smooth, controlled descent to full decarbonization, the path to avoiding the worst effects of climate change has narrowed and steepened—again, in large part because scientists’ entreaties were largely disregarded.
Such examples abound. They illustrate that distrust of science can lead to social harms. While this is uncontroversial in particular cases, it is somewhat less obvious whether the dual notion—trust of science—should be seen as a collective good. Part of what makes the matter contentious is that it isn’t entirely clear what it means for an individual (or a society) to trust science. We take up this issue in sections 2–3. Assuming that some normatively reasonable (and realistic) construal could be identified, a number of further practical questions immediately arise: How might we bring such a state of affairs about? Or, at the very least, how might we ameliorate the root causes of the widespread distrust of science?
There is a massive, interdisciplinary literature addressing these and related questions from a variety of perspectives. Many efforts rightly focus on the social-epistemic “environment” for science communication (as we might call it, following Kahan and Landrum 2017). On many issues—climate change, vaccines, genetically modified organisms, and, most recently, COVID-19 are prominent examples—this environment has been fouled by misinformation, personal attacks on the motivations and credibility of scientists, and uninformed denial of established facts or of the existence of a scientific consensus on a given issue (Oreskes and Conway 2010a; Dunlap and McCright 2011; Brulle 2014). However, the focus on “the big emitters” of epistemic pollution has distracted attention from an important role that individual scientists can play in shaping the social-epistemic environment of science communication for the better.
Morton (2014) has recently suggested a “Mandevillian” construal of the scientific enterprise—one on which what might be seen as “vice” at the individual level yields virtue at the collective level. Invisible hands (in a Kuhnian vein) also come to mind. On such pictures, vice (or indifference) leads to positive outcomes. Our aim in this article is to argue for greater attention to the reverse dynamic: individual virtue (or, at any rate, apparently innocent action) leading to negative consequences. Even small, well-intentioned contributions to the epistemic environment for healthy science communication, we believe, can add up to an unintended large-scale negative effect on that environment. While we will not be able to make any concrete suggestions for addressing this particular collective action problem, we believe that recognizing it as such constitutes progress in that direction.
In section 2, we briefly consider the complex question of the proper recipient(s) of the public’s trust of science. Many hold, plausibly, that the claim that we ought to trust science should not focus on individuals but rather on scientific consensus (somehow construed). Yet we cannot disregard individual scientists entirely—if only as sources of information about such consensus. In section 3, we turn our attention to calls for individual scientists to become ambassadors of a sort for science, perhaps even to unlearn dispositions toward caution and precision in order to compete with the lively and brash character of science deniers. Finally, in section 4, we argue that such forays might serve to undermine rather than enhance appropriate and sustainable public trust of science.
2. Trust of whom/what?
What precisely would it mean to ask whether some group—voting-age Americans without scientific training, say—“trusts science”? No one thing, presumably. Indeed, not many of the things that “trusting science” could mean are very plausible. The phrase might even seem at first glance to involve a category mistake. Science is something that people do; what would it mean to extend epistemic trust to an activity? Interpreting “science” instead as a sort of collective enterprise that generates propositions leaves open the question of which propositions we ought to believe—surely not all of the propositions generated by science. Science encompasses a wide range of disciplines, each with different methods, goals, and levels of relevance to the lay public. Trusting an astrophysicist that black holes exist somewhere in the universe apparently has a different epistemic cast from trusting a medical researcher that a certain drug is safe and effective. Clearly it’s a nonstarter to claim that we ought to trust every scientist, for many are undeserving of our trust for reasons of incompetence or dishonesty. Contending instead that we should trust good scientists (or, more generally, the products of good science) might be correct but is conceptually shallow and practically unrealistic—certainly for most outsiders but also for many insiders.
Perhaps the most plausible understanding of the phrase is that we should trust science when it speaks with a unified voice of a certain kind: that we should trust scientific consensus. This seems more clearly on track, and it is the basic contention of Naomi Oreskes’s recent Why Trust Science? (2019). For one, the focus on consensus evades concerns about the trustworthiness of individual scientists. A prominent effect of the denialist campaigns mentioned above has likely been the politicization of assessments of expertise through politically informed motivated reasoning. When it comes to climate change, an expert is judged as such not because of their credentials or experience but on the basis of whether what they say coheres with the view of one’s “ideological tribe” (Kahan et al. 2011). Focusing on consensus thus sidesteps, to some extent, the need to vet the expertise of individuals.
For two, there are prima facie compelling arguments that consensus—when reached using a robust process and featuring sufficient social diversity—should be seen as the gold standard for scientific credibility (Longino 1990; Solomon 2007; Miller 2013; Beatty 2017; Oreskes 2019). Notably, the social practices that make it so can plausibly function without the knowing cooperation of individual scientists—again, in the “invisible-hand” style. After all, it is in large part the competition and institutionalized skepticism of scientists that makes the attainment of consensus so challenging and thus so revealing or significant when achieved.
So goes the story, in any case (we will not rehearse the details further here). Let’s suppose it’s true. We have argued elsewhere (Slater et al. forthcoming) that, despite some empirical evidence to the contrary, the epistemic significance of a certain robust sort of consensus does not translate straightforwardly into actionable advice about how to leverage consensus for more successful science communication. This is true for a variety of reasons, we think; but an especially salient one for our present purposes is that much of the lay public apparently lacks the background knowledge about science as a social enterprise required to appreciate the epistemic significance of scientific consensus.
Even setting this problem aside, it is not clear that trust at the individual level can be set aside entirely. Consensus is rarely a matter of direct inspection for members of the lay public. It is usually appreciated as a matter of testimony from a trusted individual or organization (typically functioning, testimonially, as an individual). Consider how you became aware of the existence of a scientific consensus on a given topic. If it was about climate change, perhaps it was via a statement from the AAAS or the National Academy of Sciences; or perhaps it was by reading the work of Oreskes (2004). Would Oreskes have accepted in 2004 what she writes in 2019: that “[w]e should be skeptical of any single paper in science [or, specifically, in Science]” (233)? Perhaps a tentative reading of “skeptical” is called for here. In any case, this pointed question is meant to foreground a certain tension between, on the one hand, seeing (robust, meaningful) scientific consensus as especially significant and, on the other, granting that there is often something special at the individual level that should command our epistemic respect. After all, it is not only from the social structures and norms of the scientific community that consensus derives its epistemic significance; it derives as well from the fact that the consensus is “comprised of” individual experts who are experts in part because of their tools, training, and (hopefully) epistemically scrupulous behavior and intellectual virtues (Baehr 2011; Pennock 2019). It is part of the function, we might even say, of the norms and social structures of science to help maintain the epistemic virtue of individuals (as Aristotle’s polis helps its citizens achieve moral virtue).
Part of the problem, of course, is that it is difficult for the lay public to assess whether any individual scientist is worthy of trust. There is a gap between science and the public that does not seem to be effectively bridged by technical, professional research articles or by the journalists tasked with translating this research for their lay audiences.
3. Individual scientists’ role in building trust
As scholars have gradually shed the Deficit Model of science communication (on which, roughly speaking, effective science communication involves merely addressing a deficit of public knowledge), investigations of the public’s trust of science have focused on the “supply side” of science communication: How can scientists be better communicators? How can they earn the respect of lay communities (Fiske 2012; Fiske and Dupree 2014)? How might academic institutions better incentivize scientists to prioritize public outreach and communication (Ritchie 2020)? Are there better ways of engaging the public in the scientific process to build trust (Wynne 2006; Guston 2014)?
In pursuing such questions, it is crucial to take into account features of the communication environment that make science communication particularly challenging. Faced with the deceptive strategies and disingenuous arguments of nonscientific “merchants of doubt,” scientists’ moral high ground can become a strategic vulnerability: sometimes the message isn’t simple; sometimes scientists’ knowledge is incomplete or provisional in areas that are easily conflated with the main issue at hand. A commitment to conveying the unvarnished truth in all its nuance automatically puts scientists at a disadvantage. And unlike the paid shills often fronting denial campaigns, scientists typically aren’t trained to be good communicators. The deck is stacked against them from the start.
This dynamic was brilliantly (if painfully) illustrated in Robert Kenner’s (2014) documentary adaptation of Oreskes and Conway’s (2010a) Merchants of Doubt, which introduces the influential NASA climate scientist James Hansen using footage from an interview salvaged from the cutting-room floor. In it, Hansen awkwardly stumbles over his words, asks for restarts, and generally seems deeply uncomfortable in the setting, muttering at one point, “Frankly, I’d rather be doing my research than being interviewed for TV!”
It is in this context that we sometimes hear gentle (and not so gentle) chiding that scientists need to do better at communicating their findings. “Don’t be such a scientist!” is the advice of biologist-turned-filmmaker-turned-science-communication-proselytizer Randy Olson in his (2018) book of the same title. For Olson, effective communication is all about “storytelling” (cf. Besley and Tanner 2011; Dahlstrom 2014). A scientific paper recapitulates “the hero’s journey” (Olson 2018, 15). Seeing this, and being able to convey the narrative structure of scientific discovery, is on his view what effective science communication is all about. Echoes of this suggestion are evident in an op-ed in Nature in which Oreskes and Conway argue that:
Scientists have much to learn about making their messages clearer. Honesty and objectivity are cardinal values in science, which lead scientists to be admirably frank about the ambiguities and uncertainties in their enterprise. But these values also frequently lead scientists to begin with caveats—outlining what they don’t know before proceeding to what they do—a classic example of what journalists call “burying the lead.” (2010b, 687)
But here too, we cannot lose sight of the influence of the epistemic environment on even the well-coached science communicator. As Kathleen Hall Jamieson (2018) has documented, many recent media narratives about science have promoted a “science in crisis” frame. Whether the crisis is replication failures, questionable research practices, or outright scientific fraud, this narrative would presumably serve to undermine even the most well-spoken scientific storyteller.
Concerns about systemic problems in (certain branches of) science notwithstanding, there is something compelling about the thought that if scientists could communicate more clearly about their passions, come across as “normal” human beings (Rahm 1997), and engender curiosity—even wonder—about what they study and perhaps even about the scientific enterprise, we could make substantial strides in breaking down the destructive images of science prevalent among the lay public. Indeed, there seems to be some empirical support for the thesis that, unlike (a certain dimension of) scientific literacy, scientific curiosity does not engender the political polarization we see in the context of climate change (Kahan et al. 2017).
In the next section, however, we raise a concern about a potentially damaging side effect of this kind of outreach.
4. A collective action problem
Consider a plausible scene: suppose that you are a well-meaning, epistemically responsible scientist. You take seriously your potential impact as a communicator to the public. You’ve read Olson, Jamieson, and Oreskes and are persuaded that you ought to be a better scientific storyteller; you work at it and discover a certain talent for breaking down complex ideas for the lay public. Journalists love you; you have a good relationship with your university’s press office and have the email addresses of several local and national reporters. You are gratified (and, if you are honest, your ego is enhanced) to see your work get some uptake in the public sphere: “This is making a difference!” you think. Not only are you discharging your duty to be an ambassador for science, but you’re helping your career as well. “Finally, the world will know the health benefits of red wine.”
This leads us to our collective action problem. If we suppose that your competitors are doing the same thing, then however eloquently this news reaches the public, the overall impression will likely be aporia: “These people say red wine is good, but those people say it’s bad. Which is it? It seems that scientists can’t make up their minds! How do we know whom to trust?” This is, of course, the very dynamic that climate denialists have been exploiting for decades: create doubt by using the illusion of dissent through Potemkin science (Slater et al. 2020). Science involves plenty of dissent. Incentivizing individual scientists to get out in front of the public with their work—their hero’s journey—essentially ensures that scientific dissent will be overwhelmingly evident.
It shouldn’t be terribly surprising to learn that in the so-called prestige press, accounts of individual accomplishments make up the majority of reporting on science. Articles concerning the large-scale social processes for vetting or debating such accomplishments, or documenting how recent work fills gaps in our understanding (and where gaps remain), are comparatively rare. Yet those are precisely the sorts of articles that might help fill out the picture of science as a social enterprise (Slater et al. 2019) needed for the public to appreciate the epistemic significance of consensus or to understand why dissent in science shouldn’t be taken as a sign of ignorance or incompetence.
Because science journalism is one of the main bridges between science and the public, this represents, at best, a missed opportunity. Moreover, Goldenberg and McCron’s (2017) study of media reporting on a particular finding, which found coverage tending toward misleading or sensationalistic interpretations (or outright errors), confirms what many of us see every day: there is considerable room for improvement in science journalism. Our point is that even without inaccurate or misleading reporting, a bias toward covering science as an individualistic endeavor may have a distorting effect on the public’s trust of science. While getting people excited about science—the scientific hero’s journey or quest—might pay some dividends, the individualistic bias carries certain risks as well.
We see these risks as falling into two salient categories (there may well be others). First, there are what we might call Reversal/Conflict Risks. A common way of contextualizing a news story about science is to explain how it is new—that is, how it departs from previous work. This framing thus tends to make salient the fact that scientists are on different pages about many issues, as our earlier vignette illustrates. Recent studies have shown that such “reversals” have a corrosive effect on trust, even on issues unrelated to the subject of the reversal in question (Nabi et al. 2018). Oreskes is relatively quick to set this phenomenon aside as a pathology concentrated in nutrition science (2019, 67). But the problem is more systematic than that, and it was amply illustrated by the early stages of the COVID-19 pandemic, when the public was unusually tuned in to science at the cutting edge. Had journalists and scientists exposed the public to the complicated, ever-changing process of science over the past few decades, the lay understanding of science might have been sufficient to treat the seemingly contradictory information that accompanied the emergence of COVID-19 as de rigueur for the leading edge of science.
A second, more subtle (and admittedly more speculative) risk is that by focusing on stories that make scientists exclaim “gee-whiz!” (Angler 2017, 3)—“Wow! Gravitational waves!”—scientists may confirm the suspicion that they are not working on problems that everyday people care about. If we think of epistemic trust of science as having an affective dimension (Jones 1996), this sort of perception and its relevance to distrust of science seem worth exploring.
Let us summarize. It is plausible that (well-deserved) public trust of science is a collective good—not only for scientists but also for well-functioning societies. In the context of widespread mistrust of science, it is often suggested that individual scientists need to be better communicators for lay audiences. The natural thing for them to communicate is what they know: the cutting-edge science on which they are working. We can expect, further, that even at its best the news media will pick up on (and perhaps accentuate) the individualistic and dramatic aspects of this science. When many scientists do this, the result is apt to corrode rather than promote public trust of science. We see this as a variation on the classic “tragedy of the commons”: by doing what seems to be the right thing—either for themselves or for the scientific community at large—scientists working to promote interest in their scientific work may be undermining public trust of science.
What are the alternatives? While we can offer only some brief parting thoughts here, if one accepts the premise that the public’s grasp of the epistemic significance of certain robust forms of scientific consensus is the appropriate locus for public trust of science, then working to promote an understanding of the scientific enterprise that lends such consensus its epistemic weight should plausibly be at least one focus of science communication. Perhaps this means advocating for or communicating about the scientific process rather than—or in addition to—promoting one’s own science. We see such practical questions as open and urgent.
There are both prudential and moral motivations for scientists to shift their communication practices away from the exclusive promotion of their own science. The moral case stems most clearly from seeing (appropriate, nonscientistic) levels of public trust of science as a public good: if scientists have a duty to promote (or at least not harm) the public good, then (if our argument here holds up) they have a duty to shift their communication practices to protect that good. A more subtle duty might be seen as extending from considerations of justice. Heidi Grasswick (2018) argues that a lack of access to the tools necessary to understand and appreciate science’s trustworthiness (or lack thereof) constitutes an “epistemic trust injustice” to learners and their epistemic agency.
The prudential case for the shift we are envisioning can take many forms. One is general and obvious: scientists live in the world too; if inappropriate distrust of science leads to poor outcomes for the world, they will suffer those outcomes along with the rest of us. More directly, scientists’ own work may be compromised by public distrust—for example, through loss of public financial support or even (as we have seen during the COVID-19 pandemic) open hostility and threats against scientists. We imagine that there might also be a sort of psychological trauma associated with the Cassandra-esque experience of issuing warnings that are never heeded.
Of course, much more work will be needed to address the question of how to interpret and justify public trust of science in general. This article is premised on seeing such trust, in some form, as a collective good to be promoted. Seen as such, the first step in addressing the collective action problem involved in promoting this good (or in merely avoiding despoiling the “epistemic commons”) is recognizing it as a collective action problem.
Acknowledgments
For discussion of earlier versions of this paper that led to improvements, we would like to thank our audience and co-symposiasts at the Why Trust Science? symposium at the 2020/21 PSA. We are also grateful to two reviewers for Philosophy of Science and especially to Angela Potochnik. Some early research related to this paper was supported by an NSF grant (SES-1734616; Slater, PI).
Conflict of interest
The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.