
Trust of Science as a Public Collective Good

Published online by Cambridge University Press:  25 May 2022

Matthew H. Slater* and Emily R. Scholfield
Bucknell University, Lewisburg, PA

*Corresponding author. Email: matthew.slater@gmail.com

Abstract

The COVID-19 pandemic and global climate change crisis remind us that widespread trust in the products of the scientific enterprise is vital to the health and safety of the global community. Insofar as appropriate responses to these (and other) crises require us to trust that enterprise, cultivating a healthier trust relationship between science and the public may be considered a collective public good. While it might appear that scientists can contribute to this good by taking more initiative to communicate their work to public audiences, we raise a concern about unintended consequences of an individualistic approach to such communication.

Type: Symposia Paper

Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

A frequently expressed concern in the context of COVID-19 in the United States is that we will not reach herd immunity quickly enough to stave off further waves of infection—perhaps allowing new, more dangerous variants of the virus to evolve and become endemic. This is despite the existence of vaccines and public health measures that might have gotten us to this point months ago. But the reality is that a large proportion of Americans are distrustful of the vaccines’ safety and efficacy and quick to reject other relevant measures (such as masking). A similar pattern obtains for climate change. Though it is conceivable that society might have taken scientists’ warnings seriously decades ago and mitigated the harmful effects of anthropogenic emissions in a smooth, controlled descent to full decarbonization, the path to avoiding the worst effects of climate change has narrowed and steepened—again, in large part because scientists’ entreaties were largely disregarded.

Such examples abound. They illustrate that distrust of science can lead to social harms. While uncontroversial in the particular cases, what is somewhat less obvious is whether the dual notion—trust of science—should be seen as a collective good. Part of what makes the matter contentious is that what we take it to mean for an individual (or society) to trust science isn’t entirely clear. We take up this issue in sections 2 and 3. Assuming that some normatively reasonable (and realistic) construal could be identified, a number of further practical questions immediately arise: How might we bring such a state of affairs about? Or, at the very least, how might we ameliorate the root causes of the widespread distrust of science?

There is a massive, interdisciplinary literature addressing these and related questions from a variety of perspectives. Many efforts rightly focus on the social-epistemic “environment” for science communication (as we might call it, following Kahan and Landrum 2017). On many issues—climate change, vaccines, genetically modified organisms, and now, most recently, COVID-19 are prominent examples—this environment has been fouled by misinformation, personal attacks on the motivations and credibility of scientists, and uninformed denial of established facts or of the existence of a scientific consensus on a given issue (Oreskes and Conway 2010a; Dunlap and McCright 2011; Brulle 2014). However, focus on “the big emitters” of epistemic pollution has distracted attention from an important role that individual scientists can play in shaping the social-epistemic environment of science communication for the better.

Morton (2014) has recently suggested a “Mandevillian” construal of the scientific enterprise—on which what might be seen as “vice” at the individual level yields virtue at the collective level.[1] Invisible hands (of a Kuhnian vein) also come to mind. On such pictures, vice (or indifference) leads to positive outcomes. Our aim in this article is to argue for greater attention to the reverse dynamic: individual virtue (or anyway, apparently innocent actions) leading to negative consequences. Even small, well-intentioned contributions to the epistemic environment for healthy science communication, we believe, can add up to an unintentional large-scale negative effect on that environment. While we will not be able to make any concrete suggestions for addressing this particular collective action problem, we believe that recognizing it as such constitutes progress in this direction.

In section 2, we briefly consider the complex question of the proper recipient(s) of the public’s trust of science. Many hold, plausibly, that the claim that we ought to trust science should not focus on individuals, but rather on scientific consensus (somehow construed). Yet we cannot disregard individual scientists entirely—if only as sources of information about such consensus. In section 3, we turn our attention to calls for individual scientists to become ambassadors of a sort for science, perhaps even to unlearn dispositions toward caution and precision to compete against the lively and brash character of science deniers. Finally, in section 4, we argue that such forays might serve to undermine rather than enhance appropriate and sustainable public trust of science.

2. Trust of whom/what?

What precisely would it mean to ask whether some group—voting-age Americans without scientific training,[2] say—“trusts science”? No one thing, presumably. Indeed, not many of the things that “trusting science” could mean are very plausible. The term might even seem at first glance to be a category mistake. Science is something that people do; what would it mean to extend epistemic trust to an activity? Interpreting “science” instead as a sort of collective enterprise that generates propositions leaves open the question of which propositions we ought to believe—surely not all of the propositions generated by science. Science encompasses a wide range of disciplines, each with different methods, goals, and levels of relevance to the lay public. Trusting an astrophysicist that black holes exist somewhere in the universe apparently has a different epistemic cast from trusting a medical researcher that a certain drug is safe and effective. Clearly it’s a nonstarter to claim that we ought to trust every scientist, for many are undeserving of our trust for reasons of incompetence or dishonesty. Contending instead that we should trust good scientists (or more generally the products of good science) might be correct but is conceptually shallow and practically unrealistic—certainly for most outsiders but also for many insiders.

Perhaps the most plausible understanding of this phrase is that we should trust science when it speaks with a unified voice of a certain kind: that we should trust scientific consensus. This seems more clearly on track. It is the basic contention of Naomi Oreskes’s recent Why Trust Science? (2019). For one, the focus on consensus evades concerns about the trustworthiness of individual scientists. A likely prominent effect of the denialist campaigns mentioned above has been the politicization of assessments of expertise through politically motivated reasoning. When it comes to climate change, an expert is judged as such not because of their credentials or experience but on the basis of whether what they say coheres with the view of one’s “ideological tribe” (Kahan et al. 2011). Focusing on consensus to some extent sidesteps the need to vet the expertise of individuals.

For two, there are prima facie compelling arguments that consensus—when reached using a robust process and featuring sufficient social diversity—should be seen as the gold standard for scientific credibility (Longino 1990; Solomon 2007; Miller 2013; Beatty 2017; Oreskes 2019). Notably, the social practices that make it so can plausibly function without the knowing cooperation of individual scientists—again, in the “invisible-hand” style. After all, it is in large part the competition and institutionalized skepticism of scientists that makes the attainment of consensus so challenging and thus so revealing or significant when achieved.

So goes the story, in any case (we will not rehearse the details further here). Let’s suppose it’s true. We have argued elsewhere (Slater et al. forthcoming) that, despite some empirical evidence to the contrary,[3] the epistemic significance of a certain robust sort of consensus does not carry over straightforwardly to providing actionable advice about how to leverage consensus for more successful science communication. This is true for a variety of reasons, we think; but an especially salient one for our present purposes is that much of the lay public apparently lacks the background knowledge about science as a social enterprise required to appreciate the epistemic significance of scientific consensus.[4]

Even setting this problem aside, it is not clear that trust at the individual level can be set aside entirely. Consensus is rarely a matter of direct inspection for members of the lay public. It is usually appreciated as a matter of testimony from a trusted individual or organization (typically functioning, testimonially, as an individual). Consider how you became aware of the existence of a scientific consensus on a given topic. If it was about climate change, perhaps it was a statement from the AAAS or the National Academies; or perhaps it was by reading the work of Oreskes (2004). Would Oreskes have accepted in 2004 what she writes in 2019: that “[w]e should be skeptical of any single paper in science [or, specifically, in Science]” (233)? Perhaps a tentative reading of “skeptical” is called for here. In any case, this pointed question is meant to foreground a certain tension between, on the one hand, seeing (robust, meaningful) scientific consensus as especially significant and, on the other, granting that there often is something special at the individual level of scientists that should command our epistemic respect. After all, it is not only from the social structures and norms of the scientific community that consensus derives its epistemic significance; it is also from the fact that the consensus is “comprised of” individual experts who are experts in part because of their tools, training, and (hopefully) epistemically scrupulous behavior and intellectual virtues (Baehr 2011; Pennock 2019). It is part of the function, we might even say, of the norms and social structures of science to help maintain the epistemic virtue of individuals (as Aristotle’s polis helps its citizens achieve moral virtue).

Part of the problem, of course, is that it is difficult for the lay public to assess whether any individual scientist is worthy of trust. There is a gap between science and the public that does not seem to be effectively bridged by technical, professional research articles or by the journalists tasked with translating this research for their lay audiences.

3. Individual scientists’ role in building trust

As scholars have gradually shed the Deficit Model of science communication (on which, roughly speaking, effective science communication involves merely addressing a deficit of public knowledge), investigations of the public’s trust of science have focused on the “supply side” of science communication: How can scientists be better communicators? How can they earn the respect of lay communities (Fiske 2012; Fiske and Dupree 2014)? How might academic institutions better incentivize scientists to prioritize public outreach and communication (Ritchie 2020)? Are there better ways of engaging the public in the scientific process to build trust (Wynne 2006; Guston 2014)?

In pursuing such questions, it is crucial to take into account features of the communication environment that make science communication particularly challenging. Faced with the deceptive strategies and disingenuous arguments of nonscientific “merchants of doubt,” scientists’ moral high ground can represent a strategic vulnerability: sometimes the messages aren’t simple; sometimes scientists’ knowledge is incomplete or provisional in some areas that are easily conflated with the main issue at hand. A commitment to conveying the unvarnished truth in all its nuance automatically puts scientists at a disadvantage. And unlike the paid shills often fronting denial campaigns, scientists just aren’t trained to be good communicators. The deck is stacked against them from the start.

This dynamic was brilliantly (if painfully) illustrated in Robert Kenner’s (2014) documentary adaptation of Oreskes and Conway’s (2010a) Merchants of Doubt, which introduces the influential NASA climate scientist James Hansen using interview footage salvaged from the cutting-room floor: he awkwardly stumbles over his words, asks for restarts, and generally seems deeply uncomfortable in the setting, muttering at one point, “Frankly, I’d rather be doing my research than being interviewed for TV!”

It is in this context that we sometimes hear gentle (and not so gentle) chiding that scientists need to do better at communicating their findings. “Don’t be such a scientist!” is the advice of biologist-turned-filmmaker-turned-science-communication-proselytizer Randy Olson in his (2018) book of the same title. For Olson, effective communication is all about “storytelling” (cf. Besley and Tanner 2011; Dahlstrom 2014). A scientific paper recapitulates “the hero’s journey” (Olson 2018, 15). Seeing this, and being able to convey the narrative structure of scientific discovery, is, on his view, what effective science communication is all about. Echoes of this suggestion are evident in an op-ed in Nature by Oreskes and Conway, who argue:

Scientists have much to learn about making their messages clearer. Honesty and objectivity are cardinal values in science, which lead scientists to be admirably frank about the ambiguities and uncertainties in their enterprise. But these values also frequently lead scientists to begin with caveats—outlining what they don’t know before proceeding to what they do—a classic example of what journalists call “burying the lead.” (2010b, 687)

But here too, we cannot lose sight of the influence of the epistemic environment on even the well-coached science communicator. As Kathleen Hall Jamieson (2018) has documented, many recent media narratives about science have promoted a “science in crisis” frame. Whether the crisis is replication failures, questionable research practices, or outright scientific fraud, this narrative would presumably serve to undermine any well-spoken scientific storyteller.[5]

Concerns about systemic problems in (certain branches of) science notwithstanding, there is something compelling about the thought that if scientists could communicate more clearly about their passions, come across as “normal” human beings (Rahm 1997), and engender curiosity—even wonder—about what they study and perhaps even the scientific enterprise, we could make substantial strides in breaking down destructive images of science prevalent in the lay public. Indeed, there seems to be some empirical support for the thesis that, unlike (a certain dimension of) scientific literacy, scientific curiosity does not engender the same political polarization we see in the context of climate change (Kahan et al. 2017).

In the next section, however, we raise a concern about a potentially damaging side-effect of this kind of outreach.

4. A collective action problem

Consider a plausible scene: Let’s suppose that you are a well-meaning, epistemically responsible scientist. You take seriously your potential impact as a communicator to the public. You’ve read Olson, Jamieson, and Oreskes and are persuaded that you ought to be a better scientific storyteller; you work at it and discover a certain talent for breaking down complex ideas for the lay public. Journalists love you; you have a good relationship with your university’s press office and have the emails of several local and national reporters. You are gratified (and, if you are honest, your ego is enhanced) to see your work gain some uptake in the public sphere: “This is making a difference!” you think. Not only are you discharging your duties as an ambassador for science, but you’re helping your career as well. “Finally, the world will know the health benefits of red wine.”

This leads us to our collective action problem. If we suppose that your competitors are doing the same thing, then however eloquently this news is reaching the public, the overall impression will likely be aporia: “These people say red wine is good, but those people say it’s bad. Which is it? It seems that scientists can’t make up their minds! How do we know whom to trust?”[6] This is, of course, the very dynamic that climate denialists have been exploiting for decades: create doubt using the illusion of dissent through Potemkin science (Slater et al. 2020). Science involves plenty of dissent. Incentivizing individual scientists to get out in front of the public with their work—their hero’s journey—essentially ensures that scientific dissent will be overwhelmingly evident.

It shouldn’t be tremendously surprising to learn that in news reporting about science in the so-called prestige press, accounts of individual accomplishments make up the majority of the coverage.[7] Articles concerning the large-scale social processes for vetting or debating such accomplishments, or documenting how recent work fills in gaps in our understanding (and where gaps yet remain), are comparatively rare. Yet those are precisely the sorts of articles that might help to fill out the picture of science as a social enterprise (Slater et al. 2019) needed for the public to appreciate the epistemic significance of consensus or to understand why dissent in science shouldn’t be taken as a sign of ignorance or incompetence.

Because science journalism is one of the main bridges between science and the public, this represents, at best, a missed opportunity. Moreover, Goldenberg and McCron’s (2017) study of media reporting on a particular finding—reporting that tended toward misleading or sensationalistic interpretations (or outright errors)—confirms what many of us see every day: there is considerable room for improvement in science journalism. Our point is that even without inaccurate or misleading reporting, a bias toward reporting on science as an individualistic endeavor may have a distorting effect on the public’s trust of science. While getting people excited about science—the scientific hero’s journey/quest—might pay some dividends, the individualistic bias comes with certain risks as well.

We see these risks as falling into two salient categories (there may well be others). First, there are what we might call Reversal/Conflict Risks. A common way of contextualizing a news story about science is to explain how it is new—that is, how it departs from previous work. This framing thus tends to make salient the fact that scientists are on different pages about many issues, as our earlier vignette illustrates. Recent studies have shown that such “reversals” have a corrosive effect on trust, even on issues unrelated to the subject of the reversal in question (Nabi et al. 2018). Oreskes is relatively quick to set this phenomenon aside as a pathology concentrated in nutrition science (2019, 67). But the problem is more systematic than this, as the early stages of the COVID-19 pandemic—when the public was unusually tuned in to science at the cutting edge—amply illustrate. If journalists or scientists had exposed the public to the complicated, ever-changing process of science over the past few decades, perhaps the lay understanding of science would have been sufficient to treat the seemingly contradictory information that came with the emergence of COVID-19 as de rigueur for the leading edge of science.

A second, more subtle (and admittedly more speculative) risk is that by focusing on stories that cause scientists to exclaim “gee-whiz!” (Angler 2017, 3)—“Wow! Gravitational waves!”—scientists may confirm the suspicion that they are not working on problems that everyday people care about. If we think of epistemic trust of science as featuring an affective dimension (Jones 1996), this sort of perception and its relevance to distrust of science seems worth exploring.

Let us summarize. It is plausible that (well-deserved) public trust of science is a collective good—not only for scientists but also for well-functioning societies. In the context of widespread mistrust of science, it is often suggested that (individual) scientists need to be better communicators for a lay public audience. The natural thing for them to communicate is what they know: The cutting-edge science on which they are working. We can expect further that, even at its best, the news media will pick up on (and perhaps accentuate) the individualistic and dramatic aspects of this science. When many scientists do this, the result is apt to corrode rather than promote public trust of science. We see this as a variation of a classic “tragedy of the commons.” By doing what seems to be the right thing—either for themselves or for the scientific community at large—scientists working to promote interest in their scientific work may be undermining public trust of science.

What are the alternatives? While we can only provide some brief parting thoughts in this context, if one accepts the premise that the public’s grasp of the epistemic significance of certain robust forms of scientific consensus is the appropriate locus for public trust of science, then it would seem plausible that working to promote an understanding of the scientific enterprise that lends such consensus its epistemic weight should be at least one focus of science communication. Perhaps this means advocating for or communicating about the scientific process rather than—or in addition to—promoting one’s own science. We see such practical questions as open and urgent.

There are both prudential and moral motivations for a scientist to shift their communication practices away from the exclusive promotion of their own science. The moral case stems most clearly from seeing (appropriate, nonscientistic) levels of public trust of science as a public good. If scientists have a duty to promote (or at least not harm) the public good, then (if our argument here holds up) scientists have a duty to shift their communication practices to protect that public good. A more subtle duty might be seen as extending from considerations of justice. Heidi Grasswick (2018) argues that a lack of access to the tools necessary to understand and appreciate science’s trustworthiness (or lack thereof) constitutes an “epistemic trust injustice,” harming learners and their epistemic agency.

The prudential case for a shift like the one we are envisioning can take many forms. One is general and obvious: scientists live in the world too; if an inappropriate distrust of science leads to poor outcomes for the world, they may suffer those poor outcomes along with the rest of us. More directly, scientists’ own work may be compromised by public distrust—for example, through a lack of public financial support or even (as we have seen during the COVID-19 pandemic) open hostility and threats toward scientists. We imagine that there might also be a sort of psychological trauma associated with the Cassandra-esque phenomenon of issuing warnings that are never heeded.

Of course, much more work will be needed to address the question of how to interpret and justify the public trust of science in general. This article is premised on seeing this trust in some form as a collective good to be promoted. Seen as such, the first step in addressing the collective action problem involved in promoting this good (or merely avoiding despoiling the “epistemic commons”) is recognizing it as a collective action problem.

Acknowledgments

For discussion of earlier versions of this paper that led to improvements, we would like to thank our audience and cosymposiasts at the Why Trust Science? symposium at the 2020/21 PSA. We’re also grateful to two reviewers for Philosophy of Science and especially to Angela Potochnik. Some early research related to this paper was supported by an NSF grant (SES-1734616; Slater, PI).

Conflict of interest

The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

[1] From Bernard Mandeville’s (1705) The Grumbling Hive; the revealing, oft-quoted lines are: “Thus every Part was full of Vice; Yet the whole Mass a Paradise.” For further discussion, see Peters (2021).

[2] Let us call this group “the (American) lay public” (understanding that there are multiple such publics that might command our attention).

[3] Empirical evidence for the efficacy of consensus messaging strategies has been presented by Lewandowsky et al. (2013), van der Linden et al. (2015), and others; for some critical perspectives, see Kahan (2017) and Chinn and Sol Hart (2021).

[4] This is not necessarily to suggest a “deficit model” approach to remediating this issue or to place the blame on the ignorant public, which, as Goldenberg (2016, 564) points out, often serves to absolve the scientific establishment from listening to the public’s concerns (e.g., those of “anxious parents” concerning vaccines).

[5] Of course, as she documents, the media spin toward crisis is probably overblown. After all, it often ignores that “those whose work is prominently cited to certify that science is broken … are spearheading efforts to solve identified problems [and thus] their work is evidence of the resilience of science” (4).

[6] Alternatively, it could be that we are motivated to select as credible just those articles that cohere with our preexisting preferences. (Don’t the articles about the health benefits of wine seem so much more compelling?)

[7] We document this empirically for a recent subset of leading national newspapers in Slater et al. (2021).

References

Angler, Martin W. 2017. Science Journalism: An Introduction. London and New York: Routledge.
Baehr, Jason. 2011. The Inquiring Mind: On Intellectual Virtues and Virtue Epistemology. Oxford: Oxford University Press.
Beatty, John. 2017. “Consensus: Sometimes It Doesn’t Add Up.” In Landscapes of Collectivity, edited by Snait Gissis, Ehud Lamm, and Ayelet Shavit, 179–98. Cambridge, MA: MIT Press.
Besley, John C., and Andrea H. Tanner. 2011. “What Science Communication Scholars Think About Training Scientists to Communicate.” Science Communication 33 (2):239–63.
Brulle, Robert J. 2014. “Institutionalizing Delay: Foundation Funding and the Creation of US Climate Change Counter-Movement Organizations.” Climatic Change 122 (4):681–94.
Chinn, Sedona, and P. Sol Hart. 2021. “Effects of Consensus Messages and Political Ideology on Climate Change Attitudes: Inconsistent Findings and the Effect of a Pretest.” Climatic Change 167 (3–4):47.
Dahlstrom, Michael F. 2014. “Using Narratives and Storytelling to Communicate Science with Nonexpert Audiences.” Proceedings of the National Academy of Sciences 111 (Supplement 4):13614–20.
Dunlap, Riley E., and Aaron M. McCright. 2011. “Organized Climate Change Denial.” In The Oxford Handbook of Climate Change and Society, edited by John S. Dryzek, Richard B. Norgaard, and David Schlosberg, 144–60. Oxford: Oxford University Press.
Fiske, S. T. 2012. “Managing Ambivalent Prejudices: Smart-but-Cold and Warm-but-Dumb Stereotypes.” The ANNALS of the American Academy of Political and Social Science 639 (1):33–48.
Fiske, S. T., and C. Dupree. 2014. “Gaining Trust as Well as Respect in Communicating to Motivated Audiences about Science Topics.” Proceedings of the National Academy of Sciences 111 (Supplement 4):13593–97.
Goldenberg, Maya J. 2016. “Public Misunderstanding of Science? Reframing the Problem of Vaccine Hesitancy.” Perspectives on Science 24 (5):552–81.
Goldenberg, Maya J., and Christopher McCron. 2017. “‘The Science Is Clear!’: Media Uptake of Health Research into Vaccine Hesitancy.” In Knowing and Acting in Medicine, edited by Robyn Bluhm, 113–32. London: Rowman & Littlefield International.
Grasswick, Heidi. 2018. “Understanding Epistemic Trust Injustices and Their Harms.” Royal Institute of Philosophy Supplement 84:69–91.
Guston, David H. 2014. “Building the Capacity for Public Engagement with Science in the United States.” Public Understanding of Science 23 (1):53–59.
Jamieson, Kathleen Hall. 2018. “Crisis or Self-Correction: Rethinking Media Narratives about the Well-Being of Science.” Proceedings of the National Academy of Sciences 115 (11):2620–27.
Jones, Karen. 1996. “Trust as an Affective Attitude.” Ethics 107 (1):4–25.
Kahan, Dan. 2017. “The ‘Gateway Belief’ Illusion: Reanalyzing the Results of a Scientific-Consensus Messaging Study.” Journal of Science Communication 16 (5):A03.
Kahan, Dan, Hank Jenkins-Smith, and Donald Braman. 2011. “Cultural Cognition of Scientific Consensus.” Journal of Risk Research 14 (2):147–74.
Kahan, Dan, Asheley Landrum, Katie Carpenter, Laura Helft, and Kathleen Hall Jamieson. 2017. “Science Curiosity and Political Information Processing.” Political Psychology 38 (S1):179–99.
Kahan, Dan, and Asheley R. Landrum. 2017. “A Tale of Two Vaccines—and Their Science Communication Environments.” In The Oxford Handbook of the Science of Science Communication, 165–72. New York: Oxford University Press.
Kenner, Robert, dir. 2014. Merchants of Doubt. Mongrel Media / Sony Pictures Classics.
Lewandowsky, Stephan, Gilles E. Gignac, and Samuel Vaughan. 2013. “The Pivotal Role of Perceived Scientific Consensus in Acceptance of Science.” Nature Climate Change 3 (4):399–404.
Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
Miller, Boaz. 2013. “When Is Consensus Knowledge Based? Distinguishing Shared Knowledge from Mere Agreement.” Synthese 190 (7):1293–1316.
Morton, Adam. 2014. “Shared Knowledge from Individual Vice: The Role of Unworthy Epistemic Emotions.” Philosophical Inquiries 2 (1):163–72.
Nabi, Robin, Abel Gustafson, and Risa Jensen. 2018. “Effects of Scanning Health News Headlines on Trust in Science: An Emotional Framing Perspective.” Presented at the 68th Annual Convention of the International Communication Association, Prague, Czech Republic, May 27.
Olson, Randy. 2018. Don’t Be Such a Scientist. 2nd ed. Washington, DC: Island Press.
Oreskes, Naomi. 2004. “The Scientific Consensus on Climate Change.” Science 306 (5702):1686.
Oreskes, Naomi. 2019. Why Trust Science? Princeton: Princeton University Press.
Oreskes, Naomi, and Erik M. Conway. 2010a. Merchants of Doubt. New York: Bloomsbury Press.
Oreskes, Naomi, and Erik M. Conway. 2010b. “Defeating the Merchants of Doubt.” Nature 465 (7299):686–87.
Pennock, Robert T. 2019. An Instinct for Truth: Curiosity and the Moral Character of Science. Cambridge, MA: MIT Press.
Peters, Uwe. 2021. “Illegitimate Values, Confirmation Bias, and Mandevillian Cognition in Science.” The British Journal for the Philosophy of Science 72 (4):1061–81.
Rahm, Jrène. 1997. “Probing Stereotypes through Students’ Drawings of Scientists.” American Journal of Physics 65 (8):774.
Ritchie, Stuart. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. New York: Metropolitan Books.
Slater, Matthew H., Joanna K. Huxster, and Julia E. Bresticker. 2019. “Understanding and Trusting Science.” Journal for General Philosophy of Science 50:247–61.
Slater, Matthew H., Joanna K. Huxster, Julia E. Bresticker, and Victor LoPiccolo. 2020. “Denialism as Applied Skepticism: Philosophical and Empirical Considerations.” Erkenntnis 85:871–90.
Slater, Matthew H., Joanna K. Huxster, and Emily Scholfield. Forthcoming. “Public Conceptions of Scientific Consensus.” Erkenntnis. https://doi.org/10.1007/s10670-022-00569-z
Slater, Matthew H., Emily R. Scholfield, and J. Conor Moore. 2021. “Reporting on Science as an Ongoing Process (or Not).” Frontiers in Communication 5:535474.
Solomon, Miriam. 2007. “The Social Epistemology of NIH Consensus Conferences.” In Establishing Medical Reality, edited by Harold Kincaid and Jennifer McKitrick, 167–77. Dordrecht: Springer.
van der Linden, Sander L., Anthony A. Leiserowitz, Geoffrey D. Feinberg, and Edward W. Maibach. 2015. “The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence.” PLOS ONE 10 (2):e0118489.
Wynne, Brian. 2006. “Public Engagement as a Means of Restoring Public Trust in Science—Hitting the Notes, but Missing the Music?” Public Health Genomics 9 (3):211–20.