
When Is Scientific Dissent Epistemically Inappropriate?

Published online by Cambridge University Press:  01 January 2022


Abstract

Normatively inappropriate scientific dissent prevents warranted closure of scientific controversies and confuses the public about the state of policy-relevant science, such as anthropogenic climate change. Against recent criticism by de Melo-Martín and Intemann of the viability of any conception of normatively inappropriate dissent, I identify three conditions for normatively inappropriate dissent: its generation process is politically illegitimate, it imposes an unjust distribution of inductive risks, and it adopts evidential thresholds outside an accepted range. I supplement these conditions with an inference-to-the-best-explanation account of knowledge-based consensus and dissent to allow policy makers to reliably identify unreliable scientific dissent.

Type: Social Epistemology and Science Policy

Copyright 2021 by the Philosophy of Science Association. All rights reserved.

1. Introduction

Scientific dissent plays an indispensable role in the growth of scientific knowledge. From Copernicus’s heliocentric planetary model, through Marshall and Warren’s bacterial theory of peptic ulcers, to Shechtman’s theory of quasi-periodic crystals, a marginal dissenting view may eventually prevail and become the new consensus. Even when they do not eventually prevail, dissenting views may expose weaknesses in accepted theories and help researchers strengthen them.

Some dissent, however, such as the prolonged dissent about the harmful effects of smoking, is epistemically harmful. According to de Melo-Martín and Intemann, normatively inappropriate dissent (NID) is dissent that “fails to yield any of the epistemic benefits that make even false dissent valuable” (2018, 16). (The term “normative” here refers only to epistemic normativity, rather than moral or prudential.) NID hinders the growth of scientific knowledge by preventing warranted closure of scientific controversies and by leading community research and argumentation efforts astray in unfruitful directions. NID also confuses the public and decision makers about policy-relevant science, such as the theory of anthropogenic climate change.Footnote 1

According to de Melo-Martín and Intemann, a philosophically satisfying characterization of NID “must provide predictive criteria that can reliably identify NID—that is, it must be able to identify NID as such when the dissent in question is in fact normatively inappropriate and be able to exclude scientific dissent that is actually legitimate. … A primary goal of identifying NID is that doing so can allow us to put strategies in place to prevent or address the adverse consequences NID can have” (2018, 8–9). They argue that there is currently no satisfying characterization of NID. In particular, so they argue, Biddle and Leuschner’s (2015) inductive-risk account of NID is unsuccessful. In this article, I develop an account of NID that captures the rationale behind Biddle and Leuschner’s account but overcomes de Melo-Martín and Intemann’s criticism.Footnote 2

According to my account, a dissent is normatively inappropriate if any of these three conditions obtain: (1) its generation process is politically illegitimate; (2) it imposes an unjust distribution of inductive risks; (3) it adopts evidential thresholds outside an accepted range. I further argue that supplementing these conditions with an inference-to-the-best-explanation account of knowledge-based consensus and dissent allows policy makers to reliably identify when scientific dissent should not be relied on for making policy decisions.

2. Biddle and Leuschner’s Inductive-Risk Account of NID

Biddle and Leuschner (2015) provide an inductive-risk account for distinguishing epistemically beneficial dissent from NID. Inductive risk is the risk that stems from making a wrong epistemic judgment, such as accepting a false hypothesis or rejecting a true hypothesis (Douglas 2009, chap. 5). Drawing on Wilholt (2009), who characterizes conventional scientific epistemic standards (e.g., using a critical p-value of 5%) as reflecting conventional trade-offs between inductive risks, Biddle and Leuschner formulate four conditions jointly sufficient for a dissent over a hypothesis H to be considered NID:

Inductive Risk: The nonepistemic consequences of wrongly rejecting H are likely to be severe.

Standards: The dissenting research that constitutes the objection violates established conventional standards.

Producer Risks: The dissenting research involves intolerance for industry-producer risks at the expense of public risks.

Different Parties: Producer risks and public risks fall largely upon different parties.

A paradigmatic case of NID that meets Biddle and Leuschner’s criteria is the long-lasting dissent, from the mid-1950s to the mid-2000s, denying a causal connection between smoking and lung cancer and the harms of passive smoking. This dissent was based on contrarian research and advocacy heavily funded by tobacco companies (Oreskes and Conway 2010). The consequences of this prolonged dissent were illness and death to smokers and people in their close vicinity (conforming to Inductive Risk). It relied on dubious scientific standards (conforming to Standards). It brought large profits to tobacco companies at the expense of public health (conforming to Producer Risks). And the public, rather than tobacco companies, carried most of the burden of these expenses (conforming to Different Parties).

3. The Political-Legitimacy and Fairness Conditions for NID

De Melo-Martín and Intemann make a twofold criticism of Biddle and Leuschner’s inductive-risk account of NID. First, they argue that the conditions Producer Risks and Different Parties, which address the distribution of inductive risks, are inadequate because some clear cases of NID fail to meet them. Neither the dissent that denies that the HIV virus causes AIDS nor the dissent that questions the safety of childhood vaccines involves intolerance for producer risks at the expense of public risks. In these cases, the dissent goes against the financial interests of drug companies that produce vaccines and AIDS treatments (de Melo-Martín and Intemann 2018, 66). Even the dissent that denies anthropogenic climate change, which, so Biddle and Leuschner argue, meets their criteria, does not unambiguously satisfy Producer Risks. While it goes against the interests of fossil fuel companies, it aligns with the interests of alternative energy companies (70). Rajendra Pachauri, the chair of the Intergovernmental Panel on Climate Change from 2002 to 2015, was the director of such a company (37). Strictly speaking, these examples do not refute Biddle and Leuschner’s account, because it specifies sufficient rather than necessary conditions for NID. But its failure to capture clear cases of NID, including one that Biddle and Leuschner take their account to capture, significantly limits its usefulness.

Indeed, whether a dissent is intolerant of producer risks depends on whether the producers produce a problem or a solution. While the conditions Producer Risks and Different Parties arguably fail, the rationale behind them is still sound. They are meant to prevent an unjust, unfair, or illegitimate weighing or distribution of the inductive risks that would stem from accepting the dissent view. I therefore suggest instead the following two conditions; a dissent that fails to meet either of them is an NID:

Political Legitimacy: The dissent view has been generated and adopted as a basis for public action—if it has been adopted—by a politically legitimate knowledge-producing and stabilizing process.

Fairness: The inductive risks that would stem from accepting or adopting the dissent, and the respective harms and benefits that they entail, would be fairly and justly distributed among relevant members of or groups in society.

These two conditions ensure that both the process and the outcome of trading off inductive risks against each other are legitimate. With respect to Political Legitimacy, the process in question is not only the internal scientific process of research and validation but also the external process of considering certain claims as a basis for public action. Empirical STS research shows that the stabilization and acceptance or rejection of knowledge claims in policy contexts depend not only on their epistemic validity, narrowly construed, but also on the perceived political legitimacy of the process of their consideration. Which processes are regarded as politically legitimate varies greatly from one political culture to another (Jasanoff 2012). The Political Legitimacy condition turns this empirical insight into a normative requirement. It would impose restrictions on the process, such as a prohibition on massive lobbying and a requirement of adequate public and stakeholder representation (Brown 2009; Rolin 2009).

The Fairness condition ensures that the outcome of the process is just, for example, that disempowered populations would not unjustly suffer the consequences of an error in the dissent view. Unlike Biddle and Leuschner, I do not provide an account of which processes are politically legitimate and which distributions of risk are just. The answer to these questions greatly depends on the particularities of the case and the relevant stakeholders. In any case, what distribution of inductive risks is just, as opposed to what the inductive risks are, is neither a scientific question nor a question in philosophy of science.Footnote 3 Rather, it is a public ethics question that can be fruitfully debated by ethicists and political philosophers and resolved in the usual ways political issues are resolved.

It might be objected that the fact that a dissent view unjustly aligns, for example, with the interests of big corporations or unjustly imposes unfair risks on vulnerable groups does not necessarily mean that the view is false. Hence, pursuing it might have epistemic benefits, and a dissent over it is not NID. A large body of philosophical literature today, however, argues that scientific knowledge is not value free (Douglas 2000, 2009; Intemann and de Melo-Martín 2010; Elliott 2017; Hicks, Magnus, and Wright 2020). This claim pertains to both applied and basic science. Examples of inductive risks involved in basic science include damaging a researcher’s or a journal’s reputation, exhausting a pool of reviewers because of overchecking, or losing a priority race (Miller 2014b; Magnus 2018).

Taking seriously the value-ladenness of scientific knowledge means that scientific knowledge claims “reflect the various trade-offs between values that were made in the process of inquiry leading to them,” and the appeal to social values “is necessary for achieving scientific objectivity because only it offers a non-arbitrary, principled, and relevant way to decide the various dilemmas that arise during inquiry and influence its outcomes” (Miller 2014b, 260). Put differently, the accepted truth value of scientific knowledge claims, including dissent claims, is entangled with the value judgments that were made in the process of research that produced them. If these value judgments are flawed, so are the knowledge claims.

It might be further objected that imposing unjust risks is not logically tied to an epistemically deficient research program. For example, a James Bond-style evil genius may threaten to destroy the Northern Hemisphere if the bacterial theory of disease is accepted. Accepting the theory in this case would entail unjust risk for the population of the Northern Hemisphere, although the theory is epistemically sound. Against this, I argue that the risk in this case does not genuinely stem from the theory (we can replace the theory in this example with any other theory), and it would be hard to find a case like this in which the risk is a genuine consequence of accepting the theory in question. While such objections can be devastating for some arguments in analytic epistemology, they carry little force in the current, more realistic context.

4. The Workable-Evidential-Thresholds Condition for NID

De Melo-Martín and Intemann’s second line of criticism against Biddle and Leuschner’s account of NID focuses on the condition Standards. I agree that Standards is problematic, but for reasons different from de Melo-Martín and Intemann’s. De Melo-Martín and Intemann understand the term “conventional standards” as referring to accepted methodological rules. They argue that breaching conventional standards cannot always be used for identifying NID because, sometimes, the disagreement between the dissenters and consensus supporters is over whether a methodological rule has been used correctly or breached. For example, where one group sees a signal, another group sees noise. They may also disagree about the relative weight that should be assigned to different accepted epistemic values (de Melo-Martín and Intemann 2018, 68–69).

Recall, however, that Biddle and Leuschner employ Wilholt’s account of shared standards. Wilholt (2009, 98) characterizes shared standards as conventions that impose explicit or implicit constraints on acceptable error probabilities within a research community. They are solutions to a social-epistemic problem of coordination in a community: they allow individual researchers to develop a reliable sense of the dependability of certain kinds of scientific outcomes based only on their knowledge of the procedures rather than of the person who conducted the studies. Namely, provided that a scientist has adequately followed the relevant conventions, other scientists can reliably estimate the dependability of her reported outcomes without knowing her personally.

Thus, according to Wilholt (2009, 99), implicit or explicit infringement of shared standards occurs when values, such as personal, economic, or ideological investments, make scientists lower or raise the evidential thresholds in a way that increases the likelihood of arriving at a preferred result and violates their community’s shared understanding of these thresholds. To the extent that such evidential thresholds are fixed, Wilholt’s characterization of what constitutes an infringement of shared standards is immune to de Melo-Martín and Intemann’s criticism.

In my view, the problem with relying on Wilholt’s account of what constitutes an infringement of shared standards for the sake of identifying NID lies elsewhere. As Wilholt notes, social values participate in setting the conventional evidential standards. Different social values in different scientific contexts may participate in raising or lowering conventional threshold values. For example, in significance testing, there is an inherent mathematical trade-off between minimizing false positives and false negatives. Values influence the balance between false positives and false negatives. The existing scientific standards, which are manifested inter alia in the widespread choice of the 5% significance level, are conservative in that they regard false positives as more serious errors than false negatives (Wilholt 2009, 99).

The problem with relying on such conventional standards for identifying NID is that the implicit or explicit trade-offs between inductive risks that the accepted conventional standards reflect may not be appropriate in the context of normatively appraising the dissent. In particular, the trade-offs they make between inductive risks may violate the Fairness condition. For example, when assessing whether a substance is harmful, the conventional 5% significance level standard makes a default assumption that a substance is safe and puts a burden of proof on those who want to claim otherwise. In some contexts, however, we may legitimately want to relax this burden of proof and demand only a 7% significance level or even make an opposite default assumption that a substance is unsafe unless proven otherwise.
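To make the trade-off concrete, here is a minimal sketch of my own (it is not from the article or any cited work; the sample size, effect size, and normal model are assumptions chosen purely for illustration). It simulates repeated tests of a hypothetical harmful substance and shows that relaxing the significance threshold from 5% to 7% reduces false negatives (missed harms) at the cost of more false positives (spurious harms).

```python
# Illustrative simulation (not from the article): how moving the significance
# threshold trades false negatives against false positives.
# Sample size, effect size, and the normal model are assumptions for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, trials = 50, 0.4, 10_000

def error_rates(alpha):
    false_pos = false_neg = 0
    for _ in range(trials):
        # World A: the substance is harmless (true effect is zero).
        harmless = rng.normal(0.0, 1.0, n)
        # World B: the substance has a real harmful effect.
        harmful = rng.normal(effect, 1.0, n)
        if stats.ttest_1samp(harmless, 0.0).pvalue < alpha:
            false_pos += 1   # harm "detected" where there is none
        if stats.ttest_1samp(harmful, 0.0).pvalue >= alpha:
            false_neg += 1   # real harm missed
    return false_pos / trials, false_neg / trials

for alpha in (0.05, 0.07):
    fp, fn = error_rates(alpha)
    print(f"alpha = {alpha:.2f}: false-positive rate {fp:.3f}, "
          f"false-negative rate {fn:.3f}")
```

On this sketch, the 7% threshold misses fewer real harms but flags more spurious ones; which balance is appropriate is precisely the value-laden choice described above.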

Wilholt downplays the significance of the specific weighing of inductive risks that a certain conventional standard reflects. For him, any conventional standard can work, as long as it is widely accepted. According to Wilholt, “the standards adopted are arbitrary in the sense that there could have been a different solution to the same coordination problem, but once a specific solution is socially adopted, it is in a certain sense binding” (2009, 98). Elsewhere, however, I argue that such standards “are arbitrary only to a certain extent and within a certain range. The conventional critical p-value could have been 6 or 4.6 percent. Such values would also have served as reasonable solutions to the community’s coordination problem. A critical p-value of 45 percent, however, would not have worked, as it would have meant that the community accepted as statistically significant results that were just slightly higher than chance” (Miller 2014a, 75; emphasis added).

That is, conventional evidential thresholds must be within a range that allows scientists to make meaningful knowledge claims. If, for instance, they do not allow scientists to distinguish signal from noise or genuine events from chance events, they are not up to their task. Only within this range may evidential thresholds be adjusted by values to reflect different trade-offs between inductive risks.

In addition, evidential standards should not be set in a way that prevents, or makes very unlikely, the attainment of certain outcomes. For example, in 2001, an industry-employed researcher performed an experiment on corn that was genetically modified to contain a toxin for combating root worms. After 8 days of the experiment, almost 100% of the ladybugs that ate the corn died. In a subsequent experiment, the researcher conveniently stopped recording data after 7 days and determined that the corn was safe for ladybugs (Biddle 2014, 18). Generally speaking, stopping a study as soon as it shows desirable results, or just before it would show undesirable results, is illegitimate, as the sketch following the condition below illustrates. I therefore suggest the following condition that, if a dissent violates it, would make it an NID:

Workable Evidential Thresholds: The dissent adopts evidential thresholds within a range that allows researchers to make meaningful knowledge claims and that is not knowingly likely to rule out a priori the attainment of certain empirical results.
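To see why data-dependent stopping is epistemically corrosive rather than merely unseemly, consider the following minimal simulation of my own (it generalizes the stopping problem rather than reconstructing the corn study; the maximum sample size, number of simulated studies, and normal model are assumptions for illustration). A researcher who checks the data after every new observation and stops as soon as the result looks significant will report an effect far more often than the nominal 5% error rate, even when no effect exists at all.

```python
# Illustrative simulation (not from the article) of optional stopping: testing
# after every new observation and stopping as soon as p < 0.05 inflates the
# false-positive rate well above the nominal 5%, even with no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
max_n, alpha, studies = 100, 0.05, 2_000

false_alarms = 0
for _ in range(studies):
    data = rng.normal(0.0, 1.0, max_n)    # no real effect: true mean is 0
    for n in range(10, max_n + 1):         # peek after each observation from n = 10
        if stats.ttest_1samp(data[:n], 0.0).pvalue < alpha:
            false_alarms += 1              # stop early and report an "effect"
            break

print(f"Nominal false-positive rate: {alpha:.2f}")
print(f"Rate under optional stopping: {false_alarms / studies:.2f}")
```

Stopping just before an undesirable result would appear, as in the corn case, is the mirror image of the same problem: the stopping rule, not the evidence, determines the outcome.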

5. Conflicting Predictions of Risk and Inference to the Best Explanation

So far I have formulated three conditions for identifying NID: Political Legitimacy, Fairness, and Workable Evidential Thresholds. I have argued that these conditions capture the rationale behind Biddle and Leuschner’s inductive-risk account of NID, which is to exclude research that imposes unfair risks or violates accepted epistemic standards, while avoiding de Melo-Martín and Intemann’s criticism of it.

There is, however, a further difficulty with both Biddle and Leuschner’s account of NID and mine, which goes as follows. In the cases discussed so far, the consequences of error in either the consensus theory or the dissent theory were not themselves in dispute between the dissenters and consensus supporters. In some cases, however, they are. For example, according to HIV deniers, AIDS symptoms, which the mainstream medical community attributes to the HIV virus, are actually caused by the very treatments that are given to AIDS patients. Unfortunately, the dissent view was adopted as a basis for public health policy by former South African president Thabo Mbeki. Obviously, HIV deniers and the mainstream medical community would differ on how to apply the Fairness condition. The mainstream medical community would regard not treating AIDS patients as imposing an unfair risk on them, while the dissenters would regard treating them for AIDS as imposing an unfair risk on them. How should we arbitrate such cases?

I suggest that when there are conflicting predictions of risk between dissenters and consensus supporters, for the sake of applying the Fairness condition, we should adopt the view that is more likely to be knowledge based. According to my previously published accounts of knowledge-based consensus (Miller 2013, 2016), a consensus is knowledge based when knowledge is the best explanation thereof. There are four types of consensus, three of which are not knowledge based and one of which is. The first is a noncognitive consensus, which aims at promoting nonepistemic goals. The second is a “vertically lucky” consensus. This is an agreement that happens to be correct but could have easily been wrong. The third is an epistemically unfortunate consensus, in which the parties to the consensus have the bad luck of being systematically or deliberately misled or biased. When a consensus belongs to none of these three types, it is likely to be knowledge based.

When we can eliminate nonepistemic factors, veritic epistemic luck, and epistemic misfortune as the best explanations of a consensus, knowledge remains its best explanation. I identify three conditions for knowledge being the best explanation of a consensus: (1) social calibration, researchers give the same meaning to the same terms and share the same fundamental background assumptions; (2) apparent consilience of evidence, the consensus seems to be built on an array of evidence that is drawn from a variety of techniques and methods; and (3) social diversity, the consensus is shared by men and women, researchers from the private and public sectors, liberals and conservatives, and so on.Footnote 4

In a recent paper (Miller 2019, 234), I explain how my account of knowledge-based consensus can be expanded to deal with dissent as well. If knowledge remains the best explanation of a consensus excluding a dissent, and the dissent is best explained by nonepistemic factors, such as financial interests or personal investment, then we may still legitimately infer that the consensus excluding the dissent is knowledge based.

In the case of AIDS, the medical consensus that the HIV virus causes AIDS arguably satisfies the three conditions for knowledge-based consensus; hence, knowledge is its best explanation. By contrast, the dissent is arguably best explained by the personal investment of the few dissenting scientists, primarily molecular biologist Peter H. Duesberg, in the theory and by the external political support they have managed to gain.Footnote 5 Thus, a correct application of the Fairness condition in this case would be that denying AIDS treatment, rather than administering it, is what exposes AIDS patients to unfair risks.

It might be objected that this suggestion unjustly discriminates against dissenting scientists just because they are dissenting. For example, Copernicus was also a dissenter, but his view eventually prevailed. Against this, I argue that my account gives the right verdict on the Copernicus case (in any case, the Copernicus example does not meet the previous conditions for NID). First, it would be Whiggishly wrong to say that Copernicus was right. A central hypothesis that Copernicus made, namely, that the sun, rather than the earth, was the center of the planetary system, survived, but the rest of his theory, including the hypothesis that planets move in uniform circular motion and the system of epicycles he developed, was proven wrong. Second, as is well known, Copernicus did not have a physical theory of a moving earth; thus, he could not explain why the earth appeared stationary. Last, he could not explain the lack of stellar parallax predicted by his theory. Arguably, he speculated correctly but did not know that the earth revolved around the sun. Knowledge was not the best explanation of his belief. The consensus of his time was arguably also not knowledge based, because it was maintained by the church, which blocked alternative views from being explored. The conclusion that neither the consensus nor the dissent in this case was knowledge based is arguably the right one.

6. Conclusion

I have defended an inductive-risk account of epistemically inappropriate dissent. I agree with de Melo-Martín and Intemann (2018, chap. 6), however, that even if my account is successful, targeting an epistemically inappropriate dissent may not always be advisable. A better strategy may be to target the underlying causes that give rise to an epistemically inappropriate dissent in the first place. Targeting these underlying causes, however, is not always practicable, and my account can still, in some cases, help members of the public and policy makers decide which scientists to trust.

Footnotes

This article was written with the support of the Israel Science Foundation grant 650/18 for the project Skepticism about Testimony (principal investigators: Arnon Keren and Boaz Miller). I thank Kristen Intemann, Inmaculada de Melo-Martín, and David Kovacs for helpful feedback.

1. Consensus does not mean complete unanimity. For a discussion of the minimal level of agreement that needs to obtain in an epistemic community for a consensus to exist, see Miller (2013, 1300–1303) and Dang (2019). Dissent may also come in different degrees and flavors; see Delborne (2016).

2. Two additional accounts of NID may be identified in the literature. Like de Melo-Martín and Intemann, I think they are both inadequate. One account of NID is provided within Longino’s Critical Contextual Empiricism (2002, chap. 6), according to which a dissent is normatively inappropriate when the dissenters fail to take up criticism of it and continue to hold the dissent view regardless. As Fernández Pinto (2014) and de Melo-Martín and Intemann (2018, chap. 4) argue, however, Longino’s account of NID faces special difficulties dealing with manufactured uncertainty. Who is to judge whether dissenters fail to follow the norm of uptake of criticism or whether their concerns are genuine? When may a community stop engaging with them and move on? Defending Longino against this anticipated charge, Borgerson (2011, 445) argues that if we distinguish the level of certainty required for action from that required for knowledge, interested parties will be less motivated to manufacture uncertainty. In response, I have argued elsewhere that Critical Contextual Empiricism should still be able to determine when closure in an epistemic community is warranted despite incessant criticism (Miller 2015, 118–19). In any case, distinguishing between different levels of certainty by inductive risks is in the spirit of the account I develop in this article. According to a second suggestion, defended most recently by Oreskes (2019), dissent is normatively inappropriate if the dissenting scientists act in bad faith and have a conflict of interest, in that the acceptance of the consensus view may financially hurt them or their industry sponsors. As de Melo-Martín and Intemann (2018, chap. 3) argue, however, scientists perform research for many psychological reasons, and it is hard to judge what bad faith is. With respect to conflicts of interest, Oreskes admits that industry-funded science can still be valuable “so long as the norms of critical interrogation are operating and conflicts of interest are forthrightly disclosed and where necessary addressed” (2019, 66–67). But if the criterion for NID is ultimately whether the dissenters follow norms of critical scrutiny, then Oreskes’s account boils down to Longino’s account, which, as argued above, is unsatisfactory.

3. But a related question of what roles for values in science are legitimate and positive is within the domain of philosophy of science. See Fernández Pinto and Hicks (2019) for an account of a legitimate and positive role for values in regulatory science and Brown (2020) for an ambitious account for science in general.

4. As Dellsén (2018) argues, complete agreement with no dissent whatsoever is suspicious, because its best explanation is not knowledge but some form of coercion or groupthink.

5. Space limitations prevent me from presenting the empirical evidence for this claim.

References

Biddle, Justin. 2014. “Can Patents Prohibit Research? On the Social Epistemology of Patenting and Licensing in Science.” Studies in History and Philosophy of Science 45:14–23.
Biddle, Justin, and Leuschner, Anna. 2015. “Climate Skepticism and the Manufacture of Doubt: Can Dissent in Science Be Epistemically Detrimental?” European Journal for Philosophy of Science 5 (3): 261–78.
Borgerson, Kirstin. 2011. “Amending and Defending Critical Contextual Empiricism.” European Journal for Philosophy of Science 1 (3): 435–49.
Brown, Mark B. 2009. Science in Democracy: Expertise, Institutions, and Representation. Cambridge, MA: MIT Press.
Brown, Matthew J. 2020. Science and Moral Imagination: A New Ideal for Values in Science. Pittsburgh: University of Pittsburgh Press.
Dang, Haixin. 2019. “Do Collaborators in Science Need to Agree?” Philosophy of Science 86 (5): 1029–40.
Delborne, Jason A. 2016. “Suppression and Dissent in Science.” In Handbook of Academic Integrity, ed. Bretag, Tracy, 943–56. Singapore: Springer.
Dellsén, Finnur. 2018. “When Expert Disagreement Supports the Consensus.” Australasian Journal of Philosophy 96 (1): 142–56.
de Melo-Martín, Inmaculada, and Intemann, Kristen. 2018. The Fight against Doubt: How to Bridge the Gap between Scientists and the Public. Oxford: Oxford University Press.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–79.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Elliott, Kevin C. 2017. A Tapestry of Values: An Introduction to Values in Science. Oxford: Oxford University Press.
Fernández Pinto, Manuela. 2014. “Philosophy of Science for Globalized Privatization: Uncovering Some Limitations of Critical Contextual Empiricism.” Studies in History and Philosophy of Science 47:10–17.
Fernández Pinto, Manuela, and Hicks, Daniel J. 2019. “Legitimizing Values in Regulatory Science.” Environmental Health Perspectives 127 (3): 035001.
Hicks, Daniel J., Magnus, P. D., and Wright, Jessey. 2020. “Inductive Risk, Science, and Values: A Reply to MacGillivray.” Risk Analysis 40 (4): 667–73.
Intemann, Kristen, and Melo-Martín, Inmaculada de. 2010. “Social Values and Scientific Evidence: The Case of the HPV Vaccines.” Biology and Philosophy 25 (2): 203–13.
Jasanoff, Sheila. 2012. “The Practices of Objectivity in Regulatory Science.” In Social Knowledge in the Making, ed. Camic, Charles, Gross, Neil, and Lamont, Michèle, 307–38. Chicago: University of Chicago Press.
Longino, Helen. 2002. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
Magnus, P. D. 2018. “Science, Values, and the Priority of Evidence.” Logos and Episteme 9 (4): 413–31.
Miller, Boaz. 2013. “When Is Consensus Knowledge Based? Distinguishing Shared Knowledge from Mere Agreement.” Synthese 190 (7): 1293–316.
Miller, Boaz. 2014a. “Catching the WAVE: The Weight-Adjusting Account of Values and Evidence.” Studies in History and Philosophy of Science 47:69–80.
Miller, Boaz. 2014b. “Science, Values, and Pragmatic Encroachment on Knowledge.” European Journal for the Philosophy of Science 4 (2): 253–70.
Miller, Boaz. 2015. “‘Trust Me—I’m a Public Intellectual’: Margaret Atwood’s and David Suzuki’s Social Epistemologies for Climate Science.” In Speaking Power to Truth: Digital Discourse and the Public Intellectual, ed. Keren, Michael and Hawkins, Richard, 113–28. Athabasca: Athabasca University Press.
Miller, Boaz. 2016. “Scientific Consensus and Expert Testimony in Courts: Lessons from the Bendectin Litigation.” Foundations of Science 21 (1): 15–33.
Miller, Boaz. 2019. “The Social Epistemology of Consensus and Dissent.” In The Routledge Companion to Social Epistemology, ed. Henderson, David, Graham, Peter, Fricker, Miranda, and Pedersen, Nikolaj, 228–39. New York: Routledge.
Oreskes, Naomi. 2019. Why Trust Science? Princeton, NJ: Princeton University Press.
Oreskes, Naomi, and Conway, Erik M. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury.
Rolin, Kristina. 2009. “Scientific Knowledge: A Stakeholder Theory.” In The Social Sciences and Democracy, ed. Bouwel, Jeroen van, 62–80. New York: Palgrave-Macmillan.
Wilholt, Torsten. 2009. “Bias and Values in Scientific Research.” Studies in History and Philosophy of Science 40:92–101.