
Using Democratic Values in Science: An Objection and (Partial) Response

Published online by Cambridge University Press: 01 January 2022


Abstract

Many philosophers of science have argued that social and ethical values have a significant role to play in core parts of the scientific process. This naturally suggests the following question: when such value choices need to be made, which or whose values should be used? A common answer to this question turns to democratic values—the values of the public or its representatives. I argue that this imposes a morally significant burden on certain scientists, effectively requiring them to advocate for policy positions they strongly disagree with. I conclude by discussing under what conditions this burden might be justified.

Copyright © The Philosophy of Science Association

1. Values in Science and the Democratic View

By now, most philosophers of science probably agree that there is an important place for so-called contextual (e.g., personal, ethical, social) values in core parts of the scientific process, especially in areas where science is connected to policy making. Values may appropriately play a role in evaluating evidence (Douglas 2009), choosing scientific models (Elliott 2011), structuring quantitative measures (Stiglitz, Sen, and Fitoussi 2010; Reiss 2013, chap. 8; Hausman 2015; Schroeder 2017), or preparing information for presentation to nonexperts (Hardwig 1994; Resnik 2001; Elliott 2006). The natural follow-up question has received less sustained attention: when scientists should make use of values, which (or whose) values should they use?[1]

In some cases, philosophers of science criticize a value choice on substantive ethical grounds (e.g., Hoffmann and Stempsey 2008; Shrader-Frechette 2008; cf. Dupré 2016, 197). This suggests that the values to be used are the objectively correct ones. A second common view gives scientists latitude to choose whatever (reasonable) values they prefer or think best, usually supplemented by a requirement of transparency. This is suggested by many existing codes of scientific ethics, which impose few constraints on scientists in making such choices. Finally, a third view says that scientists ought to use the appropriate democratic values—that is, the values held or endorsed by the public or its representatives—at least when those values are informed and substantively reasonable.[2] The most straightforward argument for this view grounds it in considerations of democracy or political legitimacy. If certain value choices are going to ultimately influence policy, then the public or its representatives have a right to make those choices (Douglas 2005; Kitcher 2011; Intemann 2015; cf. Steele 2012).

A full answer to the question of which values ought to be used in science will, I think, give a role to each of these positions. But I believe that the largest and most extensive place will go to the third, which I will call the democratic view: in many of the most important cases in which values are called for in core parts of the scientific process, scientists should privilege the values endorsed by the public or its representatives.[3] The most obvious concern with this view, and one that has received much attention from its advocates, is that it does not seem practical. It is not feasible to ask citizens or policy makers to weigh in at every point in the scientific process where values are required, and even if we could, nonexperts often will not have the scientific background to fully understand the options before them. Substitutes for actual participation on the part of policy makers or the public, such as asking scientists to predict what the public would choose or to determine what values policy makers would hold upon reflection, seem to place unreasonable epistemic demands on scientists.

Guston (2004), Douglas (2005), Intemann (2015), and others have argued that these problems are not insurmountable by suggesting specific ways that the concerns of policy makers and the public can be brought into the scientific process. And Elliott (2006, 2011) has suggested a more general way we might make progress. The democratic view goes hand in hand with a widely held view of the relationship between science and policy: that the role of a scientist is to promote informed decision making by policy makers.[4] Bioethicists have extensively discussed how health care professionals can promote informed decision making on the part of patients and research subjects, even in cases in which a patient’s values may be uncertain, different research subjects may hold different values, and so forth. Elliott’s hope is that many of these suggestions can be adapted to the scientific case, or at least that a parallel research program could be carried out, informed by the work of bioethicists.

It is, of course, far from established that these proposals will work, but the range of options on the table strikes me as cause for optimism. And even if these solutions do not work in all cases, there is bite to the democratic view, since it could still tell scientists to use democratic values when they can determine those values. Accordingly, in this article I would like to describe a different and, I think, deeper concern with the democratic view, one that has been conspicuously absent from the literature thus far. In requiring scientists to guide certain aspects of their work by democratic values, we will sometimes in effect ask that they support political causes they may personally oppose and bar them from fully advocating for their preferred policy measures. We are, then, depriving scientists of important political rights possessed by the general public. In the remainder of this article, I will spell out this objection more fully and explain why I think it has significant moral force. In the end, I will suggest that although there is reason to think that the objection does not ultimately undermine the democratic view, it nevertheless constitutes a significant cost that accompanies that view, which its proponents need to acknowledge.

2. Two Cases in Which the Democratic View Seems Troublesome

The literature on values in science is vast and diverse, and so it will be useful to have some particular examples in mind. First, consider Douglas’s (2000, 2009) argument that scientists should or must appeal to value judgments when resolving particular sorts of uncertainties that arise during the scientific process. Scientists conducting research into the potential carcinogen dioxin, for example, were faced with liver samples that had tumors that could not clearly be categorized as malignant or benign. In resolving such borderline or ambiguous cases, Douglas argues that scientists should appeal to contextual values when the constitutive norms of science do not dictate any resolution. In this case, health-protective values would lead scientists to classify borderline samples as malignant, while concerns about overregulation of industry would lead scientists to classify those same samples as benign (Douglas 2000).
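To see how much such a classification rule can matter, here is a minimal sketch in Python. The counts are invented for illustration (they are not drawn from the dioxin studies Douglas discusses), but the arithmetic shows how the very same slides yield different apparent tumor rates depending on how borderline cases are resolved:

```python
# Invented counts, for illustration only: the same set of liver samples
# yields different apparent tumor rates depending on how borderline
# cases are resolved.
clearly_malignant = 12
clearly_benign = 80
borderline = 8
total = clearly_malignant + clearly_benign + borderline

# Health-protective resolution: treat borderline samples as malignant.
rate_protective = (clearly_malignant + borderline) / total

# Anti-overregulation resolution: treat borderline samples as benign.
rate_permissive = clearly_malignant / total

print(f"borderline counted as malignant: {rate_protective:.1%}")  # 20.0%
print(f"borderline counted as benign:    {rate_permissive:.1%}")  # 12.0%
```

Nothing in the data themselves selects between the two rules; the gap in apparent potency is entirely a product of the value-laden classification choice.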

Second, consider the many choices that scientists have to make when preparing their results for presentation. How should uncertainty be characterized? (Should 90% or 95% confidence intervals be used?) Which study results should be highlighted? (Which drug side effects should be discussed at length, and which should be included as part of a long list?) How should statistics be summarized? (As means or medians? Should results be broken down by gender or presented only in aggregate?) In making choices like these, scientists frequently must appeal to values—to decide, for example, which pieces of information are important and which are not.[5]
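The same point can be made concrete for summary statistics. Below is a small Python sketch using a made-up, right-skewed sample: every number reported is accurate, yet the choice between mean and median, or between a 90% and a 95% confidence interval, changes the impression a reader takes away:

```python
# A made-up, right-skewed sample: each summary below is accurate, but each
# creates a different impression of the data.
import math
import statistics

data = [1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.4, 3.1, 4.8, 9.5]

mean = statistics.mean(data)      # 2.90, pulled upward by the large values
median = statistics.median(data)  # 1.90, robust to the large values

# Normal-approximation confidence intervals for the mean: the 90% interval
# (z = 1.645) is narrower, and so looks more certain, than the 95% interval
# (z = 1.960).
sem = statistics.stdev(data) / math.sqrt(len(data))
ci90 = (mean - 1.645 * sem, mean + 1.645 * sem)
ci95 = (mean - 1.960 * sem, mean + 1.960 * sem)

print(f"mean = {mean:.2f}, median = {median:.2f}")
print(f"90% CI for the mean: ({ci90[0]:.2f}, {ci90[1]:.2f})")
print(f"95% CI for the mean: ({ci95[0]:.2f}, {ci95[1]:.2f})")
```

Which summary to foreground is exactly the kind of presentation choice described above: it cannot be settled by the data alone.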

It is, I presume, uncontroversial that these value choices—how to resolve uncertainties in the research process and how to present results—can influence policy in foreseeable ways. Douglas, for example, argues that this is the case in the dioxin studies. Classifying borderline samples as malignant will make dioxin appear to be a more potent carcinogen, likely leading policy makers to regulate it more stringently (Douglas 2000, 571). Keohane, Lane, and Oppenheimer (2014) show how a presentation choice made by the Intergovernmental Panel on Climate Change led to poor policy outcomes, which likely could have been avoided by presenting information differently. More generally, we know from a wealth of studies in psychology and behavioral economics that the way information is presented to someone can strongly influence her subsequent choices (Thaler and Sunstein 2008), and there have been several influential commentaries calling for scientists to more carefully “frame” their results (Nisbet and Mooney 2007; Lakoff 2010). So, it seems clear that the value choices made by scientists can predictably affect policy.

If these value choices can influence policy, then in directing scientists to make them in accordance with democratic values—as opposed to the scientists’ personal values—we are asking scientists to characterize policy-relevant material in a way that may promote an outcome they strongly disfavor. For example, suppose the scientists in Douglas’s dioxin study value public health much more than they value keeping industry free from overregulation, but the public and its elected representatives have the opposite view. Further, suppose both views are substantively reasonable, in that they are within the range of policies eligible for adoption through democratic processes. In this case, the democratic view would tell the scientists to categorize borderline samples as benign, since that would better cohere with the public’s values. This could make dioxin appear to have minimal carcinogenic effects, predictably leading to less regulation than would have occurred had the scientists classified borderline samples according to their own health-protective values. Similarly, suppose an environmental economist conducting an impact study of a proposed construction project is herself deeply committed to the preservation of natural spaces. Nevertheless, if the public is strongly committed to economic development, the democratic view would require her to put front and center a detailed breakdown of the economic consequences of construction while describing the intrinsic ecological costs more briefly or in a less prominent place—potentially frustrating her desire for preservation.

Note that the concern here is not simply that scientists are being asked to provide information that will lead to an outcome they disfavor. I take it that any reasonable approach to scientific ethics will require that scientists communicate honestly, even in cases in which that promises to yield policies they do not like. Similarly, I presume that scientists must also be forbidden from presenting information in ways that, although technically accurate, are nevertheless misleading. The problem here is that Douglas’s scientists are being asked to characterize samples in one way (as benign) that could, with equal scientific validity, have been characterized differently (as malignant). And our environmental economist is being asked to present her results in one way (highlighting economic benefits), when an alternate presentation (one highlighting ecological costs) would be equally honest, accurate, objective, transparent, clear, and so forth. In each case, then, we have a collection of underlying data that can be described or characterized in different ways, neither of which appears to be more scientifically valid than the other. The democratic view insists that scientists choose the description grounded in values they do not accept and that seems likely to promote policy outcomes they disfavor. In this respect, the democratic view requires scientists to in effect advocate for, or at least tilt the playing field toward, political views they disagree with.[6]

3. Elliott and the Principle of Helpfulness

This seems to be a significant imposition on scientists and thus a cost of the democratic view. It is therefore surprising that, so far as I can tell, philosophers who have argued for the democratic view have not commented on it. This is most striking in Elliott’s work. Elliott, recall, argues that scientists should aim to promote informed decision making among policy makers, in something like the way physicians should aim to promote informed decision making among patients. Standard accounts in bioethics say that it is the patient’s values that carry the day: in normal cases, the physician’s job is to help a patient make decisions that cohere with her own values. If the scientific case is analogous, then the scientist’s job is to help policy makers make decisions that cohere with their (or with the public’s) values. This, in turn, suggests that scientists should use democratic values when resolving uncertainties, presenting results, and so forth. In other words, Elliott’s proposal seems to imply the democratic view.[7]

The main defense Elliott offers for this view, however, relies on Scanlon’s Principle of Helpfulness: “Suppose I learn, in the course of conversation with a person, that I have a piece of information that would be of great help to her because it would save her a great deal of time and effort in pursuing her life’s project. It would surely be wrong of me to fail (simply out of indifference) to give her this information when there is no compelling reason not to do so” (Scanlon 1998, 224; quoted in Elliott 2011, 139). Elliott (2011, 139) sums up the idea this way: “In situations where one can significantly help another individual by engaging in an action that requires little sacrifice, it is morally unacceptable not to help.” If the democratic view, however, requires characterizing data or presenting information in ways that promote policy choices a scientist strongly opposes, then this principle does not apply. When the prohealth scientist is required to classify ambiguous samples as benign, that does involve a sacrifice. A refusal to do so—which would hinder the proindustry policy maker’s ability to make an informed regulatory decision—would not be done “simply out of indifference.” It would be done out of the scientist’s desire to protect public health. (Similar things, obviously, can be said about the environmental economist asked to highlight the economic aspects of a proposed construction project.)

Scanlon’s Principle of Helpfulness is a quite weak one, applying only in cases in which helping imposes no significant burden on the agent in question. That Elliott uses it to justify his informed decision-making framework, and implicitly the democratic view, suggests that he thinks such a view does not impose significant burdens on scientists. But that is not the case. Even if the democratic view is justified—and, as I have said, I think it is—we need to recognize that it asks a lot of scientists in cases in which their values diverge from those of the relevant political body.

4. Physicians versus Scientists

This, however, brings up an interesting question. If Elliott is right that the scientific case is analogous to the biomedical case, then we might expect informed-consent requirements in medicine to be similarly burdensome. Few bioethicists, though, would have sympathy for a physician who claimed that seeking informed consent constituted a significant ethical burden. (They may have sympathy for the claim that seeking informed consent is burdensome in more mundane ways—e.g., too time-consuming—but those complaints seem very unlike the scientists’.) I think there is an important difference between the cases that will help us see more clearly why the scientist is often burdened in a way that carries moral weight, while the physician normally is not.

We can see this by constructing a scenario that seems to put a physician in a position like the scientist’s. Consider Jane, a doctor who strongly believes that the end of life for terminal patients is greatly enhanced by effective pain management, even if doing so shortens the patient’s life or impairs his consciousness. For this reason, Jane has chosen palliative care as her specialty, making it her life’s work to help dying patients avoid unnecessary pain. One of her patients, John, has continually insisted that he wants to remain as lucid as possible, even if that means agony. As he lies there, in agony, Jane suspects that if she framed the information properly—highlighting a medication’s ability to relieve pain while downplaying its cognitive effects—she might be able to get John to accept it. And accepting the medication, Jane strongly believes, would be much better for John. Nevertheless, standard interpretations of informed consent forbid her from doing so. Knowing that John is especially concerned about lucidity, she is ethically bound to highlight that information when informing him of his options. Unsurprisingly, John declines the pain medication and experiences what Jane regards as an awful death—precisely the kind of thing she went into palliative care to prevent.

Like our prohealth scientist, Jane has been asked to present information in a way that frustrates her deeply held values. But suppose Jane complains to the ethics board at her hospital, arguing that it is burdensome to ask her to highlight to John the effects of pain medication on lucidity because doing so would frustrate her deeply held values. This complaint does not strike me as at all compelling. Why? Because Jane’s values should not hold any sway over John’s medical choices. John has the right to reject pain medication, whatever Jane (or just about anyone else) thinks about it. Put another way, John has no obligation to take Jane’s wishes into consideration when he makes his decision. His decision is ultimately his.

Now, imagine that our prohealth scientist complains to her ethics committee, asserting that it is burdensome to ask her to present her data in a proindustry light, when it could with equal scientific validity be presented in a prohealth light, because doing so would frustrate her deeply held concern for public health. Or imagine that the environmental economist complains about having to foreground the economic benefits of the proposed construction project, since doing so will make it more likely that the project is approved and another natural space bulldozed. If the scientists are citizens of the society in question, then their situation is different from Jane’s. Because they are citizens in a democracy, their views should hold some sway over their government’s policy choices. A government does have an obligation to take its citizens’ views into consideration when making policy decisions. And when the government ultimately acts, it does so on the scientists’ behalf. The decision is, in part, the scientists’.

The scientists, then, are stakeholders, and to a certain extent even decision makers, in the associated policy decisions, in a way that Jane is not a stakeholder in John’s decision. This is true even if Jane cares more about John’s decision than our scientists care about the policy decisions. We can see, then, that the democratic view is not burdensome simply because it directs scientists to promote or advocate for outcomes they disfavor. It is burdensome because it sometimes directs scientists to promote or advocate for disfavored views, on matters that they have a right to speak on, to a body that purports to act on their behalf. This is what gives their burden its moral significance.[8]

5. Justifying the Burdens of the Democratic View

Some scientists have recognized the burdens that even neutrality—let alone the democratic view—would impose on them. Consider, for instance, this statement by Barry and Oelschlaeger (1996, 905): “Conservation biology is inescapably normative. Advocacy for the preservation of biodiversity is part of the scientific practice of conservation biology. If the editorial policy of or the publications in [the journal] Conservation Biology direct the discipline toward an ‘objective, value-free’ approach, then they do not educate and transform society…. To pretend that the acquisition of ‘positive knowledge’ alone will avert mass extinctions is misguided…. Without openly acknowledging such a perspective, conservation biology could become merely a subdiscipline of biology, intellectually and functionally sterile and incapable of averting an anthropogenic mass extinction.”[9] Most conservation biologists enter that field because of a strong commitment to the value of biodiversity and the preservation of nature (Marris 2006). Similar things are surely true of other scientific disciplines. (My experience has been that researchers studying gender-based violence and studying economic inequality, for example, disproportionately hold certain political values.) To the extent that these values diverge from the values of the public and its representatives, the democratic view would require these scientists to continually characterize their results in ways structured by a value system they find unacceptable. (In this respect, I suspect things would be quite different for, say, climate scientists. Although their work is controversial, it nevertheless is founded on values that are widely shared. The potentially catastrophic consequences of climate change are ones that most people care about. Climate change skeptics more often object to the empirical claims made by climate scientists—not to the basic values they hold. This is what makes them climate change skeptics or deniers and not climate change apathists.)

Is it fair, then, to tell a conservation biologist, who perhaps entered the field because of her love for natural spaces and who has spent the bulk of her life collecting information that she hopes can be used to preserve them, that she is nevertheless ethically bound to resolve uncertainties in her research in ways favorable to economic growth? Or that she must present her results in ways that highlight the economic value (as opposed to, say, the private or aesthetic value) of undeveloped land? I do not have a full answer to this question—such an answer would require more empirical information, as well as a fuller discussion of political philosophy—but I think we can see how the argument would go. There is a range of situations in which we impose significant restrictions on speech and advocacy for people in important social positions. The Code of Conduct for US judges, for example, bars judges from publicly endorsing candidates for political office and from making speeches for political organizations.[10] Uniformed US military personnel are not permitted to participate in political fundraising, speak at political events, or display political signs, even on their private vehicles.[11] Other constraints on speech and advocacy seem ethically appropriate for politicians, police officers, lawyers, and others.

So, if there is an important public good served by constraining scientists’ advocacy, it does not in principle seem problematic to do so. Two arguments along these lines seem promising. First, a distinctly political approach might argue that although imposing this burden on scientists does restrict important political rights of speech and advocacy, it is done in order to expand the political rights of others. By requiring scientists to work from the values of the public, the ability of the public to make informed policy choices and to effectively advocate for their own positions is enhanced. Thus, although the democratic view constitutes a loss of political freedom to scientists, that loss is more than balanced by the gain in political freedom to the public as a whole. (A view like this seems generally consistent with an approach to democracy like Brettschneider’s [2007].)

Second, a straightforwardly consequentialist argument could point out the terrible consequences that threaten to follow if the public or policy makers distrust scientific results. One of the primary arguments that have been put forward in favor of informed-consent requirements in bioethics has been that they promote trust on the part of patients. Similarly, Elliott’s informed decision-making approach—which implies the democratic view—seems like a promising way to promote trust in science (Elliott 2011, 133–36; cf. Hardwig 1994; Resnik 2001). If, then, the democratic view proves to be an effective way of promoting public trust in science, which in turn heads off the problems that ensue when policy makers disregard science, that could justify imposing significant burdens on scientists.

Neither of these defenses, of course, is anywhere near complete. But both do strike me as potentially reasonable, and so I do not think the concerns I have discussed in this article should lead proponents of the democratic view to give up that position. That said, it is important to note the form that these defenses take. Neither attempts to show that the burden on scientists is not morally significant (as, perhaps, we might be inclined to say about the complaint of the palliative-care physician). Instead, they each point to compensating benefits—not necessarily enjoyed by the scientists in question—that morally outweigh the scientists’ burden. This means that the democratic view, even if it is justified, comes at a real cost to scientists, which is something its proponents need to acknowledge.

Footnotes

For comments on previous versions of this article, I thank Alex Rajczi, the students in a seminar on science and values at Claremont McKenna College, and especially the attendees at the 2016 Philosophy of Science Association meeting. For discussions on related topics, I thank Gil Hersch, Daniel Steel, and Branwen Williams. This work was supported in part by a research grant from Claremont McKenna College’s Center for Innovation and Entrepreneurship.

1. In some cases, the justification for incorporating values into the scientific process dictates an answer. Feminist critiques of historically androcentric fields, e.g., suggest that nonandrocentric values are needed as a corrective. I set aside such cases in this article.

2. I set aside, then, cases in which the values of a policy maker or the public are unreasonable, in the sense that they lie outside the range of values that ought to be tolerated in a liberal society. In such cases, an advocate of the democratic view may permit or require scientists to reject those unreasonable values (see, e.g., Resnik 2001). Also, in this article I will set aside the important question of what the democratic view ought to say when the values of the public diverge from the values of policy makers. The answer to this question, I think, will depend on one’s theory of political representation.

3. This, of course, is proposed as a principle of professional ethics—not, e.g., a legal requirement.

4. See also Resnik (2001) and Martin and Schinzinger (2010) for theoretical defenses of this idea, which is consonant with the mission statements of many scientific organizations and associations.

5. For discussions, see Hardwig (1994), Resnik (2001), Elliott (2006), Keohane, Lane, and Oppenheimer (2014), and Schroeder (2017).

6. Can we not let scientists advocate for their preferred positions in other ways? We could let scientists present their preferred interpretation separately. But if the democratic view is to have bite, presumably these alternate results will have to be clearly designated as such and offered in a less prominent place (e.g., in an appendix or online supplement). And we should of course permit scientists to advocate for their views outside of their scientific papers/reports. But it seems likely that these (private) statements will carry much less policy weight than their scientific ones.

7. In some work, Elliott appears to suggest that transparency about values may be enough (Elliott and Resnik 2014). That is, he does not seem to place (many) constraints on scientists’ value choices, so long as they are open about those choices. If that is Elliott’s view—and it is not clear to me that it is—it strikes me as in tension with his insistence that scientists promote informed decision making. Surely I can better help you make a decision that coheres with your values by working from your values rather than by working from my own values (even if I am open about what I am doing). Further, even if scientists are open about their value choices, policy makers frequently will not have the technical expertise to be able to reinterpret a scientific study, replacing one set of values (the scientist’s) with another (their own). (If values could so easily be swapped out by nonspecialists, then much of the debate about values in science would be relatively unimportant. Transparency would be sufficient.)

8. To be clear, I mean for citizenship to be a sufficient, although not necessary, condition for being a stakeholder. There are many other ways in which a scientist might be a stakeholder in some policy decision. (When it comes to climate change, e.g., everyone is a stakeholder in US climate policy.)

9. This article was followed by a collection of commentaries, most of which generally supported the authors’ views. Similar proposals seem to crop up frequently among conservation biologists and are generally endorsed by those in the field (Marris 2006).

References

Barry, Dwight, and Oelschlaeger, Max. 1996. “A Science for Survival: Values and Conservation Biology.” Conservation Biology 10:905–11.
Brettschneider, Cory. 2007. Democratic Rights: The Substance of Self-Government. Princeton, NJ: Princeton University Press.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67:559–79.
Douglas, Heather. 2005. “Inserting the Public into Science.” In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, ed. Maasen, Sabine and Weingart, Peter, 153–69. Dordrecht: Springer.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Dupré, John. 2016. “Toward a Political Philosophy of Science.” In The Philosophy of Philip Kitcher, ed. Couch, Mark and Pfeifer, Jessica, 182–200. Oxford: Oxford University Press.
Elliott, Kevin C. 2006. “An Ethics of Expertise Based on Informed Consent.” Science and Engineering Ethics 12:637–61.
Elliott, Kevin C. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. Oxford: Oxford University Press.
Elliott, Kevin C., and Resnik, David B. 2014. “Science, Policy, and the Transparency of Values.” Environmental Health Perspectives 122:647–50.
Guston, David. 2004. “Forget Politicizing Science. Let’s Democratize Science!” Issues in Science and Technology 21 (1): 25–28.
Hardwig, John. 1994. “Toward an Ethics of Expertise.” In Professional Ethics and Social Responsibility, ed. Wueste, Daniel E., 83–101. London: Rowman & Littlefield.
Hausman, Daniel. 2015. Valuing Health: Well-Being, Freedom, and Suffering. Oxford: Oxford University Press.
Hoffmann, George, and Stempsey, William. 2008. “The Hormesis Concept and Risk Assessment: Are There Unique Ethical and Policy Considerations?” BELLE Newsletter 14:11–17.
Intemann, Kristin. 2015. “Distinguishing between Legitimate and Illegitimate Values in Climate Modeling.” European Journal for Philosophy of Science 5:217–32.
Keohane, Robert O., Lane, Melissa, and Oppenheimer, Michael. 2014. “The Ethics of Scientific Communication under Uncertainty.” Politics, Philosophy, and Economics 13:343–68.
Kitcher, Philip. 2011. Science in a Democratic Society. Amherst, NY: Prometheus.
Lakoff, George. 2010. “Why It Matters How We Frame the Environment.” Environmental Communication 4:70–81.
Marris, Emma. 2006. “Should Conservation Biologists Push Policies?” Nature 442:13.
Martin, Mike, and Schinzinger, Roland. 2010. Introduction to Engineering Ethics. 2nd ed. New York: McGraw-Hill.
Nisbet, Matthew, and Mooney, Chris. 2007. “Framing Science.” Science 316:56.
Reiss, Julian. 2013. Philosophy of Economics: A Contemporary Introduction. London: Routledge.
Resnik, David. 2001. “Ethical Dilemmas in Communicating Medical Information to the Public.” Health Policy 55:129–49.
Scanlon, Thomas M. 1998. What We Owe to Each Other. Cambridge, MA: Harvard University Press.
Schroeder, S. Andrew. 2017. “Value Choices in Summary Measures of Population Health.” Public Health Ethics 10 (2): 176–87.
Shrader-Frechette, Kristin. 2008. “Ideological Toxicology: Invalid Logic, Science, Ethics about Low-Dose Pollution.” BELLE Newsletter 14:39–47.
Steele, Katie. 2012. “The Scientist qua Policy Advisor Makes Value Judgments.” Philosophy of Science 79:893–904.
Stiglitz, Joseph E., Sen, Amartya, and Fitoussi, Jean-Paul. 2010. Mismeasuring Our Lives: Why GDP Doesn’t Add Up. New York: New Press.
Thaler, Richard, and Sunstein, Cass. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.