
Assertion, Nonepistemic Values, and Scientific Practice

Published online by Cambridge University Press: 01 January 2022


Abstract

This article motivates a shift in certain strands of the debate over legitimate roles for nonepistemic values in scientific practice from investigating what is involved in taking cognitive attitudes like acceptance toward an empirical hypothesis to looking at a social understanding of assertion, the act of communicating that hypothesis. I argue that speech act theory’s account of assertion as a type of doing makes salient legitimate roles nonepistemic values can play in scientific practice. The article also shows how speech act theory might provide a framework for fruitfully extending aspects of the social and pragmatic turns in the philosophy of science.

Copyright © The Philosophy of Science Association

1. Introduction

1.1. Aim of the Article

Although recent arguments for a legitimate role for values in scientific practice vary in important ways, much of the debate surrounding these arguments focuses on the roles nonepistemic values might legitimately play in the acceptance or rejection of empirical hypotheses, whether by individuals or groups of individuals.Footnote 1 Indeed, as Douglas (2009) reminds us, the early years of the debate turned on the question whether scientists are even in the business of accepting or rejecting hypotheses or whether, for example, they should be understood as simply assigning numerical probabilities to hypotheses (see Rudner 1953; Jeffrey 1956).

More recently, a debate over whether we should understand accepting or rejecting hypotheses as having nonepistemic or noncognitive consequences has returned to prominence. On one side, advocates of the so-called inductive risk argument maintain that, given the importance of science to policy advising and decision making, there are more or less clear nonepistemic consequences involved with erroneously accepting or rejecting hypotheses; further, scientists have a moral and social responsibility to consider these nonepistemic consequences when deciding the level of evidence sufficient to accept an empirical hypothesis (Douglas 2000, 2009; Steel 2010, 2013). On the other side, as Steel (2013) points out, defenders of a value-free ideal for science charge advocates of the inductive risk argument with conflating the cognitive domain of belief generation with the practical domain of action (cf. Mitchell 2004). As these defenses go, we should be careful to distinguish cognitive attitudes scientists take toward their hypotheses and practical action; not doing so illegitimately imports into scientific practice nonepistemic values best left to policy makers.

I am interested in the contours of the recent debate. Despite disagreement over legitimate roles for nonepistemic values in scientific practice, most of the discussion coalesces around what values should or should not be involved in a scientist’s acceptance or rejection of empirical hypotheses. Given this perspective, the debate has, with certain exceptions, bracketed questions about the role nonepistemic values might have in scientific communication, that is, the making public of the acceptance or rejection of an empirical hypothesis by an individual or group of individuals.Footnote 2 But scientific communication—an act not so neatly categorized as purely cognitive or purely practical—is a central part of scientific practice. As Longino (1990) suggests, only by taking the social action of asserting one’s acceptance of or belief in an empirical hypothesis and making it available for scrutiny and criticism by the relevant scientific community can one’s attitude of belief or acceptance begin to count as warranted or objective.Footnote 3 Thus, I suggest we shift our focus in the debate over nonepistemic values from what is involved in taking certain cognitive attitudes toward empirical hypotheses to what is involved in publicly asserting empirical hypotheses and making them available for evaluation and further use.

From this perspective, while taking some cognitive attitudes toward a hypothesis might be seen as disconnected from action insofar as we might gloss it as a private cognitive matter, asserting a hypothesis, say, in a journal article or to a colleague in a lab meeting is an essentially social act. Drawing on a framework for talking about assertion in speech act theory first developed in the philosophy of language by Austin (1962), I argue that attending to assertion will reveal important roles that nonepistemic values can legitimately play in scientific practice. When assertion is understood as a type of public doing, nonepistemic values can legitimately inform the evaluation of assertions in a way that they might not when, for example, cognitive attitudes are construed nonbehavioristically.

That nonepistemic values are part of the normal evaluation of assertions follows from an important insight of speech act theory: Asserting is a kind of doing. As a kind of doing, asserting an empirical hypothesis is not distinguishable from practical action in the way that it appears that the cognitive acceptance of a hypothesis might be. Further, as an action with more or less easily identifiable consequences beyond changes in an individual’s cognitive attitudes, assertion is open to types of evaluation that go beyond epistemic values like truth and empirical accuracy. My central aim is to argue that shifting our focus from the cognitive attitudes we take toward empirical hypotheses to the role of asserting an empirical hypothesis in scientific practice provides a framework for uncovering legitimate roles for nonepistemic values in scientific practice. In doing so, I also hope to show how resources from the philosophy of language can deepen and extend social and pragmatic approaches to the study of scientific practice.

1.2. Structure of the Article

In section 2, I rehearse the debate over a role for nonepistemic values in scientific practice that centers on the argument from inductive risk and the acceptance and rejection of hypotheses. In section 3, I motivate the turn from cognitive attitudes like acceptance to the speech act of assertion. In section 4, I consider speech act theory’s account of the mechanics of assertion. In section 5, I show how the affinities between speech act theory and Douglas’s (2014) social view of scientific practice uncover legitimate facets of the nonepistemic evaluation of assertions.

2. Acceptance and the Inductive Risk Argument

2.1. A Sketch of the Inductive Risk Argument

One of the most forceful arguments in favor of a role for nonepistemic values is the argument from inductive risk, recently revived by Douglas (2000, 2003, 2009). Briefly, Douglas’s argument from inductive risk begins with the characterization of the scientist as accepting or rejecting hypotheses. However, given the nature of induction and the practical limitations of human cognizers, the evidence in favor of or against any given empirical hypothesis is always inconclusive. Scientists, then, have to decide the level of evidence sufficient to accept or reject an empirical hypothesis.

Given the general authority of scientists in the public domain, and the relevance of the results of scientific practice to public policy and decision making, accepting or rejecting hypotheses will likely inform later plans of action. For example, scientific claims about the toxicity or safety of GM foods—or the safety of herbicides used on herbicide-resistant transgenic crops—will inform policy recommendations about mandatory or optional labeling. As such, accepting or rejecting a hypothesis erroneously can have nonepistemic consequences, for example, economically burdensome regulations, increased public anxiety about farming practices and biodiversity, or deterioration of public health.Footnote 4 The acceptability or unacceptability of the nonepistemic consequences of error can be decided only by nonepistemic values—for example, ethical or social values like the right to know what one is eating, concern for ecosystems, or concern for economic stability. Further, scientists are often the only persons knowledgeable enough to assess the evidence and its degree of support for any empirical claim, and they are also often the only persons in the position to judge the extent and seriousness of the possible nonepistemic consequences of acceptance or rejection. Accordingly, the nonepistemic values needed to evaluate the consequences of error should inform a scientist’s decision regarding the level of evidence sufficient to accept or reject an empirical hypothesis. Roughly put, the standards of evidence adopted by the scientist should be sensitive to the risk and seriousness of negative nonepistemic consequences of erroneously accepting or rejecting an empirical hypothesis.
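
The bare structure of this reasoning can be put in simple decision-theoretic terms (a minimal illustrative sketch in the spirit of Rudner [1953], not part of Douglas’s own presentation; the loss terms are hypothetical labels). Suppose correct decisions are costless, erroneous acceptance of a hypothesis carries a nonepistemic loss L_acc, and erroneous rejection carries a loss L_rej. If p is one’s credence in the hypothesis, accepting maximizes expected utility just in case

\[ p \cdot 0 + (1-p)(-L_{acc}) \;\geq\; p(-L_{rej}) + (1-p) \cdot 0, \quad \text{i.e.,} \quad p \;\geq\; \frac{L_{acc}}{L_{acc}+L_{rej}}. \]

On this sketch, the evidential threshold for acceptance is fixed partly by the relative seriousness of the two kinds of error, which is precisely where nonepistemic values enter.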

2.2. A Challenge to the Inductive Risk Argument

While some have responded to earlier versions of the inductive risk argument by denying that scientists are in the business of accepting or rejecting hypotheses (Jeffrey 1956), others have accepted this part of the argument but claim, in some way or another, that proponents adopt a wrongheaded conception of acceptance and the values that guide it (McMullin 1982). As Steel describes the objection, the argument from inductive risk is seen as adopting a behavioral and practical notion of the attitude of acceptance, which is inappropriate to “the cognitive, nonbehavioral sense relevant to science” (2013, 819). The domain of scientific practice is the generation of beliefs separate from any practical uses those beliefs might be put to and, as such, is an enterprise with epistemic standards set by the canons of scientific reasoning. The practical domain of policy making is the production of plans of action or recommendations for plans of action, which is an enterprise with practical standards decided, in part, by nonepistemic values. This practical domain, however, is distinct from the cognitive domain of belief generation appropriate to scientific practice.

According to this objection, the inductive risk argument gets off the ground by illegitimately importing practical standards of acceptance that apply to action into a cognitive context. Indeed, Mitchell argues that Douglas is guilty of a “conflation of the domains of belief and action [that] confuses rather than clarifies the appropriate role of values in scientific practice” (Mitchell 2004, 250). On this view, returning to the example mentioned above, the determination of the toxicity of GM foods or of the herbicides used on herbicide-resistant plants is a purely cognitive matter answering to epistemic values encoded in the canons of proper scientific reasoning of the relevant scientific community.

However, the determination of plans of action in light of a scientific community’s acceptance or rejection of the toxicity of certain GM foods rightly answers to nonepistemic values. But, on Mitchell’s view, adopting plans of action happens subsequent to the purely cognitive matter of settling the question of the toxicity of GM foods. While a single individual “may be engaged in practices in both belief generation and policy decisions,” for Mitchell, a fundamental distinction remains between “values appropriate to generating the belief and the values appropriate to generating the action” (250–51).

On this distinction, scientific practice is primarily in the business of generating empirically supported and correct beliefs or other relevant cognitive attitudes like acceptance removed from questions about moral or social responsibilities. Thus, contrary to the inductive risk argument, nonepistemic values are invoked external to scientific practice, since such practice is understood as a purely cognitive matter answering to epistemic criteria. When it comes to empirically informed policy recommendations, the role of nonepistemic values enters only after belief in a hypothesis has been settled by the appropriate evidence, rather than at choice points internal to the cognitive practice of generating beliefs. Generating beliefs is one (cognitive) thing; acting on them is another (practical) matter.Footnote 5

3. Why Turn to Assertion?

3.1. Responses to the Challenge

In order for this line of response to the inductive risk argument to have any bite, opponents construe accepting or believing an empirical hypothesis in particular ways so that it does not have nonepistemic consequences. First, acceptance or belief is construed individualistically; it results only in a change of one’s cognitive states. Second, acceptance or belief is construed as an attitude one takes, not a public or social act. Related to the first and second points, acceptance or belief might be understood as purely cognitive in the sense that simply adopting an attitude has no implications for action. Finally—moving from an individual to a group of individuals—an empirical hypothesis might be accepted or believed internally by an epistemic community but might not migrate to other communities for any variety of reasons.Footnote 6

In response to this challenge, Steel (2013) accepts the view that acceptance, in particular, is cognitive rather than behavioral in nature. However, he claims the view of acceptance on offer by opponents of the inductive risk argument is impoverished because it does not consider the ways in which accepting a hypothesis indicates one’s willingness to use it in further reasoning. According to Steel, the view implicitly offered by opponents of the inductive risk argument runs together the attitude of acceptance (“a decision to treat p as a premise in a particular context”) with the attitude of belief (“a disposition to feel that p is true”; 820), and it is the former that is appropriate to many domains of scientific practice. Following this distinction, the acceptance of an empirical hypothesis indicates a commitment, or at least a willingness, to have that hypothesis inform practical reasoning when appropriate. For example, accepting the empirical hypothesis that GM foods are safe for human consumption is partially adopting a commitment to have that hypothesis inform reasoning in practical contexts about mandatory labels for GM foods. Thus, Steel claims, “Accepting a claim p is a decision (i.e., to adopt a policy of treating p as an available premise for reasoning in the context in question) that can have consequences for ethical and practical matters” (821).

Since acceptance of a hypothesis informs our reasoning about practical matters, it has implications for action, and, thus, its effects can extend beyond changes in internal cognitive states. In this way, acceptance can have nonepistemic consequences, thereby denying the first and third glosses on acceptance by the opponents of the inductive risk argument mentioned at the beginning of this section. Further, given the authority of scientists in society at large, the acceptance or rejection of a hypothesis by a scientist or group of scientists, if made public, licenses other nonscientists, including policy makers, to have that hypothesis inform practical reasoning when appropriate, thus denying the fourth gloss on the condition that scientists make public their acceptance. On the view of “acceptance [of p] as a decision to treat p as a premise in a particular context” (Steel 2013, 820), including practical contexts, the inductive risk argument kicks back in.

3.2. Motivation for Shifting Our Focus to Assertion

Steel widens the scope of what it means to take an attitude of acceptance toward an empirical hypothesis to show how it can have practical implications without being construed as an action. Yet, despite this advance in our understanding of attitudes appropriate to scientific practice, the debate still centers largely on the proper characterization of the attitude of acceptance or other “intentional mental states” (McKaughan and Elliott 2015, 57). Should we understand acceptance as an atomic cognitive act, more akin to belief and answering only to epistemic values? Or should we understand acceptance as distinct from belief and in a more molecular way, in which acceptance of a hypothesis is adopting a policy to use that hypothesis in relevant practical reasoning contexts?

While Steel (2013) and others call into question the hard line between the cognitive domain of acceptance and the practical domain of action, I think their strategy is not immune to efforts by opponents of the inductive risk argument to isolate another internal cognitive attitude or intentional mental state disconnected from action and answering only to epistemic values (e.g., Lacey [2015] on “holding” and “endorsing”).Footnote 7 Opponents could then argue that it is such a purely cognitive attitude that is appropriate to scientific practice, or certain moments of scientific practice, rather than other attitudes more clearly related to practical domains like Steel’s understanding of acceptance. In doing so, they would seek to reinstate the distinction between the purely cognitive aims of generating nonbehavioristic intentional mental states toward hypotheses and the practical domains of action.Footnote 8

I am not sure such arguments by opponents of the inductive risk argument would be plausible. However, with this possibility in mind, I want to shift the focus away from the implications of adopting intentional mental states and offer a different strategy for shoring up the inductive risk argument. To do this, I draw more explicitly on social and pragmatic dimensions of scientific practice.Footnote 9 My strategy begins from the perspective that scientific practice cannot simply be about individuals or groups of individuals accepting hypotheses—or taking other relevant cognitive attitudes—on the basis of individual or group reasoning about the available empirical evidence.

Instead, if Longino (1990) and others emphasizing the social dimensions of scientific practice are right, then scientific practice not only involves an individual or group of individuals accepting or rejecting hypotheses at the bench or in lab meetings but also centrally involves making public the acceptance and the reasoning leading up to that acceptance. On this view, scientific practice—and perhaps even scientific knowledge and cognition—is social in the sense that it requires, as Longino argues, making the acceptance of a hypothesis available for criticism by a community of fellow practitioners and responding appropriately to that criticism.

One way to make one’s acceptance of an empirical hypothesis available for the scrutiny and examination crucial to scientific practice involves the speech act of assertion: roughly, claiming that a hypothesis is (likely) true or false given the available evidence and taking responsibility to demonstrate as much.Footnote 10 In other words, scientific practice is not just about accepting empirical hypotheses but also involves asserting empirical hypotheses, which includes, among other things, communicating publicly to others one’s acceptance of an empirical claim and undertaking the responsibility to defend that claim. Scientific practice understood socially involves assertion.

This basic point issuing from social and pragmatic understandings of scientific practice shows how assertion is fundamentally different from the picture offered of acceptance at the beginning of section 3.1. Assertion, unlike glosses on acceptance by inductive risk opponents, cannot be understood as an individual and private adoption of an intentional mental state or internal policy to reason using certain premises; assertion is a social and public act directed toward others—hence, not purely cognitive or purely private. Even Mitchell, a defender of the distinction between the cognitive domain of belief and the practical domain of action, acknowledges this. She claims, “to make public one’s belief that a given hypothesis is true is an action,” granting that “in certain contexts a scientist might judge that stating what he or she is scientifically warranted to believe is politically inadvisable” (Mitchell 2004, 250).

But one thing to note immediately is that, in the spirit of Longino’s (1990) social turn, talk of what we are scientifically warranted to accept before making our acceptance available for scrutiny is misplaced. On Longino’s view, peer criticism/review is required to rule out the possibility of idiosyncratic, unwarranted, or mistaken background beliefs influencing an individual’s acceptance of a hypothesis. As she puts it, “Only if the products of inquiry are understood to be formed by the kind of critical discussion that is possible among a plurality of individuals about a commonly accessible phenomenon, can we see how they count as knowledge rather than opinion” (74).

To flesh this out a little more, consider the function of peer review in scientific practice. As Longino puts it, “Peer review determines what research gets funded and what research gets published in the journals, that is, what gets to count as knowledge” (1990, 68). Of course, going through this process involves not mere acceptance but assertion, that is, claiming to others that some hypothesis holds or is likely to hold given the evidence.Footnote 11 Following Brandom (1983), in asserting a claim, I commit myself to the likelihood of that claim given the evidence, I take responsibility for demonstrating my entitlement to that commitment, and I endorse the claim as reasonable for others to adopt (in fact, I hope others will do so with appropriate citations). A function of peer review is to make sure the assertors have discharged their responsibility for demonstrating their entitlement to the claim, in part by determining that their reasoning is not based on unjustifiably idiosyncratic or mistaken beliefs. On this view, then, a necessary but not sufficient condition of the acceptance of a hypothesis counting as scientifically warranted is that one’s peers have subjected it and the reasoning leading up to it to evaluation and criticism. Absent the evaluation and response to said evaluation made available by public assertion, one’s private acceptance or rejection of an empirical hypothesis does not count for very much at all. Thus, intentional mental states, construed purely cognitively, would not be eligible candidates for scientific warrant unless asserted.

3.3. Relevance of Assertion to the Inductive Risk Argument

While these considerations motivate my focus on assertion in scientific practice rather than on the proper characterization of acceptance, assertion is also relevant to the inductive risk argument since, as social and public, it comes with practical objectives built in, as opposed to merely individual cognitive aims. One of the objectives of assertion—indeed, of communication in general—is to communicate one’s acceptance of a claim with the intention of having others come to adopt the same sort of cognitive attitude (Grice 1957). That is, by making a claim available for public scrutiny, assertion is the first step to convincing the relevant audience—one’s peers, policy makers, the general public—of the potential warrant and importance of a claim.

Assertion, then, is clearly a public action one undertakes with practical aims in mind. In this way, the dissemination of a hypothesis via assertion can have nonepistemic consequences for both the scientific community of the assertor and beyond. These nonepistemic consequences can be intended or unintended, depending on the audience and how far an assertion travels: an assertion might persuade other scientists to adopt a research project, convince funding agencies to grant funding, be used by policy makers or voters to reject mandatory food warning labels, or provide hope to cancer patients. The point is not just that in asserting a claim I adopt a policy to potentially act on it should certain practical contexts arise. The point is that my very assertion is an action with cognitive and practical aims resulting in intended (and unintended) effects on my intended (and unintended) audiences.

In this way, assertion goes beyond the third and fourth understandings of acceptance offered at the beginning of section 3.1. Assertion is not an attitude or policy strictly internal to an individual or community, nor could it serve the purposes of communication if it were. Indeed, assertion is often intended to cross community boundaries,Footnote 12 for example, from scientists to policy makers. Even when not intended by the person making the assertion, it can easily cross community boundaries, for example, from scientists to university public relations departments to news coverage to the public. Given the relevance of scientific practice to policy, in crossing boundaries, assertion as a social act is clearly more liable to having implications for action outside the scientific community in ways purely cognitive attitudes might not.

This view resonates with a founding insight of speech act theory as developed in Austin (1962): asserting is a kind of public doing with effects other than the mere communication of information. Since assertion is essentially a public action, it does not make sense to contrast asserting an empirical hypothesis with acting on that hypothesis with a practical objective in mind in the same way that, say, belief in an empirical hypothesis has been so contrasted. Moreover, as an action, an assertion “can’t help being liable to be substandard in all the ways in which actions in general can be, as well as those in which utterances in general can be” (Austin 1963, 24).Footnote 13 That is, since assertion is an action capable of crossing the boundaries of one’s immediate community with more or less easily foreseeable consequences, asserting an empirical claim is open to evaluation beyond the epistemic.

Taking assertion as our unit of analysis in the debate over nonepistemic values in scientific practice follows up on Douglas’s suggestion: “Making empirical claims should be considered as a kind of action, with often identifiable consequences to be considered” (2009, 70). So far, this point about the centrality of asserting empirical claims via a social act of communication has not received the attention cognitive attitudes like acceptance have in the debate over science and values, nor has anyone examined in depth how the point might be extended by looking to the philosophy of language.Footnote 14 But I maintain that we can further uncover and motivate legitimate roles for nonepistemic values in scientific practice by focusing on assertion. To show how, I first consider the basic speech act framework for understanding assertion as a type of doing. In section 5, I show how speech act theory provides a framework for talking about nonepistemic dimensions of evaluating assertions.

4. Asserting as Doing

4.1. Locutions, Illocutions, and Perlocutions

According to Austin (1962), any speech act like asserting has at least three dimensions that, although analytically distinct, always accompany one another: the locutionary content, the illocutionary force, and perlocutionary effects. The locutionary content consists in the meaning and reference of the words uttered. The illocutionary force consists in the use the words are put to. An utterance of ‘This box of cereal contains GMOs’ has the meaning it has in virtue of the sense and reference of its words and the context in which it is uttered. Uttering sentences with meaning constitutes an act with locutionary content. However, for Austin, meaning in this sense does not determine the act’s illocutionary force. Austin “want[s] to distinguish [illocutionary] force and meaning in the sense in which meaning is equivalent to sense and reference,” but force is not equivalent to sense and reference (100).

To see this point, consider recent debates over mandating labels for foods that include GMOs. People uttering, ‘This box of cereal contains GMOs’ or advocating for its placement on a prominent label may intend their utterance as a warning to someone interested in eating organically or to someone worried about perceived dangers of GMOs. The same utterance might also serve as a description of the contents of the box of cereal and be intended only to communicate information. In certain contexts, the utterance might constitute an order to put the box back on the shelf. The main point is that the illocutionary force attendant to the locutionary content is important to understanding the aim of a speaker’s utterance and correspondingly shapes our evaluation of the propriety or impropriety of the speech act.

Moving beyond the speaker, the perlocutionary dimension of a speech act consists in the effects that the act has on an audience. Such effects are normally out of the control of the speaker, although the speaker might hope or intend that they occur when uttering words with a certain meaning and force. As Austin puts it, “Saying something will often, or even normally, produce certain consequential effects upon the feelings, thoughts or actions of the audience, or of the speaker, or of other persons” (1962, 101). Even if one intends an utterance of the sentence ‘This box of cereal contains GMOs’ as an assertion intended to convey information, one’s assertion might have the perlocutionary effect of persuading someone to get a box of organic cereal or of convincing them that food containing GMOs is somehow worrisome (perhaps in the process frightening them).

In this spirit, Austin notes, “That the giving of straightforward information produces, almost always, consequential effects upon action, is no more surprising than the converse, that the doing of any action … has regularly the consequence of making ourselves and others aware of facts” (1962, 110 n. 2). In other words, that asserting is a kind of doing with nonepistemic consequences is just as commonplace as the fact that undertaking a practical course of action, like running an experiment numerous times, has epistemic consequences. Asserting sentences, even those intended to convey information, can, given potential perlocutionary effects on the audience (e.g., convincing an audience GM foods are unsafe), have more or less foreseeable implications for practical action just as running an experiment has more or less foreseeable implications for determining the truth or falsity of a hypothesis.

4.2. Conditions for Successful Assertion

Given the focus of this article, our interest is in speech acts in scientific contexts with the illocutionary force of assertions. The reason for canvassing the three different dimensions of speech acts is to show how the focus on belief in an empirical hypothesis tries to single out for epistemic evaluation the locutionary content, with its ordinary meaning and reference, separate from the illocutionary force of an utterance and the attendant perlocutionary effects on the audience. That is, given the emphasis on the distinction between belief and action by opponents of the inductive risk argument, they want to know whether the proposition constituting the locutionary content is true or false, independent of the consequences following from the act of putting it forward as an assertion.

But just as such a clean separation is not possible when it comes to the attitude of acceptance, so it is not possible when someone utters a sentence with sense and reference. As Austin puts it, “To perform a locutionary act is in general … also and eo ipso to perform an illocutionary act” (1962, 98). Further, on Austin’s view, we cannot determine the truth or falsity of the locutionary content from sense and reference alone; we need to know that what is said has the illocutionary force of an assertion, since other speech acts like orders or warnings cannot be true or false in straightforwardly epistemic senses. As Austin emphasizes, sentences themselves cannot be true or false, but sentences uttered with assertoric force can be. Yet, once we move from attempting to isolate and evaluate the locutionary content of sentences to evaluating particular assertive speech acts, questions about whether said speech acts satisfy the conventions governing assertion—and also questions about perlocutionary effects—can be just as prominent as epistemic questions about the uttered sentence.

With this in mind, we should ask about the conditions of successful assertion. Asserting an empirical hypothesis should not be understood as reporting an inward act of acceptance, nor should it be understood only as an expression of one’s cognitive attitude toward an empirical hypothesis (Austin 1979, 236). A successful assertion does not occur just in case it accurately expresses the speaker’s mental state at a time, even if such an act certainly expresses to others that one has a belief (Searle 1968) and, further, even if having the appropriate belief is part of “the sincerity condition of the act” (Searle 1975, 347).Footnote 15 But that certain conditions other than accepting a claim on the basis of good, individual cognitive reasons need to be met in order to successfully assert a claim should resonate with anyone who has been through the process of peer review.Footnote 16

In addition to this sincerity condition and intending one’s utterance as an assertion, according to Austin, there are three other elements that constitute the successful firing off of a speech act:

  1. “the securing of uptake,” which “generally amounts to bringing about the understanding of the meaning and the force of the locution”;

  2. having “the illocutionary act ‘take effect’ in certain ways” such that “certain subsequent acts … will be out of order”; and

  3. “invit[ing] by convention a response or sequel,” such that “if this response is accorded, or the sequel implemented … a second act by the speaker or another person [is required]” (Austin 1962, 115–16).

In part, these social dimensions of successful assertion can be understood as grounded in a view that holds, “The speech act of asserting arises in a particular, socially instituted, autonomous structure of responsibility and authority” (Brandom 1983, 640). In other words, autonomous structures of responsibility and authority determine the conditions for successfully securing uptake, what further speech acts are out of order in light of secured uptake, and also what further acts are required in response to reception of the assertion by others.Footnote 17

Consider the following example in which scientists make assertions to an intended audience of policy makers and other nonexperts. In 2013, Washington state citizens voted on a ballot initiative concerning mandatory labeling of foods produced entirely or partially with genetic engineering. Before the vote, the Washington State Academy of the Sciences (WSAS)—whose stated aim is “to provide expert scientific and engineering analysis to inform public policy making in Washington State” (Marsh et al. 2013, ii)—released a white paper on the ballot initiative written by a select committee whose members passed an internal review for potential conflicts of interest. The report was commissioned by “the leadership of state legislative committees dealing with health and agriculture, water, and natural resources” (v) to answer specific questions. The report was drafted around a “Statement of Task (SOT)” to address the definition of GMOs and their current role in agriculture, questions about the nutritional content of GMOs, their safety, implications of mandatory labeling for policy and trade, and costs related to regulation and enforcement of labeling (v). This context and the stated aims of the WSAS provide the background against which the criteria for successful assertion can be satisfied.

For example, a successful discharging of the task the WSAS committee set for itself would require that the committee couch its report in language accessible to nonexperts in order to secure uptake. Further, the WSAS was careful to note that they were not advocating for or against passage of the initiative, making clear that the illocutionary force of their statements was assertoric and aimed at communicating information rather than having the type of directive force associated with warnings. In doing so, the committee also made clear that, in relation to condition 2, certain speech acts with the directive force of warnings or concrete suggestions for policy would have been out of order in the context of the report.

Another important aspect of securing uptake by the relevant audience was for the report to meet the standards of scientific peer review. Since the audience for the report comprises nonexperts unable to check the WSAS’s assertions against the technical literature, the report was “reviewed by a technical writer, and then reviewed by [anonymous] reviewers selected by [the chair of the] Report Review Committee of the WSAS [who] was otherwise uninvolved in the report” (Marsh et al. 2013, v). In light of reviewers’ comments, the report was revised, thus fulfilling the third condition for assertion (as it also would be if the report’s audience requested a “sequel,” i.e., clarifications that were then provided by the white paper’s authors).

Finally, once the nonexpert audience understands the meaning of the assertions made by the WSAS and has the appropriate basis for trust in the report provided by peer review and internal reviews for conflicts of interest, the committee’s assertions take effect by providing information that informs and guides later practical deliberations about mandatory labeling. For example, since the report notes that the scientific evidence so far shows that GMOs are just as safe as non-GMOs, a reason offered in favor of labeling should not hinge exclusively or too heavily on current safety concerns.

Meeting the conditions Austin lays out for assertion facilitated by socially instituted structures of responsibility and authority allows for “the kind of critical discussion … possible among a plurality of individuals about a commonly accessible phenomenon” (Longino 1990, 74) central to both scientific practice and science-based policy. On the proposed view, assertions are units of scientific practice just as important and demanding of our attention as cognitive attitudes or intentional mental states in debates over roles for nonepistemic values in scientific practice.

5. The Nonepistemic Dimensions of Assertion

5.1. Douglas on the Bases for Responsibility for Scientists

Taken together, these components provide a framework for evaluating speech acts made in the course of scientific practice using nonepistemic values. Assertion is a public act made possible and governed by relevant background social institutions and practices. To borrow from Douglas’s (2014, 963–68) charting of the moral terrain of science, in the context of scientific practice, these institutions and practices grow out of (1) the demands of good scientific reasoning; (2) the fact that an assertion is scientifically warrantable or successfully fired off only given the relevant scientific community’s support, assessment, and criticism; and (3) the wider society that makes scientific practice possible through valuing it via support and funding. In turn, these social institutions and practices relating to the demands of good reasoning, the structure of the scientific community, and the values of the society in which science is practiced give rise to corresponding responsibilities and commitments, some cognitive in nature, others moral and social in nature (963).

For example, if we take the aim of science to be the production of “reliable empirical knowledge,” then scientific practitioners have epistemic and cognitive responsibilities to “[conduct] empirical inquiry properly, [respect] the evidence one gathers, and [reason] carefully about that evidence” (Douglas 2014, 964). Further, scientists meeting social obligations to one another help fulfill the requirements of good reasoning, for example, “to be fair when doing peer reviews” and to “give proper credit to ideas, words, and information that help them in their work” (965). In this context, consider again the WSAS white paper. In making their assertions in the report, the committee fulfilled the obligations of good scientific reasoning and its obligations to the scientific community by passing an internal review searching for potential conflicts of interest; by drawing mainly on peer-reviewed scientific research; by providing a list of relevant references; and by submitting their report for anonymous peer review, then revising it in light of reviewers’ comments. In doing so, the committee fulfilled the stated aim of the WSAS to provide expert and relevant analyses—but not advice or warnings—to the nonexpert public and policy makers, an aim grounded in obligations growing out of the mutually beneficial relationships between the scientific community and society.

These responsibilities and commitments, together with the view of assertion as a public and social act with cognitive and practical aims, legitimately inform the evaluation of the propriety of any particular act of scientific assertion. That is, assertions are subject to evaluation according to whether they successfully meet or fail to meet the responsibilities and commitments growing out of the relevant institutions and practices that make scientific practice possible (society), that sustain scientific practice (the scientific community), and that are constitutive of scientific investigations of the world (canons of good reasoning).

Naturally, given the aims of scientists to create reliable empirical knowledge, a part of this evaluation involves epistemic assessment of the truth, falsity, or empirical accuracy of an assertion. But given that assertion takes place against wider social institutions and practices that regulate the successful firing off of assertions, that assertions come with a mixture of cognitive and practical aims, and that certain types of perlocutionary effects can follow from any particular assertion, there are other appropriate modes of evaluation. As Austin says, “But besides the little question, is [an assertion] true or false, there is surely the question: is it in order?” (1979, 249).

5.2. Evaluating Speech Acts

In determining whether an assertion is in order, we quite often do ask social questions relevant to the responsibilities and commitments established by the institutions and practices set up by the scientific community. For example, we might ask about the speaker’s standing, authority, and sincerity in making an assertion, which is why the WSAS searched for potential conflicts of interest among committee members. Or we might ask about the stringency of the peer-review process leading up to an assertion’s publication, which is why the WSAS took special care to note that the reviewers, who remained anonymous until after the final report was finished, were chosen by a WSAS member not otherwise involved with the report. Had we answered the questions about the speakers’ standing and authority or about the peer-review process differently, the assertions might have misfired in a way other than being false or inadequately supported by evidence.

But, more important for our purposes, there are moral (or quasi-moral) dimensions to evaluating assertions arising from both the scientific community and the wider society in which science is practiced. We can ask whether the speakers have discharged their responsibility to think through the possible perlocutionary effects of their assertion (a responsibility that follows, in part, from the general moral responsibility to consider the intended and unintended consequences of one’s acts). In this spirit, we could also ask whether the speakers responsibly considered how others might understand or interpret the illocutionary force of their statement.

In the context of the WSAS’s white paper, the committee members discharge their responsibilities in two ways. First, they make the scope of their report clear to avoid undermining other relevant arguments about labels for GMOs that invoke values they do not consider: they address five specific questions regarding the status of GMOs, but not others. And they do so explicitly in the context of the ballot initiative proposing a regime of mandatory, rather than voluntary, labeling. The committee makes clear that by not addressing questions about biodiversity, environmental effects, or voluntary labeling, it does not judge such questions to be of “lesser importance to either the legislative sponsors or the committee” (Marsh et al. 2013, vi). Rather, the committee notes—but does not expand on—the relevance of unconsidered “social values and perspectives” that could be “reflected in the choice of an individual to support or oppose” mandatory labeling (vi).Footnote 18 Second, as mentioned at the end of section 4.2, the committee takes care to note that it is not advocating for or against passage of the initiative; it is simply providing expert analyses on a topic of interest to citizens and policy makers. In these ways, the committee has thought through the ways in which its report might be received and interpreted by the general public and legislators, so as to avoid unintended interpretations of illocutionary force and attendant perlocutionary effects, for example, warnings that stoke fears about economic damages from mandatory labeling.

In general, by drawing attention to responsibilities related to considering potential perlocutionary effects arising from misunderstandings of illocutionary force, we are asking what others in both the scientific community and the general public might reasonably infer from an assertion or how they might act on its basis. Such questions seem especially appropriate when, unlike the WSAS’s white paper, the context is ripe for underdetermining the illocutionary point of an utterance or when the perlocutionary effects of such an utterance are fairly easy to determine, for example, in research funded by lobbying arms of organic food companies or biotechnology firms like Monsanto and later presented in testimony to governmental authorities. In these latter cases, it seems that the nonepistemic evaluative questions are clear and that scientists have a responsibility to consider the ways in which their assertion might be misconstrued or misfire. Perhaps others might see their assertion as answering to the desires of a funding source rather than the canons of good reasoning; perhaps their assertion might have unintended perlocutionary effects like stoking fears about GM foods; perhaps there might be confusion about the aims or scope of their utterance: were they merely communicating information or also advising—is a focus in testimony mainly on the safety of GMOs, for example, an implicit discounting of relevant environmental issues? Assessing the risks of misconstrual or misfires in such cases can legitimately involve nonepistemic values.

5.3. General Responsibilities and Licensing Further Acts

I also think this point holds more generally and not just in contexts like scientific reports or testimony to policy makers. As Watson understands the speech act of assertion, “To assert that p is, among other things, to endorse p, to authorize others to assume that p, to commit oneself to defending p, thereby (typically) giving others standing to criticize or challenge what one says” (2004, 58; emphasis added). Given its public nature, in asserting a hypothesis, scientists not only adopt a policy of using that hypothesis in related practical contexts for themselves but also, in the spirit of Steel’s (2013) gloss on acceptance, authorize others who are part of their audience—relevant scientific communities, the general public—to assume that hypothesis in related practical and ethical contexts. Further, this assumption on the part of the audience is natural given the authority that society grants to scientists, the authority that peer review grants to any given successful assertion, and the value and importance society attaches to science-based policy. Clearly, in addition to having perlocutionary effects on the audience, an assertion can have wider, nonepistemic consequences issuing from the fact that a successful assertion legitimates its further use in reasoning about relevant practical and ethical matters. By asserting an empirical claim in the course of scientific practice, although certain uses will be out of bounds, a scientist endorses that claim for legitimate use in other contexts.

Now, according to Douglas (2003, 2009), scientists have a moral responsibility to consider the consequences of erroneously accepting or rejecting an empirical hypothesis, a responsibility stemming from the general moral responsibility to consider the effects and risks associated with our actions. The point of appealing to speech act theory was to show the ways in which assertion can properly be understood to be an action with cognitive and practical aims. On the view suggested here, Douglas’s injunction to scientists that they consider the possible nonepistemic consequences of error is transformed into the claim that scientists have a responsibility to consider how their assertions are or might be connected to reasoning contexts related to the generation of practical plans of action; how their assertions might be used to, at least in part, license those actions; and also potential negative perlocutionary effects.Footnote 19 But assessing the seriousness of these nonepistemic consequences before the decision to make an assertion requires the invocation of nonepistemic values that cannot be left to policy makers. Combined with our general moral responsibilities to consider risks—including unintended perlocutionary effects—associated with our actions on pain of being reckless or negligent, this means values can legitimately influence a scientist’s decision to assert an empirical claim.

Consider the following example. Martin (2008) notes the role that the emotion of hope might play in a terminal patient’s decision to enter a clinical research trial for an experimental treatment. According to Martin, for various reasons, hope can sometimes impair or undermine a patient’s autonomy by leading her, for example, to discount risks associated with treatment in her practical deliberations to enter a trial in a way that fails to cohere with her deeply held values. If this is right, then, for Martin, researchers should take care not to “take advantage of autonomy-impairing hopes” (52). With this in mind, Martin believes some researchers act negligently insofar as “hope rhetoric pervades the way early-phase research is described” (52); in such contexts, the “word ‘hope’ is almost magical: a researcher who says it to a potential participant could hardly choose a better way to encourage her to ignore the risks and burdens of participation” (53). Couched in my terminology, according to Martin, researchers who invoke hope rhetoric are negligent in failing to consider negative perlocutionary effects of their descriptive speech acts on potential participants. That is, they fail to consider how describing their “hopes” for early-phase research might inform patients’ practical reasoning in a way that risks undermining their autonomous practical deliberations to enter a trial.

This is not to say that practicing scientists, before assertion, are required to consider all possible contexts related to recommendations for action in which their assertions might appear. Obviously, fulfilling that responsibility would be impossible. Instead, it is to claim that some of the responsibilities undertaken when asserting a scientific claim involve consideration of the reasonably foreseeable ways in which an assertion might be put to use and the perlocutionary effects on their audience. Moreover, scientists should consider the ways in which a wrongful assertion—“wrongful” in a sense including misfirings unrelated to epistemic dimensions of simply being false or mistaken—can negatively affect future practical inferences. After all, as Ross puts it, if the assumption that a speaker’s assertion is in order “proves false and others act upon it with unfortunate consequences, at least part of the responsibility will lie with [the assertor] for having entitled [others] to make that assumption” (1986, 78).

6. Conclusion

Thinking about asserting as a kind of doing allows us to place an assertion in a wider social context in a way that focusing on intentional mental states—whether taken in an atomic sense completely disconnected from practical contexts (see Mitchell 2004) or in a molecular sense connected to practical contexts (see Steel 2013)—does not. By attending to this wider context in which we do things with language beyond expressing our beliefs, the distinction between cognitive attitudes taken toward an empirical claim and practically acting on the basis of that empirical claim is shown to be a nonstarter from a social view of scientific practice that sees assertion as central. This perspective supports Douglas’s denial of the claim that “discussion of moral responsibility for choices [is] far removed from the context of science, where one is primarily concerned with coming to the correct empirical beliefs” (2009, 70).

Further, focusing on assertion as a social act with cognitive and practical aims provides a framework to make good on Douglas’s suggestion that “making empirical claims should be considered as a kind of action, with often identifiable consequences to be considered” (2009, 70). Because assertion is an inherently social act made against the background of social practices and institutions generating social, moral, and cognitive obligations and because it is also an act in which people make their assertion available for use in contexts related to action, misfired or wrongful assertions turn out to have more or less easily foreseeable nonepistemic consequences. Following proponents of the inductive risk argument, practicing scientists have a responsibility to assess these risks using nonepistemic values before the act of asserting an empirical hypothesis.

Footnotes

Thanks to Carole Lee, Colin Marshall, Jon Rosenberg, Conor Mayo-Wilson, and Alison Wylie for their encouraging and helpful comments on an earlier draft; to the audience at the University of Washington lunchtime works-in-progress series and Erin Kendig for fruitful discussions about ideas that led to the article; and to three anonymous reviewers for Philosophy of Science for their pointed and charitable suggestions for revising the article.

1. For sympathetic overviews of these arguments, see Intemann (2005), Biddle (2013), and Rolin (2015). For a new approach to values in scientific practice slightly critical of past approaches, see Hicks (2014).

2. For exceptions, see McKaughan and Elliott (2013) and John (2015). In “framing the problem of inductive risk in terms of assertion,” John (2015, 81) adopts a perspective similar to that of the current article. My article differs from John’s in examining and endorsing the inductive risk argument from within speech act theory.

3. While McKaughan and Elliott (2015) emphasize that belief and acceptance do not exhaust the cognitive attitudes we take toward empirical hypotheses, the lion's share of attention is directed to acceptance and to distinguishing it from belief.

4. Adapted from a point Douglas (2000, 576–77) makes in the context of rat studies aimed at discovering the toxicity of dioxin.

5. A similar debate arises in the history of statistics. Savage claims: "the problems of statistics were almost always thought of as problems of deciding what to say rather than what to do" (1972, 159). Here, deciding what to say is analogous to a nonbehavioral construal of acceptance; deciding what to do is analogous to Steel's account of acceptance, detailed below in sec. 3.1. Savage rejects this dichotomy for reasons analogous to those I give below: "Whatever an assertion may be, it is an act; and deciding what to assert is an instance of deciding how to act" (159). Thanks to Conor Mayo-Wilson for pointing out these similarities to my argument.

6. Thanks to a reviewer for suggesting the glosses in this paragraph and the framing of this section.

7. Thanks to a reviewer for drawing my attention to this reference.

8. Thanks to a reviewer for suggesting this framing.

9. See also John (2015, 81): "a focus on 'acceptance' snarls discussions of inductive risk in questions of whether cognitive attitudes should be sensitive to ethical considerations; assertion, by contrast, is clearly subject both to epistemic and ethical concerns." My reasons for focusing on assertion draw on Longino (1990), whose social account of scientific knowledge provides resources for a more principled defense of the turn to assertion than John offers.

10. Other constative speech acts—speech acts aimed at saying something about facts that obtain rather than at doing something (Pagin 2015, sec. 1)—might be relevant to scientific practice; see n. 11. Assertion is a paradigmatic constative (Pagin 2015, sec. 1).

11. When writing grants, one might take a less definitive stance, e.g., tentatively proposing that a hypothesis is worth investigating. Assertion, though, seems appropriate in many other contexts.

12. Thanks to a reviewer for suggesting I emphasize the point in this way.

13. Here, Austin references utterances he calls performatives, e.g., christening a ship. But since he blurs the distinction between performatives and constatives, the point applies to assertions as well.

14. John (2015) hints at the possibility of doing so, however.

15. Speech acts are subject to conventions governing communication that cognitive attitudes are not. Although I arrive at claims similar to those made by Steel (2013) and others, assertion, an act, is not strictly synonymous with acceptance, an attitude. See also John (2012, 219), who points out that the norms governing acceptance might differ from the norms governing assertion.

16. My reporting that I accept a hypothesis on the basis of good reasoning is not sufficient for publication, nor is it sufficient to convince my peers to engage critically with my acceptance.

17. Compare with Longino on the function of peer review: it does not just serve the purpose of meeting the minimal condition 1, i.e., that the meaning and the force of the locutionary content are understandable in the sense "that the data seem right and the conclusions well-reasoned" (1990, 68). Peer review also helps fulfill conditions 2 and 3 by serving as a vehicle "to bring to bear another point of view on the phenomena, whose expression might lead the original author(s) to revise the way they think about and present their observations and conclusions" (68–69). Thanks to Jon Rosenberg for pointing out these similarities.

18. The committee also invited comments about its members and the SOT from the general public but did not receive any (Marsh et al. 2013, v).

19. Understood this way—especially in also focusing on perlocutionary effects—my suggested view drawing on speech act theory differs from the obligation John attributes to Douglas and rejects, "the 'floating standards obligation' …: scientists should consider their audience's proper epistemic standards for acceptance when setting their own epistemic standards for assertion" (John 2015, 82).

References

Austin, J. L. 1962. How to Do Things with Words. London: Oxford University Press.
Austin, J. L. 1963. "Performative-Constative." In Philosophy and Ordinary Language, ed. Caton, Charles E., 22–33. Urbana: University of Illinois Press.
Austin, J. L. 1979. "Performative Utterances." In Philosophical Papers, ed. Urmson, J. O., and Warnock, G. J., 233–52. London: Oxford University Press.
Biddle, Justin. 2013. "State of the Field: Transient Underdetermination and Values in Science." Studies in History and Philosophy of Science 44:124–33.
Brandom, Robert. 1983. "Asserting." Noûs 17 (4): 637–50.
Douglas, Heather. 2000. "Inductive Risk and Values in Science." Philosophy of Science 67:559–79.
Douglas, Heather. 2003. "The Moral Responsibilities of Scientists: Tensions between Autonomy and Responsibility." American Philosophical Quarterly 40 (1): 59–68.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Douglas, Heather. 2014. "The Moral Terrain of Science." Erkenntnis 79 (5): 961–79.
Grice, H. P. 1957. "Meaning." Philosophical Review 66 (3): 377–88.
Hicks, Daniel J. 2014. "A New Direction for Science and Values." Synthese 191 (14): 3271–95.
Intemann, Kristen. 2005. "Feminism, Underdetermination, and Values in Science." Philosophy of Science 72 (5): 1001–12.
Jeffrey, Richard. 1956. "Valuation and Acceptance of Scientific Hypotheses." Philosophy of Science 23 (3): 237–46.
John, Stephen. 2012. "Mind the Gap." Studies in History and Philosophy of Science 43:218–20.
John, Stephen. 2015. "Inductive Risk and the Contexts of Communication." Synthese 192:79–96.
Lacey, Hugh. 2015. "'Holding' and 'Endorsing' Claims in the Course of Scientific Activities." Studies in History and Philosophy of Science 53:89–95.
Longino, Helen. 1990. Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Marsh, Thomas, Nester, Eugene, Beary, Janet, Pendell, Dustin, Poovaiah, B. W., and Unlu, Gulhan. 2013. "White Paper on Washington State Initiative 522 (I-522): Labeling of Foods Containing Genetically Modified Ingredients." Washington State Academy of Sciences. http://www.washacad.org/initiatives/WSAS_i522_WHITEPAPER_100913.pdf.
Martin, Adrienne. 2008. "Hope and Exploitation." Hastings Center Report 38 (5): 49–55.
McKaughan, Daniel J., and Elliott, Kevin C. 2013. "Backtracking and the Ethics of Framing: Lessons from Voles and Vasopressin." Accountability in Research 20 (3): 206–26.
McKaughan, Daniel J., and Elliott, Kevin C. 2015. "Introduction: Cognitive Attitudes and Values in Science." Studies in History and Philosophy of Science 53:57–61.
McMullin, Ernan. 1982. "Values in Science." In PSA 1982: Proceedings of the 1982 Biennial Meeting of the Philosophy of Science Association, Vol. 2, ed. Asquith, Peter D., and Nickles, Thomas, 3–28. East Lansing, MI: Philosophy of Science Association.
Mitchell, Sandra. 2004. "The Prescribed and Proscribed Values in Science Policy." In Science, Values, and Objectivity, ed. Machamer, Peter, and Wolters, Gereon, 245–55. Pittsburgh: University of Pittsburgh Press.
Pagin, Peter. 2015. "Assertion." In Stanford Encyclopedia of Philosophy, ed. Zalta, Edward N. Stanford, CA: Stanford University. http://plato.stanford.edu/archives/spr2015/entries/assertion/.
Rolin, Kristina. 2015. "Values in Science: The Case of Scientific Collaboration." Philosophy of Science 82 (2): 157–77.
Ross, Angus. 1986. "Why Do We Believe What We Are Told?" Ratio 28 (1): 69–88.
Rudner, Richard. 1953. "The Scientist qua Scientist Makes Value Judgments." Philosophy of Science 20 (1): 1–6.
Savage, Leonard. 1972. The Foundations of Statistics. New York: Dover.
Searle, John. 1968. "Austin on Locutionary and Illocutionary Acts." Philosophical Review 77 (4): 405–24.
Searle, John. 1975. "A Taxonomy of Illocutionary Acts." In Language, Mind, and Knowledge, ed. Gunderson, Keith, 344–69. Minneapolis: University of Minnesota Press.
Steel, Daniel. 2010. "Epistemic Values and the Argument from Inductive Risk." Philosophy of Science 77 (1): 14–34.
Steel, Daniel. 2013. "Acceptance, Values, and Inductive Risk." Philosophy of Science 80 (5): 818–28.
Watson, Gary. 2004. "Asserting and Promising." Philosophical Studies 117 (1/2): 57–77.