1. Introduction
Much of the literature on values in science focuses on the role of values in individual scientists’ decision making, thereby ignoring the context of scientific collaboration (see, e.g., Machamer and Wolters 2004; Kincaid, Dupré, and Wylie 2007; Carrier, Howard, and Kourany 2008). Yet, many scientists work in research teams and publish their findings in multiauthored articles (Wray 2002, 2006; Galison 2003). Scientific collaboration is often a practical necessity because the production and analysis of evidence are too expensive and time-consuming for any individual scientist to accomplish independently (Hardwig 1991; Wagenknecht 2013). Sometimes collaboration becomes a necessity because a research project draws on expertise from a variety of disciplines (Thagard 1999; Andersen and Wagenknecht 2013). In such cases, a research team with a division of labor is capable of carrying out a project that no individual scientist could do on her own.
Acknowledging the importance of scientific collaboration has led many philosophers to examine its implications for the social epistemology of scientific knowledge. Some philosophers suggest that scientific knowledge emerging in collaborations involves collective beliefs or acceptances (Gilbert 2000; Bouvier 2004; Wray 2006, 2007b; Staley 2007; Andersen 2010; Rolin 2010; Cheon 2014). Others suggest that the epistemic structure of scientific collaboration is based on relations of trust and interactions among scientists (Hardwig 1991; Kusch 2002; Thagard 2010; Fagan 2011, 2012; Andersen and Wagenknecht 2013; de Ridder 2013; Frost-Arnold 2013; Wagenknecht 2013, 2014). In the former case, a research team is thought to arrive at a group view that is not fully reducible to individual views. In the latter case, each team member is thought to rely on testimonial knowledge that is based on her trusting other team members. These two models are not competing accounts of the epistemic structure of scientific collaboration. Sometimes scientific knowledge in collaborations takes the form of collective acceptance, sometimes it is an outcome of trust-based acceptance, and at other times it takes some other form.
In this paper I examine the implications of scientific collaboration for the debate concerning the proper role of moral and social values in science. Much of the debate is focused on the ideal of value-free science, the view that non-epistemic values are not allowed to intrude into the decision-making processes that scientists are engaged in when they accept something as scientific knowledge. Acceptance is thought to involve a judgment that a hypothesis or a theory is sufficiently well supported that it does not need to be submitted to further investigation for the moment (Lacey 1999, 13). A number of philosophers argue that the value-free ideal is not feasible—or even if it is feasible under some specific circumstances, there is no reason to adopt it as a criterion of good science (Longino 1990, 1995; Root 1993; Lacey 1999; Kitcher 2001, 2011; Solomon 2001; Douglas 2009; Kourany 2010). Arguments against the ideal are advanced in tandem with case studies where moral and social values are claimed to play a legitimate role in acceptance (see Anderson 1995, 2004; Douglas 2000; Intemann 2001; Richardson 2010; Crasnow 2014; Elliott and McKaughan 2014). While I do not object to all such arguments, I wish to challenge the assumption that all moral and social values are non-epistemic values and that, consequently, all cases where moral and social values legitimately enter into a decision to accept something count as arguments against the value-free ideal.
I argue that in the context of scientific collaboration some moral and social values are properly understood to be epistemic rather than non-epistemic values. By epistemic values I mean values that promote the attainment of truth, either intrinsically or extrinsically. As Daniel Steel explains, an epistemic value is intrinsic when manifesting that value constitutes an attainment of or is necessary for truth, and it is extrinsic when it promotes the attainment of truth without itself being an indicator or a requirement of truth (2010, 18). For a value to promote the attainment of truth may mean that it leads scientists to support social arrangements that are instrumental in the epistemic success of science. For example, diversity is an extrinsic epistemic value insofar as it leads scientists to cultivate a diversity of perspectives, and this in turn facilitates transformative criticism in scientific communities (Longino 2002, 131). While moral and social values are not epistemic values intrinsically, they can be argued to be extrinsic epistemic values on the grounds that they lead scientists to act in ways that are conducive to truth.
In order to explain how the group perspective on values in science differs from the individual one, in section 2 I present a review of three well-known arguments against the value-free ideal: (1) an argument from pluralism with respect to epistemic values, (2) an argument from inductive risk, and (3) an argument from value-laden background assumptions. These arguments are built on slightly different yet overlapping analyses of what it means for a scientist to accept a hypothesis or a theory and how non-epistemic values can play a legitimate role in acceptance, which is thought to be a core epistemic moment in scientific inquiry. While the analysis of acceptance underlying these three arguments is complex, acceptance in this sense can be attributed to individual scientists and research teams alike. This analysis of acceptance does not do justice to scientific collaboration because it neglects epistemically significant differences between individual and collective acceptance, on the one hand, and between scientists who rely on testimony and those who do not, on the other.
In section 3 I review three normative views that have been proposed as alternatives to the value-free ideal. Miriam Solomon’s social empiricism builds on the argument from pluralism with respect to epistemic values, Heather Douglas’s conception of scientific integrity on the inductive risk argument, and Helen Longino’s social account of objectivity on the argument from value-laden background assumptions. In these three views, the value-free ideal is replaced with guidelines and norms addressed either to individual scientists or to scientific communities. While the three views offer important insights concerning the proper role of values in science, they ignore an intermediate social level in science: scientific collaboration.
In sections 4 and 5 I introduce a more social analysis of acceptance than the one revealed in section 2. Such an analysis can be found in the two models of the epistemic structure of scientific collaboration: collective acceptance and trust-based acceptance. I identify the moral and social values that can play a legitimate role in collective and trust-based acceptance. I argue that in the context of scientific collaboration these moral and social values should be understood as extrinsic epistemic values because they promote the attainment of truth.
2. Three Arguments against the Value-Free Ideal
When one argues against the value-free ideal, it is not sufficient to show that scientific research sometimes fails to be value-free. One has to show also that the ideal in itself is not feasible—or even if it is feasible under some circumstances, there are reasons that speak against its adoption as a standard of good science. In this section I review three arguments aiming to do so. While the plausibility of these arguments depends on case studies, I leave case studies aside and focus on analyzing the conception of acceptance underlying the three arguments. The conception of acceptance is of interest here because scientific collaboration will urge philosophers to rethink acceptance in science.
2.1. Argument from Pluralism with Respect to Epistemic Values
A number of philosophers argue that the value-free ideal is not attainable because the set of epistemic values includes a variety of criteria and desiderata that cannot be realized at the same time, and non-epistemic values can legitimately play a role in determining which epistemic values scientists emphasize when they evaluate theories (Kuhn 1977; Rooney 1992; Kitcher 1993; Longino 1995; Solomon 2001; Elliott 2013; Elliott and McKaughan 2014). Arguments aiming to undermine the value-free ideal by drawing attention to the plurality of epistemic values follow two strategies. One strategy aims to show that the plurality of epistemic values is a consequence of the plurality of epistemic goals when the goals are taken to be either significant truths (Kitcher 1993; Anderson 1995) or empirical successes (Solomon 2001). Another strategy suggests that the plurality of epistemic values is revealed by studying actual practices of science. For example, Thomas Kuhn (1977) claims that the five epistemic values of accuracy, consistency, simplicity, breadth of scope, and fruitfulness have played a role in theory choice throughout the history of physics. Longino (1995) adds to this list six other values that, she argues, can be called epistemic on equally good grounds: empirical adequacy, novelty, ontological heterogeneity, complexity of interaction, applicability to human needs, and diffusion of power.
These arguments are based on an analysis of acceptance as a particular kind of value judgment. In this analysis, acceptance consists of two moments, “valuing” and “evaluation” (McMullin 1983, 5). “Valuing” is about choosing which epistemic values are applied in a particular decision-making situation, and “evaluation” is about assessing the extent to which a theory realizes the chosen epistemic values. Non-epistemic values can legitimately play a role in “valuing” because in this role they are thought to be epistemically harmless or even beneficial if they contribute to an efficient division of research efforts in scientific communities (Kitcher 1993; Solomon 2001). But non-epistemic values are not allowed to play a role in “evaluation.” Next, I turn to an argument from inductive risk that suggests that “evaluation” cannot always be value-free either.
2.2. Argument from Inductive Risk
A number of philosophers argue that the value-free ideal is not feasible because non-epistemic values have a legitimate role to play in the evaluation of risks involved in acceptance (Douglas 2000, 2007, 2009; Wilholt 2009; Steel 2010, 2013; Elliott 2011; Biddle 2013; Brown 2013). The most often cited version of the inductive risk argument can be found in Richard Rudner’s 1953 article titled “The Scientist qua Scientist Makes Value Judgments.” One premise in Rudner’s argument is the view that a scientist as scientist accepts or rejects hypotheses and that acceptance involves uncertainty (1953, 2). In accepting a hypothesis, a scientist has to decide whether the evidence at hand is sufficiently strong to warrant the acceptance. This decision, Rudner argues, depends on the risks involved. If a scientist accepts a false hypothesis, there may be a cost associated with this type of error. In addition, if she rejects a true hypothesis, there may be another cost associated with the other type of error. The key premise in Rudner’s argument is that the assessment of the costs involved in these two mistakes is a matter of moral value judgment (1953, 3).
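Rudner’s point can be given a simple decision-theoretic rendering. The sketch below is my own illustration, not Rudner’s formalism, and all numbers are hypothetical: once costs are assigned to the two types of error, the evidential threshold for acceptance follows from them, so a moral judgment about the costs fixes how strong the evidence must be.

```python
# A minimal decision-theoretic sketch of the inductive risk argument.
# Accepting a false hypothesis costs `cost_false_accept`; rejecting a
# true one costs `cost_false_reject`. Both cost figures are illustrative.

def acceptance_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
    """Probability of truth above which accepting minimizes expected cost.

    Expected cost of accepting is (1 - p) * cost_false_accept; expected
    cost of rejecting is p * cost_false_reject. The two are equal at
    p = cost_false_accept / (cost_false_accept + cost_false_reject).
    """
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# If the two errors are judged equally serious, modest evidence suffices.
print(acceptance_threshold(cost_false_accept=1, cost_false_reject=1))  # 0.5

# If false acceptance is judged far more serious (say, a drug-safety
# claim), far stronger evidence is required before acceptance.
print(acceptance_threshold(cost_false_accept=9, cost_false_reject=1))  # 0.9
```

The snippet shows only that different moral assessments of the error costs yield different evidential thresholds; it does not settle how the costs themselves should be assessed.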
It is important to notice that Rudner’s argument builds on a “thick” conception of acceptance. Given this conception, acceptance involves three moments: the assessment of the evidential warrant of a hypothesis, the identification of error-related risks, and a moral value judgment concerning an acceptable level of risk. Thus, if one endorses a thick conception of acceptance, then the value-free ideal is not attainable because one moment in acceptance involves non-epistemic values. While some philosophers are critical of Rudner’s conception of acceptance (Jeffrey 1956; Hempel 1981; McMullin 1983; Lacey 1999, 2005; Mitchell 2004), those who defend it argue that it is more relevant to the actual practice of science than a thin conception, which involves merely the assessment of the evidential warrant of a hypothesis (Douglas 2000, 2007; Biddle 2013; Steel 2013). Next, I discuss yet another well-known argument against the value-free ideal.
2.3. Argument from Value-Laden Background Assumptions
A number of philosophers argue that the value-free ideal is not feasible because non-epistemic values can legitimately influence the choice of background assumptions that play a role in a scientist’s decision to accept a hypothesis (Longino 1990, 2002; Anderson 1995, 2004; Intemann 2001, 2005; Hawthorne 2010; Richardson 2010; Clough 2011; de Melo-Martín and Intemann 2012). For example, Longino argues that background assumptions are needed to establish the relevance of empirical evidence to a hypothesis or a theory (1990, 43–44; 2002, 127). While background assumptions may not always “encode” social values, they often do so (1990, 216). Value-laden background assumptions should not be judged as necessarily “bad” science because it is difficult to see how evidential reasoning could proceed without them (1990, 128, 216). Whether value-laden background assumptions are acceptable or not will depend on a community practice where they are critically evaluated and either defended, modified, or rejected in response to criticism (1990, 73–74).
Let me summarize the three arguments I have reviewed in this section. The first argument construes acceptance as a value judgment that includes both valuing and evaluation. Non-epistemic values can legitimately play a role in the valuing of certain epistemic values. The second argument suggests that the evaluation of empirical adequacy is a more complex affair than merely determining the degree of evidential warrant. When a scientist evaluates a hypothesis, she is expected to identify error-related risks and to make a moral judgment of their seriousness. The third argument reveals yet another dimension in the structure of acceptance. When a scientist evaluates a hypothesis, she makes value-laden judgments concerning the plausibility of particular background assumptions. Whereas the second argument is forward looking in the sense that a scientist is expected to reflect on the consequences of her decision to accept something as scientific knowledge, the third argument is backward looking in the sense that acceptance is thought to build on an existing body of scientific research providing background assumptions for evidential reasoning.
The analysis of acceptance underlying the three arguments is social in the sense that it draws attention to the context of scientific reasoning. The first argument draws attention to the context of particular epistemic values, the second argument to the context-dependent consequences of accepting a hypothesis, and the third argument to context-dependent background assumptions. Yet, it is important to notice that acceptance, as it is analyzed in the three arguments, can be attributed to individuals and research teams alike. In sections 4 and 5 I argue that in order to do justice to scientific collaboration, we need an analysis of acceptance that is more social than the one revealed in this section.
3. Three Normative Approaches to Values in Science
So far I have shown that there are reasons to believe that the value-free ideal is not feasible independently of whether acceptance is attributed to individual scientists or research teams. Thus, it is appropriate to discuss some normative approaches that have been proposed as alternatives to the value-free ideal. Interestingly, the alternative normative approaches offer guidance not only for individual scientists but also for scientific communities. Yet, I argue that such guidance does not meet the challenges posed by scientific collaboration.
3.1. Social Empiricism
Solomon’s social empiricism gives recommendations for individual scientists and science policy makers (2001, 150). In her view, non-epistemic values can legitimately play a role in determining which empirical successes a scientist considers most important when she decides to work with a particular scientific theory. Solomon thinks that non-epistemic values can play an epistemically beneficial role in science insofar as they generate and maintain an efficient distribution of research efforts among those theories that have some empirical successes. Such a distribution, she argues, is a prerequisite to the long-term epistemic success of science. For this reason, a normative approach to values in science should not discourage the influence of non-epistemic values at the individual level in determining a scientist’s choice of one theory over another (2001, 120). Solomon’s main concern is the proper functioning of scientific communities. She recommends that science policy makers take steps to cultivate diversity and dissent in scientific communities (2001, 117–18). As they cannot know in advance which research programs will be fruitful, they are better off distributing their bets among several alternative lines of inquiry.
3.2. Scientific Integrity
While the term “policy” is mentioned explicitly in the title of Douglas’s book Science, Policy, and the Value-Free Ideal, her main goal is to give advice not to policy makers but to individual scientists (2009, 19). In her view, scientific integrity consists in keeping non-epistemic values to their proper roles in scientific reasoning (2009, 88; see also 156, 176). In order to define their proper roles, Douglas makes a distinction between a direct and an indirect role. Values play a direct role when they act as reasons in themselves to accept a hypothesis or a theory and an indirect role when they act as reasons to accept a certain level of uncertainty (2009, 96). While non-epistemic values are not allowed to play a direct role in scientific reasoning, they can legitimately play an indirect one. A direct role is not acceptable because it means that non-epistemic values play the same role as evidence does (2009, 156). An indirect role, on the other hand, is acceptable because scientists are morally responsible for their knowledge claims and the predictable consequences of making such claims (2009, 106). As Douglas herself admits, her normative approach is meant to define a minimal criterion for good scientific practice rather than defend principles for an epistemically ideal community. Given that scientific integrity is a minimal criterion, an individual scientist can try to realize it even in a community that is less than ideal from an epistemic point of view. Next, I turn to an approach that is concerned not only with individual responsibilities but also with communities.
3.3. Social Account of Objectivity
In Longino’s (1990, 2002) view, non-epistemic values can legitimately play a role in a scientist’s choice of background assumptions as long as no one has challenged these assumptions. Individual scientists are not held responsible for policing the role of non-epistemic values in scientific inquiry on their own. Such a responsibility would be too demanding because “there are no formal rules, guidelines, or processes that can guarantee that social values will not permeate evidential relations” (2002, 50). An individual scientist may not even be aware that her preferred background assumptions resonate with certain non-epistemic values (1990, 80). For these reasons, a social account of objectivity is needed to make sure that value-laden background assumptions can be identified and criticized.
Like Solomon, Longino is concerned with the proper functioning of scientific communities. A social account of objectivity is the view that scientific knowledge is objective to the degree that a relevant scientific community satisfies the four norms of public venues, uptake of criticism, shared standards, and tempered equality of intellectual authority (1990, 76–81; 2002, 129–31). Yet, Longino’s approach differs from Solomon’s in that it does not assume that scientific communities are capable of realizing the normative ideal without assigning responsibilities to individual scientists. For example, the norm of uptake means that an individual scientist has an obligation to respond to criticism.
Let me wrap up my findings. While these three accounts address both individual scientists and scientific communities, they all neglect an intermediate social level in science: research groups. The guidelines and norms they offer are meant to be valid independently of whether scientists work outside or within research groups. In both cases an individual scientist is expected to work with empirically successful theories (Solomon 2001) and acknowledge her moral responsibility for error-related risks (Douglas 2009). In both cases she has certain obligations in virtue of being a member of a scientific community (Longino 1990, 2002). Clearly, the term “social” in Solomon’s and Longino’s social epistemologies means that their epistemologies are concerned with scientific communities, not with research groups. It is time to turn to the epistemic structure of scientific collaboration, which gives rise to a more social analysis of acceptance than the one we have seen so far.
4. Values in Collective Acceptance
While there is a growing body of literature on the role of collective acceptance and trust in science, this literature has not yet been explored in connection with the debate on the proper role of values in science. The literature on collective acceptance aims to understand community-wide scientific changes (Gilbert 2000; Andersen 2010), expert advisory committees (Beatty 2006), and scientific manifestos (Bouvier 2004). The literature on trust aims to account for the role of moral virtues in science (Hardwig 1991), the epistemic importance of gender and race equality in science (Rolin 2002; Wray 2007a), and relations between scientific and lay communities (Grasswick 2010; Anderson 2011). In this section I discuss the role of values in collective acceptance, and in the next section I discuss the role of values in trust-based acceptance. I argue that insofar as collective acceptance and trust-based acceptance play an epistemic role in science, some moral and social values can play a legitimate role in acceptance. While moral and social values are often seen as non-epistemic values, these particular moral and social values are extrinsic epistemic values.
4.1. Plural Subject Account of Collective Acceptance
A number of philosophers use Margaret Gilbert’s plural subject account of collective belief to understand collective acceptance in science (Wray 2001, 2007b; Bouvier 2004; Beatty 2006; Häkli 2006; Staley 2007; Andersen 2010; Rolin 2010). On such an account, a group of scientists accepts a view insofar as the group members are jointly committed to accepting the view as a body (Gilbert 2000, 39). To claim that a group accepts a view in this sense is not the same thing as to claim that all or most group members accept the view. It is possible that a group accepts a view that all or some group members do not accept as their personal view. It is not, of course, common in scientific collaborations that a group’s collective view and group members’ personal views are in conflict; otherwise, group members would hardly consider collaboration an attractive option for them. But it is important to notice that in principle an individual scientist’s personal and collective views can diverge, and they sometimes do (Beatty 2006; Staley 2007). A scientist can let a particular view stand as the group’s position even when she has some reservations or doubts concerning it.
Given Gilbert’s account of collective acceptance, moral and social values are built into the very structure of collective acceptance. As Gilbert explains it, a group’s joint commitment to accept a view as a body generates obligations for group members (2000, 44). Once a group member has openly expressed a commitment to jointly accept a view as the position of the group, she is obliged not to question the group view publicly. In some research groups an individual scientist may ask that her name be removed from the list of authors if she does not personally agree with the argument in the joint paper and is not willing to let the argument stand as the position of the group. But when she signs a joint paper, her act is usually taken to mean that she has expressed a commitment to jointly accept the content of the paper, knowing that such a commitment generates obligations. While an obligation to accept the group view is not universal in the way that some moral obligations are, it is nevertheless of a moral and social nature because it is based on an agreement among group members. Therefore, a joint commitment provides group members with a moral and social reason for asserting and supporting a view (see also Mathiesen 2006, 169).
Thus, if collective acceptance is allowed to play an epistemic role in science, then a moral and social reason is allowed to play a role in acceptance. Such a reason is a scientist’s commitment to carry on the collaboration even when it means that she has to suppress some of her personal views temporarily. While such a strong commitment to collaboration may seem to be irrational from an epistemic point of view, I argue that it is not necessarily so.
4.2. Group versus Individual Justification
A common assumption is that a group’s joint commitment to accept a view as a body is not effective in getting at truth (Goldman 2004; Mathiesen 2006). This assumption, I argue, is false. From an individual point of view it may seem epistemically irrational to accept a view on the grounds that other group members accept it and one has promised to be loyal to the group. From the point of view of the group, however, the situation is different. Like individual scientists, research groups are expected to provide epistemic justification for their views. Groups and individuals are not different in this respect. However, group justification differs from individual justification in that it involves not merely reasoning but also an aggregation procedure. By an aggregation procedure I mean a mechanism for aggregating group members’ individual views into corresponding collective views endorsed by the group as a whole. Whereas valid reasoning is likely to lead an individual to accept a consistent set of views, it is not sufficient for a group to arrive at a consistent set of collective views. As Christian List and Philip Pettit (2011) argue, a group has to settle on an aggregation procedure (or some other type of decision-making procedure) in order to achieve a consistent set of collective views. Let me explain the argument in more detail.
Let us assume that a group G includes three persons, A, B, and C, and that each member of the group is competent in deductive reasoning. Let us assume also that the task at hand is to find out whether group G is justified in believing that (p & q) is true given that each group member has already made her individual judgments concerning the truth values of p and q. For example, if each group member believes that p is true and q is true, then each group member is justified in believing that (p & q) is true. In this case it does not make a difference whether the group decides to aggregate individual judgments concerning the premises or the conclusions. In both cases, group G is justified in believing that (p & q) is true. However, the situation is more complex when the group members disagree about the premises but nevertheless want to arrive at a collective view. Let us assume that A believes that p is true and q is false, B believes that p is false and q is true, and C believes that both p and q are true. It follows that both A and B are justified in believing that (p & q) is false and C is justified in believing that (p & q) is true. Thus, group G seems to be justified in believing that (p & q) is false when it chooses to aggregate individual conclusions by means of a majority vote. Also, group G seems to be justified in believing that p and q are true when it chooses to aggregate individual premises by means of a majority vote. This is because two group members, A and C, believe that p is true and two group members, B and C, believe that q is true. The troubling upshot is that group G seems to be justified in believing an inconsistent set of propositions: p, q, and not (p & q).
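The dilemma can be made concrete in a few lines of code. The sketch below is my illustration; the profile of judgments is the one just described, and it shows how premise-based and conclusion-based majority voting come apart.

```python
# A minimal sketch of the judgment aggregation problem described above.
# The profile of individual judgments matches the example in the text.

from typing import Dict, List

# Each member's judgments on the premises p and q.
profile: List[Dict[str, bool]] = [
    {"p": True, "q": False},   # member A
    {"p": False, "q": True},   # member B
    {"p": True, "q": True},    # member C
]

def majority(votes: List[bool]) -> bool:
    """True iff a strict majority of the votes are True."""
    return sum(votes) > len(votes) / 2

# Premise-based procedure: vote on p and on q, then infer (p & q).
p_group = majority([m["p"] for m in profile])   # True (A and C)
q_group = majority([m["q"] for m in profile])   # True (B and C)
premise_based = p_group and q_group             # True

# Conclusion-based procedure: each member infers (p & q), then vote.
conclusion_based = majority([m["p"] and m["q"] for m in profile])  # False (only C)

# The two procedures disagree: applying majority voting to premises and
# conclusions alike would commit the group to p, q, and not-(p & q).
print(premise_based, conclusion_based)  # True False
```

The point of the sketch is only that the inconsistency arises mechanically from applying the same voting rule at two levels; which procedure a group ought to adopt is precisely what has to be settled by a joint commitment.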
The abstract nature of the judgment aggregation problem has led some philosophers to question its relevance to scientific knowledge (Magnus 2013). Yet, I argue that there is at least one practical lesson research groups can learn from the problem. In order to block the possibility of inconsistent collective views, research groups need to “collectivize reason” (Pettit 2003, 176) by settling on an aggregation procedure (or another type of decision-making procedure). The reason for this is that consistency in a set of views is a necessary condition of having justified views, for both individuals and groups. Scientific publications are expected to be consistent independently of whether they are authored by individuals or groups (Rolin 2010, 220).
Given the need to “collectivize reason” in research groups, a strong commitment to scientific collaboration is epistemically rational because it enables the group to steer clear of inconsistent collective views. Also, a group’s decision to follow a particular aggregation procedure requires a joint commitment on the part of its members. Whereas group members can conduct reasoning individually, the aggregation of individual views into a collective view cannot be done individually. It has to be done by the group jointly as a body. This is the case even when one group member is chosen as the lead author who is in charge of compiling individual contributions into a consistent manuscript. The other group members need to recognize the authority of the lead author, and they are asked to give their approval to the final outcome of the writing process. Thus, the moral and social values generated by a joint commitment are extrinsic epistemic values because they are conducive to internal consistency, and internal consistency is an intrinsic epistemic value because it is a necessary condition for truth (see, e.g., Steel 2010, 18).
4.3. Objections and Replies
So far I have argued that a joint commitment to accept an aggregation procedure and its outcome is epistemically rational insofar as it enables a group to avoid inconsistency. It is important to keep in mind also that when scientists work in research groups they can achieve more ambitious epistemic goals than they could if they were working on their own. Collaboration may sometimes require an individual to make a compromise, but the compromise is balanced by the epistemic benefits of collaboration (Fallis 2006; Wray 2006). Next, I wish to counter two objections that may be raised against my argument.
One objection is that there is no need to enforce an internally consistent collective view by means of judgment aggregation procedures if the group members take their time to deliberate on their views. Given enough time and resources, disagreements among group members will be ironed out and the group will arrive at a collective view that is not only internally consistent but also the personal view of each group member. In this case, the group’s view can be understood in a summative way. On a summative account, the group accepts a view if and only if all or most group members accept the view (Gilbert 2000, 37). A summative account of group views in itself does not import moral and social values into acceptance because it does not involve the notion of joint commitment.
Against this objection I argue that group deliberation is not a feasible ideal in all areas of contemporary science. Kent Staley (2007) argues that deliberation is difficult to implement in very large research groups, which can be found in some areas of physics. Such groups can include up to 300 or even 400 scientists. Large research groups need to strike a balance between two aims. On the one hand, they seek to avoid making false claims, and on the other, they seek to make novel and significant true claims (Staley 2007, 323). The pressure to publish novel and significant results means that there is an incentive to aggregate individual views into a collective view even when it means that some group members’ personal views are dismissed for the moment. As Staley explains, there has to be a willingness to make compromises between the individual group members’ personal views and the collective statement of the group (2007, 324).
Another objection to my argument is that the moral and social obligations generated by a joint commitment cannot be epistemically rational because they are likely to suppress dissent in research groups, and dissent is epistemically beneficial, as Solomon (2001) argues (see sec. 3). While I grant that dissent is an epistemic resource in scientific communities (see also Zollman 2010; Fehr 2011; Intemann 2011; Rolin 2011), I argue that imposing epistemic conformity in research groups is not a problem as long as it is balanced by a reasonable amount of dissent in scientific communities. Acknowledging the epistemic importance of dissent does not undermine my argument; instead, it supports the view that there is an asymmetry between the social epistemology of research groups and the social epistemology of scientific communities. Whereas research groups are expected to speak with one voice, scientific communities are not expected to do so. One might even argue that insofar as large-scale collaborations are becoming the rule rather than the exception, philosophers should be increasingly concerned with the question of how diversity of perspectives and dissent are maintained in scientific communities.
Having countered the two objections, I conclude that a particular moral and social value can play a legitimate role in acceptance if acceptance is understood to involve not only individual but also collective acceptance. The moral and social value is an obligation that group members have in virtue of their joint commitment to let a particular view stand as the position of the group. I do not claim that all group views involve a joint commitment on the part of group members. In some cases, a summative account of group views will probably be satisfactory. But I do claim that in some other cases, a collective account of group views is more adequate than a summative account. This is the case in large collaborations where the group is under pressure to publish novel and significant findings without waiting for all the group members to arrive at a summative consensus via deliberation. I have argued also that the moral and social values implicit in collective acceptance are extrinsic epistemic values because they promote the attainment of truth by guaranteeing the internal consistency of collective views.
Since collective acceptance is a special case, there is a need to develop an alternative, noncollective account of the epistemic structure of scientific collaboration. For example, what Melinda Fagan (2011) calls an “interactive” account of group views is a noncollective alternative both to a collective acceptance account and to a summative account of group views. On an interactive account, the epistemic structure of scientific collaboration consists of relations of trust and other interactions among group members (Thagard 2010, 279; Fagan 2011, 251; Wagenknecht 2013, 207). In the next section I argue that even if one prefers a trust-based account of group views to a collective acceptance account, moral and social values can play a legitimate role in acceptance. As in the case of collective acceptance, a particular moral and social value is properly understood to be an extrinsic epistemic value.
5. Values in Trust-Based Acceptance
Trust plays an epistemic role in science when trust in a testifier functions as a reason to accept an observation report, an experimental result, a background assumption, or some other piece of information (Hardwig 1991; Kitcher 1993; Kusch 2002; Wilholt 2013; Wagenknecht 2014). The main difference between collective and trust-based acceptance is that whereas in the former case scientific knowledge is attributed to the group as a whole, in the latter case it is attributed to individual group members. An individual group member’s set of justified views is extended dramatically if trust in another group member is seen as a sufficiently good ground for epistemic justification. Thus, trust makes it possible for each group member to know more than she could know otherwise.
As John Hardwig argues, trust in a testifier involves trust in the moral and epistemic character of the testifier (1991, 700). When a scientist trusts a testifier, she trusts that the testifier is honest in giving her testimony and competent in the relevant domain. In an empirical study of two research groups, Susann Wagenknecht argues that scientists use various strategies to secure trust in other group members (2014, 21). Scientists are not only engaged in question-and-answer types of interactions; sometimes they also witness the work of their collaborators in order to increase their understanding of others’ contributions. Yet, trust plays an irreducible role in acceptance because other group members do not know the details as well as the person who is in charge of running an experiment (Wagenknecht 2014, 11–13). Next, I argue that while scientists often expect to have some empirical warrant to support their trust in the epistemic character of their collaborators, the moral character of their collaborators is often taken for granted.
5.1. Default Assumption of Honesty
Honesty is often taken for granted in scientific collaborations because evidence of moral character is incomplete. When group leaders recruit scientists into their teams, they may seek evidence of the moral character of a candidate in letters of recommendation (Frost-Arnold 2013). Also, when scientists work in relatively small and stable teams, they are likely to trust the moral character of other team members because an extended experience of collaboration gives them a good reason to do so (Wagenknecht 2014). But even when there is evidence of good moral character, trust in the moral character of other team members is underdetermined by evidence. This is because the very notion of character refers to a disposition to behave in certain ways across a range of social situations. Consequently, trust in the moral character of other scientists is at least partly based on a principle of charity.
The assumption of honesty plays an even more prominent role in large-scale collaborations where scientists do not know all the other group members personally. Membership in the same research group may be a reason for one group member A to trust the moral character of an unknown group member B if A believes that a third group member C, whom A finds trustworthy, knows B personally and has found B’s moral character flawless. But even in this case, A’s trust in the moral character of B and C is based on incomplete evidence.
The upshot is that honesty functions as a default assumption in trust-based acceptance. To say that honesty is a default assumption means that a testifier is assumed to be honest unless one has a reason to doubt it. Having made a mistake is not yet a reason to doubt a scientist’s honesty since many mistakes are due to oversight, lack of experience, or some other shortcoming in the scientist’s epistemic character. One has a reason to doubt someone’s honesty when there is evidence of intentional attempts to distort the research process (or of gross negligence leading to such distortions). The default assumption of honesty is a moral value judgment because it is accepted for a moral reason. The moral reason is the belief that it is morally wrong to doubt another group member’s honesty when one does not have a reason to do so. It follows that if acceptance is understood to include trust-based acceptance, then a moral value judgment can play a legitimate role in acceptance.
Also, the default assumption of honesty is an extrinsic epistemic value because it contributes to epistemic justification in the context of scientific collaboration. While honesty is not the only requirement for a person’s being a trustworthy source of information, the assumption that a person is honest in giving her testimony is one reason to consider her trustworthy. Given the default assumption of honesty, trust in a testifier can be a good reason to accept a piece of information when a person does not have first-hand evidence. In scientific collaborations trust is often a superior reason to accept a piece of information because it gives one access to the best available evidence. As Hardwig (1991) explains, trust-based acceptance does not mean that evidence does not matter in epistemic justification. Quite the contrary, trust-based acceptance is needed precisely because evidence matters and it is too extensive and complex to be had by any other means than by trusting others (Hardwig 1991, 706). Next, I wish to reply to three objections that may be raised against my argument.
5.2. Objections and Replies
The first objection is that reliance on moral value judgments concerning honesty can be reduced, if not eliminated, by designing reward systems and sanctions so that they provide scientists with a strong incentive to act in an honest way. When reward systems and sanctions work well, a scientist has a nonmoral reason to trust another scientist’s testimony. The nonmoral reason is her belief that the other scientist is likely to act in an honest way because, as a self-interested and rational actor, the other scientist understands that it is in her best interest to do so. In what Karen Frost-Arnold (2013) calls a self-interest account of trust, scientists trust each other’s testimony because they believe that sanctions for betraying trust are so serious that it is in their best interest to be trustworthy.
Against this objection I argue that while a self-interest account of trust can give a partial explanation of why scientists trust each other in collaborations, it has limitations. As Hardwig points out, institutionalized control mechanisms, such as replication of results, may diminish the need to rely on the moral character of the testifier, but they cannot obviate it (1991, 707). One reason for this is that replication is not always done because it may not lead to a publication in a high-impact journal. Replication is also costly and likely to delay other research projects. For example, randomized clinical trials often remain what James R. Brown (2010) calls “one-shot” science. Given that results are not always replicated, it may take a long time to discover dishonesty. And if dishonesty is not detected, it will not be punished. For this reason it is unlikely that moral value judgments will be eliminated from the evaluation of trustworthiness. As Hardwig explains it, “There are no ‘people-proof’ institutions” (1991, 707). If Hardwig is right, moral value judgments can legitimately play a role in the evaluation of trustworthiness. This means that they can legitimately play a role in trust-based acceptance.
The second objection to my argument is that the moral values implicit in trust-based acceptance are so remotely related to the attainment of truth that they do not deserve to be called epistemic values. Against this objection I argue that it is based on a narrow definition of epistemic value that is in need of further defense. Given a narrow definition, the set of epistemic values includes merely intrinsic epistemic values, that is, values that are either indicators of truth or necessary for truth (Steel 2010, 15). It does not include extrinsic epistemic values, that is, values that promote the attainment of truth without themselves being indicators or requirements of truth (Steel 2010, 18). Given the narrow definition, epistemic values can have a justification independently of scientists’ historical reliance on them (Douglas 2013, 801). Consequently, epistemic values can be identified independently of scientific practices as they have evolved historically. While these features may be attractive for some purposes in philosophy of science, I see no reason to limit the scope of philosophical inquiry to intrinsic epistemic values. Extrinsic epistemic values are no less interesting for those philosophers who aim to understand actual scientific practices.
As with the second objection, the third one is also concerned with the definition of epistemic value. Someone may argue that a broad definition of epistemic value, one including not only intrinsic but also extrinsic epistemic values, tends to blur the epistemic/non-epistemic distinction altogether. If some moral and social values are extrinsic epistemic values, then almost any value can be argued to be an epistemic value, the objection goes. But this is not the case. If the set of epistemic values includes extrinsic epistemic values, then the epistemic/non-epistemic distinction is context dependent because the effectiveness of extrinsic epistemic values in bringing about the desired epistemic ends depends on the circumstances, and the circumstances are likely to vary from one context to another (Steel 2010, 20). This means that it may be a demanding task to determine whether a value is extrinsically epistemic. But it does not mean that we should abandon the distinction between epistemic and non-epistemic values.
To sum up the argument, some moral and social values deserve to be called extrinsic epistemic values because in the context of scientific collaboration they promote epistemic justification, either the justification of group views (as I have argued in sec. 4) or the justification of individual views that are based on testimony (as I have argued in sec. 5). This result adds a novel dimension to the three normative approaches I have reviewed in section 3. Moral and social values are allowed to play a role in acceptance not only because they are required by moral responsibility (Douglas 2009) or because they can generate diversity and critical perspectives (Longino 1990; Solomon 2001). Some moral and social values should be permitted to play a role in acceptance because they are woven into the epistemic fabric of scientific collaboration.
6. Conclusion
The debate on the proper role of values in acceptance has been limited so far because it has focused either on individual scientists’ decision making independently of scientific collaborations or on the proper functioning of scientific communities. In order to reveal the limitations, I have reviewed three arguments against the value-free ideal and three alternatives to the value-free ideal. In each case a research team is treated as an epistemic black box, and the epistemic significance of its inner organization is overlooked.
In order to explain the group perspective on values in science, I have introduced the notions of collective acceptance and trust-based acceptance. In the case of collective acceptance the group perspective means that the group is the agent of acceptance, whereas in the case of trust-based acceptance it means that an individual group member is epistemically dependent on other group members. Most significantly, the group perspective challenges the assumption that all moral and social values are non-epistemic values. In the case of collective acceptance, a joint commitment to collaboration generates moral and social obligations that can play a legitimate role in acceptance. In the case of trust-based acceptance, a default assumption of honesty is a moral value judgment that can play a legitimate role in acceptance. In both cases, moral and social values are extrinsic epistemic values because they promote the epistemic justification of either group views or individual views in the context of epistemic dependency. Some values are moral, social, and epistemic at the same time.