1. Introduction
Classical logic presupposes concepts with precise boundaries.Footnote 1 So when their peripheries begin to look fuzzy, or less than fully precise, problems emerge. One difficulty that has received considerable attention is the sorites paradox – or paradox of the heap. It tends to arise when classical inference patterns are used to reason with predicates that are tolerant of small changes in their extension.Footnote 2 For example, the meaning of ‘heap’ accommodates stuff being added or removed, and, consequently, a heap of sand can tolerate the removal or addition of single grains of sand while retaining its status as a ‘heap’. As a result, repeatedly removing single grains of sand while tracking those changes with a classically valid principle of logic like modus ponens can lead to the conclusion that a single grain of sand is a heap.Footnote 3
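The structure of that reasoning can be made explicit. The following is a standard schematic presentation (the starting number is arbitrary), where H(n) abbreviates ‘a collection of n grains is a heap’:

```latex
\begin{align*}
& H(100{,}000) && \text{(base premise)} \\
& \forall n\,\bigl(H(n) \rightarrow H(n-1)\bigr) && \text{(tolerance premise)} \\
& H(99{,}999) && \text{(modus ponens)} \\
& \quad\vdots \\
& H(1) && \text{(modus ponens, repeated)}
\end{align*}
```

Each step is classically valid and each premise looks true, yet the conclusion that a single grain is a heap is plainly false.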
Several different strategies for dealing with these difficulties are prominent in the vagueness literature, but one coarse way to distinguish between them is according to their degree of departure from classical logic.Footnote 4 Indeed, the literature on vagueness is replete with non-standard logics, and most seem motivated by the idea that developing alternatives to classical logic will avoid the problematic inferential relations that lead to sorites-style difficulties. In contrast to such strategies, epistemicists maintain a full-blooded commitment to classical logic by arguing that predicates must be sharply bounded and that vagueness is the product of epistemic limitations (Fara 2003; Sorensen 1988, 2001; Williamson 1994). This approach commits epistemicists to the claim that a single grain of sand changes a ‘heap’ of sand to a ‘non-heap’, or that a single hair makes the difference between being ‘bald’ and ‘not-bald’. Such a seemingly bizarre view has come in for a good deal of criticism, because it seems to endorse a commitment to precision that flies in the face of common sense.
In this essay, I provide a backhanded defense of epistemicism: backhanded because I argue that epistemicist commitments are required to deal with practical difficulties of a certain kind.Footnote 5 The suggestion, then, is that effectively managing a variety of vagueness-plagued situations requires adopting the theoretical posture of epistemicists. To support that conclusion, I spend most of the essay arguing for a different point, which is that thinking like an epistemicist in the face of vagueness – that is, using a precise formal system like classical logic to manage situations plagued with vagueness – requires strategies for generating precision. These strategies serve as intermediaries, allowing the formal system preferred by epistemicists to get a grip on the relevant situation.Footnote 6 My defense of epistemicism, then, is subsidiary to a more peculiar and I think interesting philosophical point, which is that an agent’s practical activity rightly influences the shape of his or her more speculative thought.
I begin by suggesting that classical logic is motivated by a view about the relation between reality and correct patterns of inference. To the extent that one believes that patterns of inference are, or should be, reflective of the way things are, commitments about logic (or metaphysics) bring in tow commitments about metaphysics (or logic). With this background in place, I sketch the challenge vagueness poses for such a view and show how epistemicists have dealt with the challenge. Next, I develop two different examples to show that managing specific vagueness-plagued situations requires adopting epistemicist commitments. In such situations, agents are required to adopt the theoretical posture of an epistemicist. Doing so allows them to think classically about vague situations. The examples, however, should not be read as promoting a thoroughgoing endorsement of epistemicism. Rather, they show that in a limited range of situations the formal commitments of epistemicism are a reasonable response to a certain kind of practical problem. If that is right, alternative kinds of situations should call for different commitments. I endorse this view in the penultimate section of the paper and conclude with a conditional endorsement of epistemicism: in a variety of specific situations, adopting the formal commitments of epistemicism is reasonable.
2. The background
Let me begin this section by emphasizing that I am not attempting to pin the views sketched in it on any particular theorist. Rather, my aim is to paint a very general picture, one that lies behind a widespread philosophical perspective. In attempting to sketch such a general picture, I am surely distorting any particular philosopher’s more nuanced position. Nevertheless, as with most sketches, the point is not to represent the details in all their particularity but, rather, to capture something of the sentiment underlying them. It is the sentiment that I am after.
During the twentieth century, much formal work was motivated by a kind of anti-psychologism: the thought that figuring out correct forms of inference should be done by looking at relations that obtain in the world, not in the mind. The general idea was (and is) that since there must be genuine relations between states of affairs, or facts,Footnote 7 determining what those relations are would tell us something about how we should reason about them. For example, by determining whether there is some specific relation between the fact that ‘it is raining’ and the fact that ‘barometric pressure has decreased’, one could determine whether it is justifiable to make inferences according to that relation, whatever it turns out to be. Behind this anti-psychologistic view of logic is another thought: states of affairs, or facts, are discrete and determinate. Consequently, the relations that obtain between them must also be discrete and determinate. As this line of thinking has it, once we get the facts sorted out, the relations between them will be clear, leading to a very crisp picture of the relations that exist between features of the world.
Of course, we do not reason with facts, we reason with propositions. As a result, only if propositions are tightly connected to facts will the relations evident between them show up in the relations between propositions.Footnote 8 However, buying into the robustness of that connection generates a tendency to transfer presuppositions. On the one hand, given certain ideas about the nature of propositions, we tend to think about facts in like fashion. On the other hand, given presuppositions about the nature of facts, we tend to think about propositions in similar terms. As a result, if we assume facts (or propositions) are discrete and that there are determinable relations between them, there will be a tendency to believe that propositions (or facts) must display these properties as well.
This very general sketch is significant for how philosophers have understood logic, because classically valid patterns of inference – that is, those patterns characteristic of classical logic – depend on it. Notice, however, that characterizing inferential relations according to this view reflects the assumptions built into the model. We assume that propositions faithfully represent the facts; we assume that states of affairs are discrete and determinable; we assume that the logical relations between facts are stable; we assume that the predicative aspect of propositions is sharply bounded; and so on. Without these assumptions, the patterns of inference would be dramatically different than those of classical logic.
Pointing out that classical logic presupposes perhaps controversial philosophical views need not be seen as a criticism. Instead, what we should notice about classical logic is something that is true of any representation: to accurately reflect aspects of a target object, a representation must make certain distorting commitments. In the case of classical logic, for the anti-psychologistic formal system to accurately model truth-preserving patterns of inference, it must make certain commitments about facts, propositions, the relations existing between the two, and so on. Of course, some of those commitments will likely distort how things actually are, but such potential distortions need not be considered inherently undesirable.Footnote 9 Indeed, we might even wonder when the distorting commitments typical of a particular formal system are the rational ones to make.
3. The problem
Vagueness threatens the general picture I have just sketched in two different ways. First, if features of the world lack precise boundaries, they begin to blur together, and without any clear distinction between features of the world, the possibility of understanding the general relations between things looks unpromising. Vagueness, then, threatens to undermine the anti-psychologistic thought that logical relations are appropriately determined by figuring out the relations between facts. Second, if the propositions we use to represent the world are built using blurry conceptual or predicative components, it will be difficult to understand the relations that exist between propositions for the same reason. Consequently, vagueness poses a problem for any attempt to determine legitimate inferential relations, which threatens to undermine the development of any formal system of inference.
To see the depth of the problem, consider a simple example. The terms ‘automobile’ and ‘orange’ are connected to things in the world that fall within their extensions. If the set of objects that fall within the extension of either of these two terms is nicely bounded, it is possible to determine the relations between the concepts. However, if the extension of either term fails to pick out precisely bounded sets of objects, it is much less clear how to figure out those relations. For instance, suppose that the term ‘automobile’ picks out a sharply delimited set of objects, but that the term ‘orange’ is vague. On this supposition, our capacity to make justifiable inferences about orange automobiles is undermined.
The problem arises because a properly constructed sorites sequence on clear cases of ‘orange’ or ‘not-orange’ can undermine the meaning of the term. How can that be? ‘Orange’ is susceptible to small, unnoticeable shifts in extension. Suppose, then, that we begin with a vat of orange paint, and slowly, one drop at a time, add white paint. Surely one drop of white paint will not alter the color of a vat of paint. Thus, if the paint is orange, the paint plus one drop of white paint is still orange. But adding another drop to that unnoticeably altered vat of orange paint will not alter its color. Consequently, it too must be orange. Eventually, one drop at a time, repeated application of this procedure along with its parallel form of inference leads to the conclusion that a vat of white paint is orange. In this way, the meaning of the term is undermined. Similar procedures can be applied to any vague term across an indefinite number of dimensions. (The idea is that given a sufficient number of sorites-style steps, anything can be changed or altered to become anything else.) Consequently, the entailment relations between propositions that depend on the meaning of vague terms cannot be maintained.
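The mechanics of the procedure can be mimicked in a few lines of code. This is only a toy (the drop counts are invented for the example), but it shows how repeated application of the tolerance conditional, each step an instance of modus ponens, carries a clearly orange vat to a nearly white one without the verdict ever changing:

```python
# A toy sorites on 'orange': one drop of white paint never flips the
# verdict, so repeated modus ponens certifies a nearly white vat as orange.

ORANGE_DROPS = 10_000          # invented: drops of orange paint in the vat
WHITE_DROPS_ADDED = 1_000_000  # invented: white drops added one at a time

verdict_is_orange = True       # premise: the vat starts out clearly orange
for _ in range(WHITE_DROPS_ADDED):
    # Tolerance conditional: if the vat was orange before this drop,
    # it is still orange after it. The verdict can never change.
    verdict_is_orange = verdict_is_orange

orange_fraction = ORANGE_DROPS / (ORANGE_DROPS + WHITE_DROPS_ADDED)
print(f"orange paint remaining: {orange_fraction:.2%}")         # ~0.99%
print(f"verdict: {'orange' if verdict_is_orange else 'white'}")  # orange
```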
This, then, is the crux of the difficulty: when the possibility of vagueness enters the conceptual landscape, formal inferential relations blur and begin to collapse. Without precise extensional boundaries, we cannot figure out the relations necessary for determining what follows from what.Footnote 10
4. The epistemicist solution
There are a number of alternatives for dealing with these problems. Several theorists have tinkered with classical semantics, and thus the traditional conception of logic, with the hope of producing formal systems capable of handling the challenges of vagueness. Fine’s (1997) supervaluationism, Edgington’s (1997) degree-theoretic approach, Tye’s (1997) three-valued logic, the subvaluationism of Hyde (1997), the contextualism of Shapiro (2008),Footnote 11 as well as many other formal alternatives to classical logic have been developed in an effort to better represent the inferential connections between propositions formed using vague predicates. In contrast to these non-classical approaches, epistemicists have retained their commitment to classical logic and tried to deal with vagueness by relocating the problem. For them, the extension of a predicate does not blur at the edges, and the thought that facts are anything but discrete is nonsense.Footnote 12 Vagueness is not a problem that arises due to fuzzy states of affairs or the imprecision of our semantic representations; rather, vagueness arises due to epistemic limitations. Epistemicists, then, are committed to something like the background story I have sketched above: facts, predicates, and propositions are precise, and there is a tight correlation between the formalism of classical logic and the facts to which it is connected.
This strategy for dealing with vagueness has the virtue of retaining anti-psychologistic commitments while explaining the phenomenon of vagueness. Accordingly, vagueness is a problem that results from the limitations of having a mind,Footnote 13 and, thus, is a problem that stands outside the relation between representational systems and their corresponding objects. Creatures with minds can never know where the precise boundaries of vague terms lie. Nevertheless, we can rest assured that the boundary is there somewhere. For an epistemicist, ‘ignorance is the real essence of the phenomenon ostensively identified as vagueness’ (Williamson 1994, 202). Accordingly, the problems of vagueness are not semantic or metaphysical; they are epistemic.
Because of this, we need not bother with formal representations of vagueness that reject bivalence and otherwise alter classical logic in the hope of representing some characteristic feature of vagueness. As Williamson (1994, 186) points out:
If one abandons bivalence for vague utterances, one pays a high price. One can no longer apply classical truth-conditional semantics to them, and probably not even classical logic. Yet classical semantics and logic are vastly superior to the alternatives in simplicity, power, past success, and integration with theories in other domains. It would not be wholly unreasonable to insist on these grounds alone that bivalence must somehow apply to vague utterances, attributing any contrary appearances to our lack of insight (emphasis in original).
The point is that at all costs we should make sense of vagueness while retaining our commitment to classical, bivalent logic. That commitment, as well as the view that language users are ignorant of the sharp boundaries of vague predicates, is distinctive of epistemicism.
5. Thinking like an epistemicist
My aim in this section is to demonstrate that managing some vagueness-plagued situations requires adopting epistemicist commitments. In the next section, I argue that this fact supports the more general view that an agent’s practical activity rightly influences the shape of his or her speculative thought. The link between this and the next section, then, is the following: since epistemicist commitments are required when managing some vague situations,Footnote 14 we should expect different interests to motivate the use of alternative formal systems.
Let me pause to emphasize that I am not arguing for (or against) any formal system. Rather, I am trying to show that we must commit to certain ways of thinking in order to perform specific activities. Without adopting such commitments, we could not do certain things. So what kinds of vagueness-plagued situations require epistemicist commitments? To my mind, the world of sports is filled with examples, but I want to look at the commitments required to effectively play a tennis match. Sports-related situations, however, are not the only ones worth thinking about. Scientific problems also present a range of vagueness-plagued situations that tend to be managed by embracing epistemicist views. For example, biologists and philosophers of biology adopt such a posture to think about evolutionary history despite its seeming vagueness. Systematics, then, will also provide a useful example. To ease into things, I begin with tennis.
5.1. In or Out
Tennis balls are fuzzy, they often travel at speeds well over 100 mph (just over 44 m/s), and they frequently land on or next to a less-than-precisely demarcated line. If there are vague situations, this is one of them. And, yet, despite the obvious situational vagueness, tennis officials must use bivalent standards for the concepts ‘in’ and ‘out’ and reason as if they nicely map onto the facts – that is, they must reason according to the inferential patterns of classical logic.Footnote 15 In fact, I want to suggest that for several reasons tennis officials – chair umpires in particular – reflect the philosophical commitments of epistemicism when managing the inferential tasks posed by tennis matches. First, their job description requires them to believe that there is a fact of the matter as to whether the ball landed in or out. Second, they have to be committed to the precision of the predicates being used to manage the tennis match – that is, the terms ‘in’ and ‘out’ cannot be fuzzy or have indeterminate boundaries. Third, the inferences they make to manage the match must be truth-preserving – that is, they must reason classically. And, finally, they must believe that their own ignorance of the relation between features of the world and the precise terms used to describe it explains the seeming vagueness of the situation.
For officials, the trouble with mapping ‘in’ or ‘out’ to the situation at hand is not due to the vagueness of the terms, or to the fuzziness of the facts; rather, the difficulty results from their own epistemic limitations. We can easily imagine, for example, a chair umpire saying to himself, ‘the ball must have been in or out, but it was moving so fast, and everything seemed a blur in the moment.’ This self-reflective acknowledgment of ignorance seems reasonable: determining whether a ball traveling at speeds well over 100 mph has come into the slightest contact with a seemingly imprecise line is obviously beyond human abilities. Yet officials must adopt the posture of an epistemicist: the ball must be either in or out, and the seeming vagueness of the situation is purely the result of what we cannot in principle know. The situation of tennis officials, then, seems to correlate well with the philosophical commitments endorsed by epistemicists.
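The forced character of the verdict can be caricatured in code. Here is a minimal sketch (the function, its parameters, and the numbers are all invented for illustration): the measurement may be unknowable within its margin of error, but the return type admits exactly two values.

```python
def line_call(measured_mm: float, uncertainty_mm: float) -> str:
    """Return 'IN' or 'OUT' -- bivalence is forced by the job description.

    measured_mm: estimated distance of the ball's contact patch beyond the
        outside edge of the line (negative means on or inside the line).
    uncertainty_mm: plausible margin of error in that estimate.
    """
    if abs(measured_mm) <= uncertainty_mm:
        # The fact of the matter lies beyond the official's powers of
        # discrimination -- but ignorance does not license a third verdict.
        pass
    return "OUT" if measured_mm > 0 else "IN"

# A ball estimated 1 mm out with a 5 mm margin of error: epistemically
# undecidable in practice, yet the call remains bivalent.
print(line_call(measured_mm=1.0, uncertainty_mm=5.0))  # OUT
```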
Faced with such an example, I want to further suggest that managing the situation with these commitments requires a strategy to generate the necessary precision. In other words, for the minds of chair umpires to reflect the commitments of epistemicists, there must be some practical intermediary that generates the required precision. Such an intermediary allows classically valid patterns of inference to get a grip on the vagueness-plagued situations they are supposed to represent.
In this case, the intermediary is not hard to come by. Line judges mediate between situational vagueness and the precise predications needed for a tennis match to take place. They stand back from the court, each with eyes trained on a particular line, and watch to determine whether a ball landing near it should be considered in or out. Line judges are different from chair umpires (who regulate a variety of features of the match) because they have but one job: to determine whether the predicate ‘in’ or ‘out’ applies in a situation whose facts may be unknowable. The job of a line judge is to generate precision by acting as an intermediary, bridging what is precise and determinate – the predicates ‘in’ and ‘out’ – to what is imprecise and indeterminate – the situation at hand. Only in this way can the game of tennis be played at all.Footnote 16
My point may be made clearer by considering the recently implemented tennis replay system ‘Hawk-Eye.’ This computationally sophisticated system is designed to eliminate vagueness. It is built to ensure that the use of ‘in’ and ‘out’ by tennis officials accurately represents the world’s facts. The point is to implement technology to overcome the epistemic limitations of line judges. But Hawk-Eye has epistemic limitations too. Its use, then, more starkly reflects the fact that managing tennis matches requires adopting the speculative commitments of epistemicists, and that doing so depends on strategies for bridging formally precise predicates to seemingly vague situations.
To see this, consider how Hawk-Eye functions. A virtual model of an officially sized tennis court is used to represent the actual tennis court being used in play. Placed around the tennis court are a series of high-frame-rate video cameras. The cameras record on the order of 120 images per second (Collins and Evans 2008),Footnote 17 and once the images are captured, the data are sent to a computer system that uses proprietary video-processing software to track the position of every shot and every serve over the course of a tennis match (Fischetti 2007). In this way, one can precisely determine where the ball landed with respect to the boundary on any particular shot. Thus, Hawk-Eye can substantiate or falsify a fallible line judge’s application of ‘in’ or ‘out’.
But even if we assume that the virtual tennis court used by Hawk-Eye to represent the actual tennis court is perfectly faithful to every swerve in the line, every blade of grass, every grain of concrete, every granule of chalk dust, or whatever, the projections of the ball in the model must be generated by statistical algorithms. What this means is that there is inherent indeterminacy in tracking the tennis ball and, consequently, inherent indeterminacy in giving application to ‘in’ and ‘out’.
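Hawk-Eye’s algorithms are proprietary, so the following is only a schematic stand-in (all numbers are invented, and a real system triangulates from multiple cameras in three dimensions). Still, it shows the shape of the inference: fit a smooth trajectory to noisy camera fixes, extrapolate the landing point, and inherit an irreducible spread of error from the noise in the fixes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground-truth flight: ball height h (m) against horizontal
# position x (m), touching down at exactly x = 12.0 m.
def true_height(x):
    return -0.05 * (x - 12.0) * (x + 2.0)

def estimated_landing(noise_sd_m=0.004, n_frames=40):
    """Fit a parabola to noisy camera fixes, extrapolate the landing x."""
    xs = np.linspace(0.0, 10.0, n_frames)             # fixes before touchdown
    hs = true_height(xs) + rng.normal(0.0, noise_sd_m, size=n_frames)
    a, b, c = np.polyfit(xs, hs, 2)                   # least-squares parabola
    return max(np.roots([a, b, c]).real)              # downrange touchdown

errors_mm = np.array([estimated_landing() - 12.0 for _ in range(1000)]) * 1000
print(f"mean projection error:  {np.abs(errors_mm).mean():.1f} mm")
print(f"95th percentile error:  {np.percentile(np.abs(errors_mm), 95):.1f} mm")
```

However faithful the fit, the projected landing point is a statistical estimate, not an observation; the spread in the printed errors is the system’s inherent indeterminacy.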
Consider a statistical error analysis of a system like Hawk-Eye. Reports suggest that the position of a tennis ball is measured to within a mean error interval of 3.6 mm. This fact has been used to calculate the following:
Dispersions of errors are usefully reported in terms of the ‘standard deviation.’ If the distribution of the errors was the well known and frequently encountered ‘normal distribution,’ then if the mean deviation is 3.6 mm the standard deviation would be about 3.6 mm × 1.25 = 4.5 mm. Because, in a normal distribution, 95% of the points lie within approximately 2 standard deviations of the mean and 99% lie within about 2.6 standard deviations, we can estimate some putative confidence intervals. In this case we could say that in 5% of Hawk-Eye’s predictions (that is 1 in 20), the error could be greater than about 9 mm and in 1% it could be greater than 11.7 mm. (Collins and Evans 2008)
To put that succinctly, Hawk-Eye’s projections of where a tennis ball lands are on average indeterminate (or vague) by about 3.6 mm, and sometimes the projections are deeply flawed. The cloudiness of the average projection, and the fact that one projection in twenty can err by more than about 9 mm, is, for lack of better terminology, a result of the system’s ignorance. Whether a ball was, say, in or out by one millimeter is, in principle, not something Hawk-Eye can know.
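The quoted figures are easy to verify. For a normal distribution, the mean absolute deviation equals the standard deviation times √(2/π), so the standard deviation is the mean deviation multiplied by √(π/2) ≈ 1.25. A few lines reproduce Collins and Evans’s numbers:

```python
from math import pi, sqrt

mean_abs_dev_mm = 3.6                      # reported mean measurement error

# For a normal distribution: mean absolute deviation = sd * sqrt(2/pi),
# hence sd = mean absolute deviation * sqrt(pi/2) (about 1.25).
sd_mm = mean_abs_dev_mm * sqrt(pi / 2)

print(f"standard deviation:   {sd_mm:.1f} mm")        # ~4.5 mm
print(f"95% bound (~2.0 sd):  {2.0 * sd_mm:.1f} mm")  # ~9.0 mm
print(f"99% bound (~2.6 sd):  {2.6 * sd_mm:.1f} mm")  # ~11.7 mm
```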
But if Hawk-Eye does not actually resolve the problem it was designed to handle – that is, if it does not really eliminate vagueness – why spend the time and money necessary to implement it? As with line judges, Hawk-Eye serves an important purpose in the world of tennis. It is a practical solution to an epistemic problem: it was designed as an intermediary device, grounded in the purposes to hand, to assist in transforming less-than-fully-precise predicates – the terms ‘in’ or ‘out’ – into the kind of precisely demarcated concepts needed to reason about a vagueness-plagued situation. When a ball is neither clearly in nor clearly out, officials defer to Hawk-Eye to mediate between precise predicates and an epistemically vague situation. Only in this way can the bivalent inference patterns of tennis officials get a grip on the situation.
Although officials may be ignorant of the fine-grained relation between their conceptual apparatus and how the world stands, they have learned to bridge that relation by developing strategies that generate precision. By allowing line judges – or, more recently, sophisticated computer systems – to stand between necessarily precise predicates and seemingly vague situations, formally precise inference patterns can be used to manage vagueness. Consequently, when we say ‘That ball was in’, the deployment of the predicate, and the inferential relations that depend on it, accurately reflect epistemicist commitments, which are required to perform the activity.
5.2. Trees of Life
Biologists and philosophers of biology build phylogenetic trees to reason about evolutionary processes. Assume that their reasoning follows classically valid patterns of inference.Footnote 18 If it does, it is only because they rely on epistemicist commitments and evolutionary models that bridge the vague facts of evolutionary history to the precision of phylogenetic trees. For me to make the case that this is right, we first need a bit of background.
Systematists classify organisms into related groups in order to study biological processes through time. One aspect of this project involves mapping the evolutionary history – that is, mapping ancestry and descent relations – of earth’s flora and fauna. The idea is to develop a tree of lifeFootnote 19 to increase the explanatory and predictive power of the biological sciences.Footnote 20 Given such a project, there is a natural inclination to believe that there is but one correct map of evolutionary history – that is, one correct tree of life. After all, the historical facts are what they are, individual organisms bear particular phylogenetic relations to their ancestors, and there is only one way the entire evolutionary process has unfolded.
The intuition that ‘History has an objective structure’ (Sterelny and Griffiths 1999, 197) echoes a widely held position in the biological sciences. It also reflects something of the commitments of epistemicism. Just as epistemicists believe that ‘each state of affairs either clearly obtains or clearly fails to obtain’ (Williamson 2003, 711), so too, systematists share a tendency to believe that ‘The phylogenetic hierarchy exists independently of the methods we use to discover it, and is unique and unambiguous in form’ (Ridley 2004, 480). This view is especially plausible if we imagine a tree that depicts the progression of evolutionary history as it unfolded one organism at a time. It is also plausible if we think that statements of the form ‘Y descended from X’ or ‘X is an ancestor of Y’ are factual statements that require bivalent commitments.Footnote 21
Unfortunately, things are not quite as clear as these intuitions make them out to be. Since it is unrealistic to construct phylogenetic trees that map evolutionary history one organism at a time, trees are built using species concepts, which group organisms together according to particular criteria.Footnote 22 Clustering organisms together according to species, however, inevitably forces biologists to ignore certain evolutionary facts. Consider the biological species concept, which has been defined by Mayr (2000, 17) as ‘groups of interbreeding natural populations that are reproductively isolated from other such groups.’ Such a definition may seem precise enough, but the fact that speciation events occur over time ensures that certain facts of diverging species are ignored. As Laporte (2005, 367) has recently argued:
even if we restrict our attention to the biological species concept, we are bound to find cases...in which there is no definitive answer, even in principle, as to whether there are two species present or just one species divided into two subspecies. Cursory reflection on the phylogenetic species concept and the ecological species concept indicates that the application of these other species concepts cannot be any more cut and dried than the application of the biological species concept. Still other species concepts fare the same. Evolution assures that they do.
The problem is that no matter what species concept is at work, the slow, gradual changes that are the hallmark of evolutionary adaptation make certain that organismal divergence will always include organisms that can justifiably be grouped with either side of diverging clusters of organisms.
Nevertheless, to construct phylogenetic trees that map onto these seemingly vague historical facts, biologists and philosophers of biology must treat diverging species as branching nodes. Where evolutionary history seems dominated by vague processes of divergence, trees of life, on whatever species concept one chooses to adopt, represent those processes as events. Consequently, an evolutionary divergence – that is, the process of speciation – which may take minutes, weeks, decades, centuries, or millennia to unfold is represented as a sharply delimited bifurcation on phylogenetic trees.Footnote 23 It seems, then, that when confronted with a situation where the relation between precise species concepts and the historical facts is in-principle unknowable, researchers must decide how their precise representations are going to map onto the seemingly vague facts.
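A toy example makes the representational decision vivid (the data and the threshold criterion are invented; real systematists draw on far richer evidence). The underlying divergence is gradual, but the tree needs a single node, so some cutoff must be chosen, and different cutoffs date the ‘event’ differently:

```python
# Gradual divergence: the rate of successful interbreeding between two
# lineages declines slightly each generation (invented toy data).
interbreeding = [1.0 - 0.001 * g for g in range(1001)]  # generations 0-1000

def speciation_node(threshold: float) -> int:
    """Place the branching 'event' at the first generation where
    interbreeding falls below the chosen threshold."""
    for generation, rate in enumerate(interbreeding):
        if rate < threshold:
            return generation
    return len(interbreeding)

# One gradual process, three possible node placements:
for t in (0.75, 0.50, 0.05):
    print(f"threshold {t}: node at generation {speciation_node(t)}")
# threshold 0.75: node at generation 251
# threshold 0.5: node at generation 501
# threshold 0.05: node at generation 951
```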
The problem seems to parallel the difficulty facing those involved with tennis matches. Just as chair umpires need precise concepts to map onto seemingly vague facts in order to manage a tennis match, so too, biologists need precise phylogenetic trees to map onto seemingly vague facts in order to do evolutionary biology. Indeed, I want to suggest that biologists must reflect the philosophical commitments of epistemicism when managing the inferential tasks posed by evolutionary biology for several reasons. First, they must believe that whether one species descended from another is a factual relation that is either true or false. Second, the inferences made using propositions to represent those relations require bivalence – that is, biologists must believe statements of the form ‘Y descended from X’ are either true or false. And, finally, they must believe that the seeming vagueness of the historical situation, and the difficulty of mapping phylogenetic trees to that situation, is not due to the vagueness of trees, or to the fuzziness of the facts, but, rather, to their own epistemic limitations.
Faced with cases like this, we should notice that, as with the earlier example, connecting the precision of phylogenetic trees to the evolutionary facts they are intended to represent requires an intermediary to bridge the precise representation with what it aims to represent. In this case, the intermediary amounts to a strategy that uses evolutionary models (along with the statistical algorithms at their heart) to generate phylogenetic trees. In contrast to line judges, who simply determine whether a ball was in or out, evolutionary models determine different structures and different node locations for distinctive phylogenetic trees depending on the scientific project at hand. In contrast to the task of judging a ball to be in or out, then, evolutionary models mediate between phylogenetic trees and the facts they aim to represent by determining whether a tree gets those facts right or wrong for a particular scientific purpose.
To see this, consider a couple of projects that have made use of phylogenetic trees to represent seemingly vague historical facts. First, researchers use phylogenetic trees to map the evolutionary history of HIV within particular host organisms. In this way, they are able to develop vaccines to effectively treat the virus. Since HIV evolves rapidly, developing effective vaccines requires predicting the selection pressures faced by the virus and anticipating its response to those pressures within its host organism. To do this, researchers rely on phylogenetic trees that have been developed using evolutionary models that emphasize rapid selection pressures within hostile environments. Such models use distinctive statistical algorithms to produce tree structures that are dramatically different from those based on other models, which emphasize the kind of random genetic mutations characteristic of less hostile environments. In this way, theorists are able to usefully predict the vagaries of the virus as it responds to drug treatments within a specific individual.
In contrast to trees designed to represent the evolutionary history of HIV within host organisms, other trees aim to represent the evolution of HIV across communities of individuals. By accurately portraying the evolutionary history of HIV within groups of organisms, intervention strategies can be deployed to prevent its further spread. But models used to produce phylogenetic trees for HIV in the community-based case are distinct from those used to investigate evolutionary movement within organisms. In particular, to develop phylogenetic trees that represent HIV’s evolution across communities, theorists rely on evolutionary models that emphasize slow, gradual changes due to random genetic mutations.Footnote 24
What we see, then, is that researchers use a variety of precise phylogenetic trees to represent the evolutionary history of HIV in order to accomplish particular scientific tasks. They decide which trees to use, and how node location and structure should be determined, by considering the evolutionary model most relevant to their interests. Do they need trees that show quick evolutionary adaptations due to selection pressure? Or are they looking for trees that reflect the kind of gradual change characteristic of genetic drift? Should the trees be designed for a growing population of host organisms, a shrinking population, or a static population? When these questions are answered, theorists generate trees that are substantively different, both in structure and in node location, using evolutionary models pertinent to their purposes. Each dimension of HIV’s evolution, then, requires a different evolutionary model that emphasizes peculiar patterns of evolutionary change. Different scientific interests require different evolutionary models, different models lead to distinctive phylogenetic trees, and (it seems) no tree is best for all purposes.
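Real phylogenetic inference rests on likelihood models well beyond anything sketched here, but the structural point can be made with a deliberately crude stand-in (the distance matrix is invented, and simple clustering rules are standing in for evolutionary models): the same data, run through two different ‘models’, yield trees with different topologies.

```python
# Toy stand-in for model choice in tree building: agglomerative clustering
# of four taxa under two linkage rules. Invented distances; real methods
# use statistical models of evolution, not bare linkage rules.

DIST = {
    frozenset("AB"): 1.0, frozenset("AC"): 2.0, frozenset("BC"): 6.0,
    frozenset("AD"): 7.0, frozenset("BD"): 7.0, frozenset("CD"): 4.0,
}

def build_tree(linkage):
    """Greedily merge the closest pair of clusters under `linkage`."""
    clusters = [("A",), ("B",), ("C",), ("D",)]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda p: linkage(
            DIST[frozenset(a + b)]
            for a in clusters[p[0]] for b in clusters[p[1]]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
        print(f"  merge -> {merged}")
    return clusters[0]

print("'model' 1 (single linkage):")
build_tree(min)   # (A,B) first, then C joins them, then D: (((A,B),C),D)
print("'model' 2 (complete linkage):")
build_tree(max)   # (A,B) and (C,D) form separately: ((A,B),(C,D))
```

Nothing in the distances themselves dictates which tree is correct; the choice of rule, grounded in the purpose at hand, settles the tree’s structure.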
My point, however, is not about evolutionary biology. My point, rather, is that evolutionary models (and the statistical algorithms used to produce them) are here functioning in a manner similar to Hawk-Eye or line judges. They serve as intermediaries – or strategies for generating precision – that bridge sharp phylogenetic trees to the vagueness-plagued situations they aim to represent and allow theorists to manage situational vagueness with epistemicist commitments. Just as Hawk-Eye serves as an intermediary for moving from what is inherently vague and imprecise to patterns of inference that demand precise concepts and bivalent thinking, so too, the principles used to build evolutionary trees act as intermediaries for biologists and philosophers of biology. With such intermediaries, theorists can move from a seemingly vague historical situation to forms of inference that demand precision. As a result, propositions like ‘Y descended from X’ can be taken as flatly true or flatly false, and the inferential patterns characteristic of classical logic can get a grip on the situation at hand. Without such intermediaries, it would not be possible to think about the historical facts in such a clean, all or nothing sort of way. And since treating things this way is required for the activity, it seems that adopting the stance of an epistemicist is a reasonable way to proceed.Footnote 25
5.3. Strategic Precision
Let me close this section by briefly recapitulating the thread of argument running through its examples as well as signaling what is still to come. The examples show that managing vagueness according to the commitments of epistemicists requires strategies for generating precision in order to bridge the formalism of classical logic with seemingly vague situations. Without such strategies, there is no way to move from one’s understanding of the world to the precision required for performing the relevant activities. Epistemicists should appreciate this argument. After all, if we cannot know how the facts stand, what basis is there for preferring their formal commitments over others? I have been arguing that the basis is practical, which will lead me to suggest in the next section that there may be good reasons for endorsing non-standard logics of vagueness.
6. A backhanded endorsement of epistemicism
I have been discussing the in-principle ignorance of language users in terms of an inability to determine the relation between wholly precise facts and bivalent propositions. The examples above were designed around that way of talking and were meant to show that figuring out how to connect a formal system of inference to the situations it is meant to represent is a practical problem, one that depends on designing strategies for bridging one with the other. The practical nature of such bridging intermediaries grounds them in an agent’s interests, and as we have seen, there are occasions where those interests presuppose precise facts. Only by treating the historical facts in a certain way can biologists develop HIV vaccines, and only by treating the facts of a tennis match in a certain way can the game of tennis be played. There is no reason to think that these are the only examples. Indeed, there seem to be an indefinite number of activities and interests that require treating predicates and facts as unquestionably precise. In cases of this kind, the views of epistemicists seem vindicated: sharp predicates are used to describe sharp, but unknowable, facts, and ignorance is the source of the problems of vagueness.
But because adopting epistemicist commitments in such cases looks like good practical reasoning, we should hedge against a thoroughgoing endorsement of the view. Although acting like an epistemicist may be rational in a certain range of situations, whole-heartedly adopting the epistemicist’s stance seems problematic. There are a couple of different reasons to believe so. First, we should not rule out the possibility that there are genuinely vague facts.Footnote 26 Consider, for example, a tennis ball and the boundary marker of a tennis court. Could it ever be the case that a ball neither lands on the boundary nor fails to land on the boundary? In answer, consider the following from Collins and Evans (2008):
In real life, the edge of a line painted on grass cannot be defined to an accuracy of one millimeter. First, because grass and paint are not like that, and second, because even given perfect paint and a perfect surface to draw on, the apparatus used to paint the line is unlikely to maintain its accuracy to one millimeter over the width of the court. Furthermore, tennis balls are furry and it is not clear that their edges can be defined to an accuracy of one millimeter. In short, in the real world of tennis we do not quite know what ‘touching the line’ means.
Never mind the point about knowing what the phrase means (which seems to echo something of the commitments of epistemicism). Instead, consider the substantive points about how the facts stand: tennis balls are fuzzy; the surface of a tennis court – made up of grass, cement, or clay – is irregular; and the boundary lines of tennis courts – made of paint, chalk, or tape – are misshapen, jagged, and irregular. In short, everything that constitutes the facts seems less than fully precise. Does it make sense, then, to say that a ball with a stray strand of fleece that touches an isolated particle of chalk embedded in a minuscule crack of clay on a roughly surfaced tennis court must be either in or out? I do not know, but it certainly seems dogmatic to insist on it.
Epistemicists will likely not find that suggestion convincing. Second, then, even if facts must be discretely bounded states of affairs, it may be practically rational, given our in-principle ignorance, to use a formal system that is not committed to classically valid forms of inference. As we have seen, the strategies above are driven by purposes that demand bivalent propositions, and in these cases classical logic is the formal system to be preferred. But it is not at all clear that all of an individual’s aims and interests have similar demands. Not everything we do requires the things we say, think, or write to be wholly true or false. In such situations, we should expect principles of practical reason to call for inferential systems that are different from classical logic.
Consider the use of tort law to settle questions of compensation. In such cases, courts must determine the extent of an individual’s harm in order to determine the form and extent of compensation owed by the tortfeasor. But such cases are often shot through with vagueness and not manageable with simple bivalent propositions. For example, the 2010 oil spill in the Gulf of Mexico had detrimental consequences for a number of individuals living in the area. As a result, US courts had to determine the degree to which individuals impacted by the oil spill had a ‘legitimate claim to compensation’ in order to figure out how much their compensation should be. After all, there were degrees of harm to consider. Not only were the fishermen on the Gulf impacted, but so too were the restaurant owners who depended on the Gulf Coast’s fisheries. In fact, the extent of harm done to the general economic condition of the region was such that almost anyone could make some claim to compensation. Despite this fact, the courts had to determine which claims were legitimate and which were not. Unfortunately, there were no clear guidelines concerning the extension of the predicate ‘legitimate claim to compensation’. As a result, an individual’s assertion that he or she had such a claim could not be determined to be true or false. Consequently, the bivalent strictures of classical logic could not get a grip on the situation, and other inferential strategies had to be adopted.
Or again, consider another kind of example. Geologists are on occasion interested in a rock’s resistance to weathering. This information may prove useful when planning to build a home, office building, or apartment complex near an area with exposed rock, for example. But the predicate ‘resistant to weathering’ seems to come in degrees. An igneous rock with high percentages of silica is more resistant to weathering than, say, an igneous rock with high percentages of feldspar. Nevertheless, both rock types are much more resistant to weathering than a mudstone or some similar variety of sedimentary rock. Suppose, then, that a geologist is trying to determine whether an outcrop containing a mixture of mudstone and quartz-rich igneous rock is resistant to weathering. It seems absurd to insist that the geologist adopt the formal constraints of classical logic and judge the outcrop as either resistant to weathering or not-resistant to weathering. Instead, we expect the geologist to think in degrees: given that the particular outcrop is a mixture of two different rock types, its resistance to weathering should fall somewhere between the two. In this way, we expect the geologist’s thinking to reflect something of the formal structures characteristic of degree theoretic treatments of vagueness.Footnote 27
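The kind of thinking at issue can be sketched in a few lines (the membership degrees are invented; the point is only that the predicate takes values between 0 and 1 rather than delivering a bivalent verdict):

```python
# Degrees of 'resistant to weathering' on a 0-1 scale (invented values).
RESISTANCE = {
    "quartz-rich igneous": 0.9,    # high-silica igneous: very resistant
    "feldspar-rich igneous": 0.6,
    "mudstone": 0.1,               # sedimentary: weathers easily
}

def outcrop_resistance(composition: dict) -> float:
    """Degree to which a mixed outcrop is 'resistant to weathering':
    a proportion-weighted blend rather than a yes/no verdict."""
    return sum(RESISTANCE[rock] * frac for rock, frac in composition.items())

# An outcrop that is 60% quartz-rich igneous rock and 40% mudstone.
degree = outcrop_resistance({"quartz-rich igneous": 0.6, "mudstone": 0.4})
print(f"'resistant to weathering' holds to degree {degree:.2f}")   # 0.58
```

This mirrors, in miniature, the degree-theoretic assignment of intermediate truth values rather than the all-or-nothing verdicts of classical logic.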
When trying to manage predicates whose extension is undecided or statements that call for degrees of consideration, it seems more reasonable to think without the precise strictures of classical logic than to be bound by the bivalent dictates of that system.Footnote 28 This is because these two cases are conspicuously different from the earlier examples involving tennis and phylogenetic trees. In the earlier examples, the practical activities demanded the strictures of classical logic. In these latter cases, however, the practical activities are different in kind. The result is that the form of thought appropriate to the task takes a different shape. And this is how it should be: to manage our way through a variety of complex situations, we need flexibility in our inferential patterns. But that is not to say that any form of thought in any situation is as good as the next. Rather, as I have shown, the appropriate form of thought for the situation at hand will be grounded in the practical activities that make its application appropriate.
7. Conclusion
In Vagueness, Williamson (1994, 4) notes that:
the epistemic view implies a form of realism, that even the truth about the boundaries of our concepts can outrun our capacity to know it. To deny the epistemic view of vagueness is therefore to impose limits on realism; to assert it is to endorse realism in a thoroughgoing form.
What I have been urging in this paper is sympathetic to epistemicism, but not because it implies a form of realism. Rather, my sympathies are motivated by an understanding of formal systems wherein they are models of inference designed in the service of diverse human interests. In light of this understanding, I have argued that between formal systems of logic and the situations they aim to represent stand intermediaries for bridging one with the other. The argument depends on recognizing human cognitive limitations: our epistemic shortcomings preclude a thoroughgoing understanding of the states of affairs that formal systems are intended to represent. Consequently, we must know what we want to do in order to apply a formal system to manage the vagueness-plagued situations that often confront us.
But even a perfunctory look at the variety of aims, purposes, and ends adopted by human beings shows that not all of them require the use of precise, bivalent concepts for their realization. Consequently, there is little reason to think that we have to be committed to the views of epistemicists across the board.
I suppose this argument will not sit well with epistemicists. Nevertheless, what I have argued is good news for the view. My arguments show that managing vagueness should, at least on occasion, be done in just the manner suggested by epistemicists. Consequently, the incredulous stares that tend to meet the pronouncements of epistemicist views are, at least sometimes, unwarranted. There are occasions where we have to adopt the theoretical posture of epistemicists. Doing so presupposes strategies for generating precision, which mediate between the inferential machinery of classical logic and the situations it is intended to reflect. If that is right, epistemicism looks more like a position motivated by good practical reasoning than by any necessary features of predicates, propositions, or facts.
Acknowledgements
I want to thank Elijah Millgram for all that he has done in making this paper possible. I also want to thank Monika Piotrowska for discussing these issues with me and reading through earlier drafts. Discussions with Mike Wilson, colleagues at the University of Utah (especially Jim Tabery and Matt Haber), and participants of the semi-weekly kaffeeklatsch have been tremendously helpful in developing the ideas found here. I want to thank the Marriner S. Eccles Graduate Fellowship for financial support, and, finally, I would like to thank two anonymous referees for suggestions and comments that significantly improved the paper.