1. Introduction
The generality problem is one of the most important challenges to process reliabilism about epistemic justification. In brief, the generality problem poses the following question: out of all the process types exemplified by a given process token, which types are epistemically relevant for determining justification? Presumably, without an answer to this question, it's very hard to tell what epistemic implications (if any) reliabilism will have for particular cases of belief formation. Hence, any satisfactory solution to the generality problem must offer an informative theory of process type relevance.
As the past four decades have demonstrated, substantive progress on the generality problem has been remarkably hard to come by. That said, some responses to the generality problem have garnered more attention – and appear more promising – than others. One of these responses is James Beebe's tri-level statistical solution to the generality problem. However, despite the initial plausibility of Beebe's approach, Julien Dutant and Erik Olsson (2013) have shown that the tri-level statistical solution entails intuitively implausible justification verdicts on a variety of cases.
Samuel Kampa (2018) has recently offered a new proposal for repairing Beebe's original solution. Kampa calls it the new statistical solution to the generality problem. After presenting the new statistical solution, Kampa argues that it successfully overcomes the challenges that undermined Beebe's original statistical solution. However, there's good reason to believe that Kampa is mistaken. In this paper, I argue that Kampa's new statistical solution fails to make substantive progress towards solving the generality problem.
In §2, I present Beebe's statistical solution in more detail. After this, I explain Dutant and Olsson's contention that the types identified as relevant according to Beebe's theory are far too descriptively narrow to be the correct relevant types. In §3, I present Kampa's new statistical solution to the generality problem and discuss Kampa's explanation for why his new solution avoids the problems that plague Beebe's theory. In §4, I offer my main criticism of Kampa's new statistical solution. In particular, I show that the new statistical solution faces a dilemma. If we interpret the elements of the new statistical solution in an unqualified manner, then the new statistical solution falls prey to a straightforward counter-example by countenancing types that are far too narrow. On the other hand, if we interpret the new statistical solution in a qualified manner, then the theory in its current form offers us, at best, scant insight into the nature of process type relevance. Either way, Kampa's theory fails as a genuine solution to the generality problem.
2. The original tri-level statistical solution
2.1 The tri-level condition and Beebe's statistical solution
James Beebe invokes key notions from cognitive science to formulate his answer to the generality problem. According to neuroscientist David Marr, cognitive processes can be analyzed at three levels of description: the information problem (I) being solved in the process, the method (M) used to solve that problem, and the cognitive system (S) used to execute that method. As Beebe clarifies, the cognitive method (M) is the algorithm used to solve the information problem, and the cognitive system (S) that solves the information problem is the cognitive architecture that executes the algorithm (Beebe 2004: 182). Beebe contends that the (I), (M), and (S) properties of belief-forming process tokens are epistemically relevant features that determine (at least partially) whether a given process token generates a justified belief (2004: 180). Beebe incorporates this idea by positing the tri-level condition for process type relevance.
The tri-level condition:
The reliability of a cognitive process type T determines the justification of any belief token produced by a cognitive process token t that falls under T only if all of the members of T:
(a) solve the same type of information-processing problem i solved by t;
(b) use the same information-processing procedure or algorithm t used in solving i; and
(c) share the same cognitive architecture as t. (Beebe 2004: 180)
Some brief clarifications are in order. In Beebe's terminology, the belief-forming process tokens that indeed token (i.e., instantiate) some process type just are the tokens that “fall under” that type. Beebe describes the class of tokens falling under some type T as the “members of T.” This convention is reasonable enough, as one can straightforwardly view types as corresponding to a particular extension, where in this case the extension of a process type is the class of process tokens that instantiate that type.
The tri-level condition posits the following constraint on a token t's relevant type T: all of the tokens that fall under T must have the same (I), (M), and (S) properties that t has. Importantly, for Beebe this is a partial definition of a token's relevant type, because it only presents a necessary (but not sufficient) condition on being a member of T's extension.
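Stated schematically – in my notation rather than Beebe's – and writing I(x), M(x), and S(x) for the information problem, method, and cognitive system of a process token x, the condition amounts to the following necessary condition:

\[
\text{the reliability of } T \text{ determines the justification of beliefs produced by } t \in T \;\Rightarrow\; \forall t' \in T:\ I(t') = I(t),\ M(t') = M(t),\ S(t') = S(t).
\]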
In addition to the tri-level condition, Beebe saw that he'd need to place further restrictions on relevant types so as to avoid what Richard Feldman calls the “no-distinctions problem.” According to Feldman, a given theory of type relevance succumbs to the no-distinctions problem when the types it identifies as relevant are too broad. Feldman notes that responses to the generality problem that suffer from the no-distinctions problem end up entailing the same degree of justification for various beliefs that intuitively should have different degrees of justification (Feldman 1985: 161). For instance, consider the following three process types:
[1] visual belief formation
[2] visual belief formation under good lighting conditions
[3] visual belief formation under bad lighting conditions.
Ceteris paribus, tokens that instantiate [2] produce beliefs with a greater degree of justification than tokens that instantiate [3]. A reliabilist would explain this fact by noting that [2] is more reliable than [3]. However, a theory of type relevance that identifies broad types like [1] as being the relevant type for any token instance of visual belief formation will entail the implausible result that beliefs produced by vision under good lighting conditions have the same degree of justification as beliefs produced by vision under poor lighting conditions. Hence, we should reject any theory of type relevance that countenances relevant types that are too broad.
In order to avoid the no-distinctions problem, Beebe adds an additional statistical constraint to his theory of type relevance:
Let A be the broadest process type that satisfies the tri-level condition for some process token t … I argue that the relevant process type for some t is the subclass of A which is the broadest objectively homogeneous subclass of A within which t falls. A subclass S is objectively homogeneous if there are no statistically relevant partitions of S that can be effected. (Beebe 2004: 187–8)
Let's begin by examining the key concepts of this statistical constraint. First, recall that Beebe refers to process types as classes of process tokens. Classes can have proper sub-classes within them, and Beebe uses the notion of a class “partition” to refer to a proper-subclass of some broader class. In this way, a narrower type like [2] is an example of a partition of a broader type like [1].
To understand objective homogeneity, we must first understand the notion of a “statistically relevant” partition. Let A represent a broad process type, and let S represent a proper sub-class type of A. S is a statistically relevant partition of A if and only if S's degree of reliability differs from A's degree of reliability (Beebe 2004: 188). According to Beebe, a type's degree of reliability is simply the “probability” that a token generates a true belief given that it instantiates that type (188). For example, given that [2] is a partition of [1] and that (ceteris paribus) [2] is more reliable than [1], it follows that [2] is a statistically relevant partition of [1].
Given this notion of a statistically relevant partition, we can understand objective homogeneity as follows: a given type T is objectively homogeneous just if T has no proper sub-types with degrees of reliability that differ from T's degree of reliability (Beebe 2004: 189).
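For concreteness, both statistical notions can be stated in terms of a type's truth-ratio. The following rendering is mine, not Beebe's, and treats a type's reliability as the probability of a true output given that the producing token falls under the type:

\[
r(T) \;=\; \Pr(\text{the output belief is true} \mid \text{the producing token falls under } T)
\]
\[
S \text{ is a statistically relevant partition of } A \iff S \subsetneq A \ \text{and}\ r(S) \neq r(A)
\]
\[
T \text{ is objectively homogeneous} \iff \text{there is no partition } S \subsetneq T \ \text{with}\ r(S) \neq r(T)
\]

On this rendering, [2] counts as a statistically relevant partition of [1] precisely because, ceteris paribus, r([2]) > r([1]).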
With both the tri-level condition and the statistical constraint in place, we can now state Beebe's tri-level statistical solution to the generality problem (TS).
TS T is the relevant type for a given token t if and only if
(a) t tokens T
(b) T satisfies the tri-level condition
(c) T is the broadest objectively homogeneous subclass of the broadest type satisfying the tri-level condition (relative to t).
According to Beebe, the statistical constraint (c) makes the types identified as relevant narrow enough so that TS avoids Feldman's no-distinctions problem.
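Putting the pieces together, and writing A_t for the broadest type satisfying the tri-level condition relative to t, TS can be compressed as follows (the set-theoretic rendering is mine, not Beebe's):

\[
T \text{ is the relevant type for } t \iff t \in T \subseteq A_t,\ T \text{ is objectively homogeneous, and there is no objectively homogeneous } T' \text{ with } t \in T' \text{ and } T \subsetneq T' \subseteq A_t.
\]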
2.2 Problems with Beebe's tri-level statistical solution
Despite its initial plausibility, there are serious problems with TS. Julien Dutant and Erik Olsson (2013: 1354–5) present what they call the “trivialization problem” against TS. Consider any type T of a given token t, where T satisfies the tri-level condition. Now, consider the proper subclass of T denoted by T+, which comprises all and only the tokens of T that produce true beliefs. T+ is perfectly reliable, so not only is T+ statistically relevant, but it also won't contain any partitions with different degrees of reliability. Moreover, if we suppose that t itself generates a true belief, then t instantiates T+. In this case, TS entails that T+ is the relevant type for t. But given that T+ has a maximal (100%) degree of reliability, it follows that the belief produced by t will be perfectly justified. This schematic description of TS's workings highlights how any true belief, according to TS, will automatically (and trivially) be perfectly justified.
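Schematically, and using the truth-ratio notation introduced above, the construction runs as follows:

\[
T^{+} \;=\; \{\, t' \in T : t' \text{ produces a true belief} \,\}, \qquad r(T^{+}) = 1.
\]
\[
\text{Every partition } S \subsetneq T^{+} \text{ also has } r(S) = 1, \text{ so } T^{+} \text{ is objectively homogeneous.}
\]
\[
t \text{ produces a true belief} \;\Rightarrow\; t \in T^{+} \;\Rightarrow\; \text{the type TS selects for } t \text{ has a truth-ratio of } 1.
\]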
This is an implausible result. For instance, TS would entail that any true belief formed on the basis of a coin flip would be perfectly justified. In essence, Dutant and Olsson's objection shows how TS avoids the no-distinctions problem at the cost of falling prey to the opposite worry for theories of type relevance: what Feldman (1985: 161) calls “the single-case problem”. A theory of type relevance suffers from the single-case problem insofar as it countenances types (as being relevant) that are too narrow. According to theories of type relevance that fall prey to the single-case problem, virtually any token that generates a true belief will have a relevant type with maximal (or near-maximal) reliability, and any token that generates a false belief will have a relevant type with maximal (or near-maximal) unreliability (Feldman 1985: 161).
Dutant and Olsson consider various strategies for either tightening the restrictions on statistical relevance or loosening the restrictions on homogeneity so as to salvage TS in some form or fashion. Ultimately, Dutant and Olsson show there to be serious flaws with all of these repair strategies. While I lack the space to discuss these strategies here, it's important to note that Kampa accepts the failure of these repair proposals. This leads Kampa to present his own unique strategy for repairing the tri-level statistical solution to the generality problem.
3. Kampa's new statistical solution
Kampa's main approach for avoiding the problems raised by Dutant and Olsson involves adding a further constraint on the broadest objectively homogeneous subclass that can constitute a token's relevant type. Kampa calls this proposal the New Statistical Solution (NS).
NS For any process token t, T is the relevant process type for t if and only if
a) t tokens T
b) T satisfies the tri-level condition
c) T is the broadest objectively homogeneous admissible subclass of the broadest type satisfying the tri-level condition under which t falls. (Kampa 2018: 236)
Kampa defines admissibility as follows:
Admissibility
Where A is a type partially defined by tri-level properties [IA, MA, SA] and T is a type partially defined by tri-level properties [IB, MB, SB], T is an admissible subclass of A just in case the extension of [IB, MB, SB] is a proper subclass of the extension of [IA, MA, SA].
Kampa claims that “[t]he new material in [condition (c) of NS] can be summed up in a rather ungainly slogan: ‘No admissibility without a difference in tri-level property graining’” (2018: 236). According to the admissibility constraint on relevant types, in order for T to be the relevant type for token t, it cannot be the case that T is partially defined by the exact same tri-level properties that partially define some broader type A (instantiated by t as well) of which T is a proper sub-type. In order for a type T to be admissible, T must be partially defined by a tri-level property (I, M, or S) that is finer-grained than a tri-level property that partially defines type A. According to Kampa, a given tri-level property F1 is finer-grained than a distinct tri-level property F2 if and only if the extension of F1 is a proper subset of the extension of F2 (Kampa 2018: 237).
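In schematic terms (the shorthand is mine, not Kampa's), writing ext(·) for the extension of a property or property-triple:

\[
T \text{ is an admissible subclass of } A \iff \mathrm{ext}([I_B, M_B, S_B]) \subsetneq \mathrm{ext}([I_A, M_A, S_A])
\]
\[
F_1 \text{ is finer-grained than } F_2 \iff \mathrm{ext}(F_1) \subsetneq \mathrm{ext}(F_2)
\]

On Kampa's gloss, then, admissibility requires that at least one of T's defining tri-level properties be strictly finer-grained than the corresponding property defining A.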
NS successfully avoids Dutant and Olsson's trivialization problem. This is because it's possible for tokens with true-belief outputs and tokens with false-belief outputs to share all of the same exact tri-level properties. So, while it might be the case that a given token instantiates some type that comprises all and only true-belief producing instances, there's no guarantee that this type will be admissible. Hence, it's not the case that types like T+ will automatically count as relevant according to NS (Kampa 2018: 238). Therefore, NS doesn't entail that any true belief will be maximally justified.
According to Kampa, the fact that tri-level properties can be more or less fine-grained allows NS to avoid the no-distinctions problem. While Kampa doesn't specifically discuss how M- or S-properties can be more or less fine-grained, he goes to some lengths to argue that I-property graining is both coherent and relatively straightforward. Furthermore, he argues that I-property graining plays a key role in explaining how NS delivers intuitively correct justification verdicts on particular cases of belief formation. Following the work of Michael Dawson (2013: 48), Kampa suggests that we define information problems in terms of “input-output” mappings (Kampa 2018: 240). In this way, “I-properties can be distinguished by their inputs” (241). Let's say that I-property Ia has an input-output mapping that is a proper subset of another I-property Ib’s input-output mapping. In this case, Ia is finer-grained than Ib.
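Put schematically (again, my gloss rather than Kampa's), where map(I) is the input-output mapping that individuates the I-property I and inputs(I) is the set of inputs over which that mapping is defined:

\[
I_a \text{ is finer-grained than } I_b \iff \mathrm{map}(I_a) \subsetneq \mathrm{map}(I_b),
\]

and since I-properties are distinguished by their inputs, on Kampa's usage a proper-subset relation among the inputs suffices for a finer grain.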
Importantly, Kampa does not give a detailed definition or informative analysis that outlines all of the possible ways in which input-properties individuate I-properties and thus determine I-property graining. However, he does give two distinct examples of I-property graining: graining with respect to the phenomenal properties of inputs, and graining with respect to the environmental properties of inputs (2018: 241–2). I will quickly address both examples in turn.
Kampa gives the following description of how inputs to perceptual belief-forming processes – perceptual experiences – can be more or less fine-grained according to their phenomenal properties:
[W]e can legitimately analyze sensory inputs in terms of phenomenal properties. If, then, inputs are analyzable in terms of properties, they are also analyzable in terms of graining, per our operative definition of “graining” … [T]he notion that one perception can be finer grained than another is, I think, fairly intuitive. That “clear visual perception” should come out finer grained than “visual perception” is unsurprising; and happily, this is just what the New Statistical Solution suggests … [A]nalyzing inputs in terms of graining makes for a nice isomorphism between perceptions and objects of perceptions. Just as “mauve” is finer grained than “purple”, so being appeared to mauvely is finer grained than being appeared to purplely. (Kampa 2018: 242)
Given how NS determines relevant types on the basis of phenomenal-property input graining, it would seem that NS can straightforwardly account for the fact that, ceteris paribus, beliefs produced by clear visual perception are more justified than beliefs produced by mere visual perception. Plausibly, the class of visual belief-forming process tokens featuring clear, non-blurry phenomenology has a higher truth-ratio than the broader class of all visual belief-forming tokens. This would make the type [clear visual perception] a statistically relevant subclass of [visual perception]. And, since phenomenal property graining can ground I-property graining, the subclass [clear visual perception] would count as admissible according to NS.
Kampa also states that I-properties can be more or less fine-grained with respect to the environmental features of input experiences. In applying this idea, Kampa asks us to consider the following two types, which I'll denote as T* and T•S for short:
- T* [inferring on the basis of sense perception]
- T•S [inferring on the basis of sense perception under favorable conditions]
Given that being formed under favorable environmental conditions is, plausibly, a justification-determining feature of a belief, Kampa notes that T•S is intuitively the correct relevant type for process tokens that instantiate T•S (2018: 240). Kampa argues that NS can deliver this result in virtue of how I-properties can be more or less fine-grained due to the external/environmental properties of experiential inputs.
[W]hat inputs a system has often (if not always) causally depends on the system's environment. Therefore, what I-properties a system has isn't simply an internal matter; a system's I-properties change in response to varying environmental conditions that present diverse inputs … [I-properties] map inputs to outputs and thus “reach beyond the system” to the surrounding environment. (Kampa 2018: 240–1)
According to this account of I-properties, it's possible to individuate an I-property on the basis of an input-output mapping whose input perceptual experiences were all formed under a specific environmental condition C. For Kampa's example, let C denote favorable environmental conditions for sense perception. This input-output mapping would constitute an I-property that is “finer-grained than T*’s I-property, since the set of inputs associated with T•S is a proper subset of the set of inputs associated with T*” (Kampa 2018: 241). Kampa notes that this fact makes T•S “an admissible subtype of T* on the New Statistical Solution,” thus allowing NS to pick out T•S as a relevant type for tokens that instantiate it (2018: 241).
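The chain of reasoning behind this example can be displayed compactly (my reconstruction, using the inputs(·) shorthand from above):

\[
\mathrm{inputs}(T{\bullet}S) \subsetneq \mathrm{inputs}(T^{*})
\;\Rightarrow\; T{\bullet}S\text{'s I-property is finer-grained than } T^{*}\text{'s}
\;\Rightarrow\; T{\bullet}S \text{ is an admissible subclass of } T^{*}.
\]

Provided T•S also satisfies NS's remaining statistical conditions, NS can then select it as the relevant type for the tokens that fall under it.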
In sum, Kampa defends the idea that I-properties can be more or less fine-grained on the basis of either the phenomenal features of inputs or the environmental features of inputs. By his lights, this allows NS to countenance admissible types that are narrow enough for his solution to the generality problem to avoid the no-distinctions problem.
4. Problems with the new statistical solution
For whatever merits NS has, I contend that it fails as an informative account of type relevance. To begin, it's unclear exactly how we are to interpret NS. In particular, there seem to be two live interpretations of Kampa's theory. According to what I call the unqualified interpretation of NS, type admissibility and I-property graining are unqualified in the following sense:
Unqualified Admissibility (UA)
Admissible types can be partially defined by any tri-level properties.
Unqualified I-Property Graining (UG)
I-properties can be more or less fine-grained with respect to any phenomenal properties of inputs and/or any environmental properties of inputs.
On the other hand, according to the qualified interpretation of NS, type admissibility and/or I-property graining are qualified in the following ways:
Qualified Admissibility (QA)
Admissible types can be partially defined only by tri-level properties of kind X.
Qualified I-Property Graining (QG)
I-properties can be more or less fine-grained only with respect to either
i. input phenomenal properties of kind Y
or
ii. input environmental properties of kind Z.
For simplicity, any version of NS that incorporates either QA or QG counts as a qualified interpretation of NS.
As I'll argue below, NS faces the following dilemma. The unqualified interpretation would entail that NS countenances relevant types that are far too narrow, thus delivering implausible justification verdicts on particular cases of belief formation. On the other hand, if we adopt the qualified interpretation, then NS fails to make substantive progress towards solving the generality problem by leaving crucial questions unanswered – questions that simply re-raise the generality problem in a slightly different form. As a result, NS either succumbs to counter-example, or it offers at best negligible progress towards solving the generality problem.
4.1 The unqualified interpretation and descriptive narrowness
To begin, UA – as opposed to QA – appears to be the most natural way of interpreting Kampa's admissibility condition. After all, in his explicit presentation of admissibility, Kampa places no limitations on the kinds of tri-level properties that can partially define admissible types (2018: 236). Secondly, UG – rather than QG – seems to be the most natural way of interpreting Kampa's account of I-property graining. As I mentioned above, Kampa doesn't offer us a detailed account of I-property graining that covers all the specific ways in which I-property graining can occur. Instead, Kampa gives the examples of phenomenal property graining and environmental property graining to be illustrative of how I-property graining works. This being the case, given that Kampa doesn't flag even the possibility that some instance of phenomenal (or environmental) graining might fail to ground an instance of I-property graining, it seems reasonable to interpret NS in terms of UG.
However, if we interpret NS according to UA and UG, then NS produces counterintuitive justification verdicts on particular cases. To see this, let's consider a case from Dutant and Olsson's criticism of Beebe's original tri-level statistical solution to the generality problem. Dutant and Olsson ask us to consider Smith, who uses a very odd cognitive procedure for identifying tree species throughout his afternoon hike. This procedure can be roughly described as follows: “Smith classifies any tree whose leaves are biggish as a maple tree” (2013: 1358). For simplicity, let's use the tri-level properties Is, Ms, and Ss to denote Smith's odd cognitive procedure used throughout his forest walk. Intuitively, Smith's tokens that instantiate [Is, Ms, Ss] produce beliefs with a very low degree of justification. After all, assuming that Smith is hiking through a normal forest – filled with many non-maple trees that nonetheless have “biggish” leaves – the [Is, Ms, Ss] procedure ends up yielding many false beliefs for Smith throughout the afternoon.
However, it does not seem as if NS, once coupled with UA and UG, can accommodate the intuitive verdict on Smith's case. For simplicity, let TB denote a broad type partially defined by [Is, Ms, Ss]. Further, let tm denote one of Smith's TB-instantiating tokens from that afternoon in which Smith actually happened to be looking at a real maple tree. As it turns out, we can isolate sub-types of TB that are both instantiated by tm and admissible so long as we assume UA and assume that I-property graining can occur with respect to any phenomenal or environmental properties of input experiences. Importantly, these sub-types are also very reliable (and hence, statistically relevant), which leads NS to deliver an implausible justification verdict for the belief produced by tm. In what follows, I'll first present the admissible type due to phenomenal I-property graining and then present the admissible type due to environmental I-property graining. After that, I'll argue that both such types are very reliable according to the most popular conceptions of reliabilism on offer.
First, there is an admissible sub-type of TB that emerges due to phenomenal graining so long as we assume UG and UA. Presumably, Smith looks at all kinds of “biggish-sized” leaves during his afternoon hike: maple leaves, birch leaves, beech leaves, oak leaves, etc. Clearly, maple leaves look different from these other leaves. They have a noticeably unique shape. In other words, the phenomenal properties constitutive of the input experiences had while looking at an actual maple leaf are different from the phenomenal properties constitutive of the experiences had while looking at these other kinds of leaves. Let L denote the set of phenomenal properties that uniquely characterize visual experiences produced when looking at objects that are in fact maple leaf-shaped. Given UG, L determines an I-property that is finer-grained than Is – call it ILs. Next, let TL denote a type instantiated by tm that is partially defined by the cognitive procedure [ILs, Ms, Ss]. Given UA, TL is also an admissible subtype of TB.
Furthermore, assuming UA and UG, there is an admissible sub-type of TB that emerges due to environmental property graining. Of course, there are many different sorts of descriptions one could give of the environmental conditions under which input experiences are produced. That being said, the following description seems to genuinely characterize one such environmental feature that an input experience could have:
K Having been produced while the subject's eye lens and retina are focused on an organism whose cells have DNA of type k – where k is the sort of DNA that only maples in fact have.
Given UG, environmental property K determines an I-property that is finer-grained than Is – call it IKs. Let TK denote a type instantiated by tm that is partially defined by the cognitive procedure [IKs, Ms, Ss]. Given UA, TK is also an admissible subtype of TB.
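To keep the two constructions in view, their parallel structure can be summarized as follows (my schematic reconstruction; I write the finer-grained I-properties ILs and IKs with explicit subscripts as I_s^L and I_s^K, and assume UA and UG throughout):

\[
\mathrm{inputs}(I_s^{L}) = \{\text{inputs with phenomenal character } L\} \subsetneq \mathrm{inputs}(I_s) \;\Rightarrow\; T_L = [I_s^{L}, M_s, S_s] \text{ is an admissible subtype of } T_B
\]
\[
\mathrm{inputs}(I_s^{K}) = \{\text{inputs produced under condition } K\} \subsetneq \mathrm{inputs}(I_s) \;\Rightarrow\; T_K = [I_s^{K}, M_s, S_s] \text{ is an admissible subtype of } T_B
\]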
As it turns out, TL and TK are very reliable according to the two most common approaches to determining reliability measurements for relevant process types. Let's call the first such theory actual-world reliabilism, according to which the truth-ratio measurement for a given type T is taken across the class of all T instances from the actual world. Given that TL is partially defined by ILs, TL’s extension comprises only instances of visual maple tree identification where the subject happens to be experiencing the unique phenomenology caused by maple tree leaves. As a result, the truth ratio across all such process instances from the actual world will be quite high. Similarly, the extension of TK – given that it's partially defined by IKs – comprises only instances of visual maple tree identification in which the input experience happens to be produced while the subject's eyes were focused on objects with maple tree DNA. Clearly, the truth ratio across all of the actual-world instances of TK will be very high (and perhaps perfect).
The other popular approach to measuring reliability is what I'll call nearby-worlds reliabilism. According to nearby-worlds reliabilism, the justification-determining truth ratio for a given process type T is taken across the class of T's instances throughout all of the possible worlds that are counterfactually close (enough) to the actual world. Given contemporary counterfactual semantics, the notion of being “counterfactually close enough” is something of a vague term of art. That said, while we might not be in a position to offer an exhaustive description of the counterfactually nearby modal space for any given possible world, few philosophers doubt that we have at least some grasp of this concept – thus making it useful in many contexts.
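Schematically, the two approaches differ only in the class of instances over which the truth-ratio is computed. On a simple frequency rendering (the notation is mine, and a simplification of whatever probability measure a reliabilist ultimately prefers):

\[
r_{\mathrm{actual}}(T) \;=\; \frac{\left|\{\, t' : t' \text{ falls under } T \text{ in the actual world and produces a true belief} \,\}\right|}{\left|\{\, t' : t' \text{ falls under } T \text{ in the actual world} \,\}\right|}
\]
\[
r_{\mathrm{nearby}}(T) \;=\; \frac{\left|\{\, t' : t' \text{ falls under } T \text{ in some sufficiently close world and produces a true belief} \,\}\right|}{\left|\{\, t' : t' \text{ falls under } T \text{ in some sufficiently close world} \,\}\right|}
\]

The contention below is that both measures come out very high for TL and TK.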
To begin, it's instructive to consider what a possible world W would need to be like in order for TL or TK to be unreliable across the possibility space near W. For TL, we can imagine this possible state of affairs:
DISGUISE
All of the relevant details are the same as the original Smith-hiking case except for this key alteration: unbeknownst to Smith, a local team of tricksters have a longstanding habit of removing virtually all of the real maple trees in Smith's environment. In addition, the tricksters have painstakingly gone through the entire forest and disguised the poplar trees to look like maples in the following fashion: using scissors, the tricksters have carved all of the poplar leaves into the exact shape of maple leaves.
Let WD denote a possible world that satisfies the description of DISGUISE. Notice, in WD, some of Smith's token processes instantiate TL, given that he'll still experience L phenomenology when looking at the altered poplar leaves. This being the case, it follows that type TL generates mostly false beliefs in WD. Given that the worlds counterfactually close to WD only include slight alterations from WD, it's reasonable to assume that TL has a low truth ratio across these nearby possibilities as well.
Let @ denote the actual world in which token tm occurs. For our purposes, it's crucial to note that WD is, counterfactually, very far away from @. DISGUISE involves radical divergences from the actual circumstances in which tm occurs. Moreover, I contend that cases like DISGUISE are instructive in a more general sense: they highlight just how different the world would have needed to be in order for most instances of TL and TK to have generated false output beliefs. Given the features of @ – and given that @’s nearby worlds feature only minute differences from @ – it seems that the vast majority of TL and TK instances that are counterfactually close to @ are ones in which Smith makes maple-ascribing judgments about things that are genuine maples.
As we've seen, arguably the two most influential ways of understanding reliability measurements yield the untoward result that TL and TK are very reliable and hence statistically relevant partitions of TB. Given that these types are also admissible according to UA, it follows that they are genuine candidates for being the relevant types for tm according to Kampa's NS theory of type relevance. But this is clearly the wrong result, as the belief produced by tm, intuitively, has a very low degree of justification.
It looks as if NS succumbs to a problem very similar to the one that besets TS – it determines relevant types that are actually far too narrow. Plausibly, the extension of tm’s relevant type should include instances of Smith's odd belief-forming procedure from the afternoon hike in which he incorrectly classifies non-maples as maples. However, NS – according to UA and UG – cannot deliver this result.
Here, one might reasonably think that my narrowness-based counter-example to NS emerges simply because reliabilism in general suffers from a much broader objection – namely, the idea that process reliability is insufficient for justification. The most famous case marshaled in defense of the insufficiency objection is Laurence Bonjour's Norman the Clairvoyant scenario. In this case, Norman finds himself forming beliefs about the far-off location of the president of the United States, and as it turns out, these beliefs are always correct (Bonjour 1985: 41). Norman, however, “has no evidence” in favor of these beliefs nor evidence to believe that he has a reliable clairvoyant power (41–2). As Bonjour describes, “[f]rom [Norman's] subjective perspective, it is an accident that his belief [about the president] is true” (43). By Bonjour's lights, this is a straightforward case of reliable, yet unjustified, belief formation (42).
However, there is a key structural difference between my narrowness objection to NS and the insufficiency objection to reliabilism. The narrowness objection notes that reliabilism, as interpreted according to NS, UA, and UG, generates implausible justification verdicts because it identifies intuitively unreliable cases of belief formation as being highly reliable. On the other hand, the insufficiency objection to reliabilism asserts that there are intuitively clear cases of reliable belief formation that nonetheless result in unjustified belief. Due to this structural difference, there are two response strategies open to the reliabilist with respect to the insufficiency objection that aren't open to Kampa with respect to the narrowness objection.
In response to cases like Norman, reliabilists could, first, attempt to argue against the additional internalist necessary conditions on justification that Bonjour suggests. Second, reliabilists could possibly deny that Norman's process is a clear case of reliable belief formation. At this point in the dialectic, there is no decisive and widely accepted solution to the generality problem. Hence, it's open to the reliabilist to suggest that, according to the correct theory of type relevance, it could turn out that the relevant type for Norman's process is something like [forming a belief in a way that seems accidental from the subject's perspective]. At the very least, it's by no means clear that this type is reliable.
Notice, neither of these two response strategies is available to Kampa with respect to defending NS from the narrowness objection. This is precisely because Kampa is engaged in offering a particular answer to the generality problem on behalf of reliabilism. I have argued that intuitively unreliable instances of belief formation (like Smith's) turn out to be highly reliable according to NS once interpreted according to UA, UG, and the most common approaches to measuring reliability. Crucially, this needn't be a problem that plagues any response to the generality problem, as it's clearly the case that the unique features of Kampa's view – like admissibility and objective homogeneity – are what determine the overly narrow type assignments of NS according to the unqualified interpretation. Responses to the generality problem that omit these factors needn't suffer the same fate.
4.2 The qualified interpretation and re-raising the generality problem
As the previous discussion illustrated, the Smith case functions as a counter-example to NS only on the assumption that both UA and UG are correct interpretations of Kampa's view. For instance, if QG is correct, then we cannot just assume that I-property graining can occur with respect to either phenomenal property L or environmental property K. For all that's been said, L might not fall under phenomenal property kind Y and K might not fall under environmental property kind Z. Also, if QA is part of the correct interpretation of NS, then we cannot assume that TL or TK are admissible sub-types. While I noted that an unqualified interpretation of Kampa's theory appears to be more natural than a qualified interpretation, I needn't commit to a particular interpretation here as I present my dilemma. In what follows, I argue only that, were QA or QG the correct interpretation of NS, NS would fail to make substantive progress towards solving the generality problem.
To see this, consider once again the original puzzle raised by the generality problem. Without some principled way of typing or categorizing a belief-forming process token, it's unclear how to determine whether a given belief has justification according to reliabilism. On the assumption that only types (rather than tokens) can be measured for reliability, we need some clear conception of a token's relevant type in order to assess the resultant belief's degree of justification. Such a clear conception is what we need from an acceptable theory of type relevance. Reliabilists can't rest content with leaving the notion of relevant type undefined or unexplained in their theory.
Now, let's assume that QG is the correct interpretation of NS. In this case, NS would leave two crucial notions undefined in the theory: phenomenal properties of kind Y and environmental properties of kind Z. Now, there's nothing inherently problematic with leaving a key notion undefined in a philosophical theory. For instance, the reasonability of a reliabilist theory of justified belief doesn't seem to depend on whether reliabilists can offer a supplementary account of belief. However, a theory of justification that invokes kind Y or kind Z without further analyzing these notions seems to raise the exact same sort of problematic ambiguity that's raised by any version of reliabilism that leaves relevant type undefined. By leaving these notions unanalyzed, we don't know how to apply the theory in particular cases to see whether it delivers intuitively plausible justification verdicts.
For example, Kampa claims that environmental property graining allows NS to yield the correct justification verdict on cases of visual belief formation under environmental conditions that are “favorable” (2018: 239–40). Moreover, Kampa suggests that being formed under favorable conditions can actually constitute part of the relevant type for such a token. According to NS, this is because the I-property partially defining this relevant type is determined by a set of inputs that all share a particular sort of environmental condition description. But what exactly constitutes the relevant sense of the “environmental conditions” under which a belief is formed? As we saw with the Smith example, we shouldn't accept that property K can constitute (or partially constitute) the relevant environmental conditions for token tm. But if K won't do, which environmental properties can constitute the relevant environmental conditions for a relevant type? This is just another way of asking what kind Z really amounts to. Without analyzing kind Z any further, we're left with a theory of type relevance that, first, claims that the environmental conditions under which a belief is formed determine whether the belief is justified, and second, offers no guidance for how to conceive of or demarcate these environmental conditions. As a result, we're left with little understanding as to how to test NS (and reliabilism more generally) to see if it can deliver plausible justification verdicts.
The same problem emerges for phenomenal property graining. If L cannot constitute (or partially constitute) the phenomenal features of the I-property that partially defines the relevant type, then which phenomenal features can? Without an analysis of kind Y, on QG we're left with a theory that tells us that a particular phenomenal feature description partially constitutes the relevant type description, but then doesn't say what this particular phenomenal feature is. Once again, it still seems that we're quite far from having a testable, informative reliabilist theory. Lastly, even if one grants UG but then accepts QA, we're still left with little insight into the workings of NS. Without an analysis of kind X (for tri-level properties), we don't know which finer-grained tri-level properties can partially define relevant types, and hence, we won't know which statistically relevant sub-types can determine whether a belief is justified. It is in this sense that adopting the qualified version of NS comes with the cost of simply re-raising the generality problem.
5. Conclusion
At first glance, it may have seemed that Kampa's new statistical solution to the generality problem was a significant improvement over Beebe's original statistical solution. But as we have seen, NS is problematic in its own right. More specifically, I have argued that NS faces the following dilemma. If we adopt the unqualified interpretation of NS, then NS countenances types that are far too narrow. On the other hand, if we adopt a qualified interpretation of NS, then we currently lack the information necessary for discerning whether NS delivers intuitively plausible justification verdicts on particular cases. Either way, in its current form, NS fails to make substantive progress towards answering the generality problem.
Author ORCID
Jeffrey Tolly, 0000-0002-3431-5161