There is a widespread attitude in epistemology that, if you know on the basis of perception, then you couldn't have been wrong as a matter of chance. Despite the apparent intuitive plausibility of this attitude, which I'll refer to here as “stochastic infallibilism”, it fundamentally misunderstands the way that human perceptual systems actually work. Perhaps the most important lesson of signal detection theory (SDT) is that our percepts are inherently subject to random error, and here I'll highlight some key empirical research that underscores this point. In doing so, it becomes clear that we are in fact quite willing to attribute knowledge to S that p even when S's perceptual belief that p could have been randomly false. In short, perceptual processes can randomly fail, and perceptual knowledge is stochastically fallible. The narrow implication here is that any epistemological account that entails stochastic infallibilism, like safety, is simply untenable. More broadly, this myth of stochastic infallibilism provides a valuable illustration of the importance of integrating empirical findings into epistemological thinking.
In arguing against the stochastic infallibility of perceptual knowledge, this paper is structured in the following way. First, I introduce precisely what I have in mind by “stochastic infallibilism” and highlight different ways we might see it manifested in epistemological thought (§1). Next, I discuss empirical findings that demonstrate, at both the neural and behavioural level, how perception is subject to random error (§2). With this in mind, I argue that we are still quite willing to attribute knowledge that p, even when it is clear that the perceptual belief that p could have been false due to random error (§3). After this, I outline key implications of our willingness to attribute stochastically fallible perceptual knowledge (§4), before closing by addressing a couple of objections I imagine might be raised against this account (§5).
1. Stochastic infallibilism for perceptual knowledge
To begin, I first want to discuss what I've opted to call “stochastic infallibilism”, which I will consider here specifically as it pertains to perceptual knowledge. Like infallibilism generally, stochastic infallibilism is intuitively quite appealing. However, unlike proper infallibilism, it is by all assessments the prevailing attitude in epistemology. If S knows that p on the basis of her perception, yes p could have been false – so the idea goes – but p could not have been randomly false. While not usually defended explicitly, we see this principle reflected in a variety of epistemological accounts. In this section, I will first say a bit on what exactly I mean by “stochastically infallible”, after which I will highlight two key areas in which we might see this idea expressed, albeit largely implicitly: (1) Epistemological discussion of perceptual error and (2) epistemological critique of the safety programme. Finally, in the interest of providing a complete picture, I'll close by mentioning two dissenting positions.
Let's start with the principle of stochastic infallibilism.
Stochastic Infallibilism for Perceptual Knowledge: If S knows that p on the basis of her perception, then S's perceptual belief that p could not have been stochastically false.
The first point of order is to specify what I mean here by “stochastically false”. As a starting point, although I certainly don't mean to import the technical connotations that might accompany formal accounts of stochastic processes (e.g. see Paul and Baschnagel 2013), here I will nonetheless be thinking of “stochastic” as something like the property of a process. That is, some processes are stochastic, and others are not. Accordingly, we might then understand “stochastically false” to mean something like “false as the result of a stochastic process”, which then provides us with an alternate expression of stochastic infallibilism: Stochastic processes are not perceptual-knowledge-conducive processes.
Now let's fill in the details more. A stochastic process is, roughly speaking, a random process. A bit more precisely, it is a truly random process, one for which the outputs of the process are only probabilistically related to the inputs. The crucial contrast here is with chaotic processes, the outputs of which might appear random despite being deterministically related to the inputs. Paradigmatic examples of stochastic processes include physical diffusion processes (see Paul and Baschnagel 2013: Ch. 3) and neural transmission (discussed below, §2). However, the most familiar stochastic process in epistemology is likely the lottery. On an idealised lottery, whether or not a given ticket is a winner is simply and truly a matter of chance. Critically, this results in a decoupling of modal ordering from probability. Ordinarily, we think of probable events as being modally closer than improbable events. However, because the lottery is a truly random process, very little needs to change in the actual world for any given ticket to be a winner. Despite the extremely low probability that ticket X is a winner, there will always be very close possible worlds in which it is (see Pritchard 2015: §4 for more on this). This is precisely what I mean by a “stochastic process”, a process that is lottery-like: a truly probabilistic process in which all possible outcomes are modally equidistant from the actual world. This point is especially important for understanding the relationship between stochastic infallibilism and the safety condition on knowledge, which I discuss below.
Because they are truly random processes, one important hallmark of stochastic processes is that no possible outcome of a stochastic process demands any more explanation than any other possible outcome. While winning a (fair) lottery is far less likely than losing the lottery, we don't expect the lottery winner to be able to explain her win or even for it to be explainable in principle.[1] There is no explanation for a lottery win; it just is. If the lottery winner attempts to explain her win, perhaps along superstitious lines, we immediately recognise such an explanation to be faulty. We might contrast this with the extremely non-stochastic process of a fixed lottery. S winning a fixed lottery of course demands some sort of explanation. Was S in on the fix? Why did the fixers choose her to win? What is the explanation? While winning a fair lottery is something that just happens, winning a fixed lottery doesn't just happen. This point is crucial moving forward. Any possible outcome of a stochastic process, regardless of its probability, could just happen, demanding no more explanation than even far more likely outcomes of that process.
For the remainder of this section, I want to discuss the epistemological status of stochastic infallibilism. As indicated above, this is not something that, as far as I am aware, has been actively defended. It is more of an attitude than a full-blown philosophical thesis, and I'm in part relying on the introspection of the reader to confirm its prevalence. Nevertheless, we see this attitude reflected in various ways across the epistemological literature, and it is clearly common to think about perceptual knowledge along the lines of stochastic infallibility. Here I'll focus on two quite different ways in which we might observe this attitude. The first of these is how epistemologists tend to think about perceptual error. Namely, it isn't generally recognised that perception is susceptible to stochastic error. This omission is especially striking when considered from a more empirically minded perspective, as we'll do in the following sections. Beyond this, the tendency to think of knowledge as stochastically infallible is reflected in the epistemological credibility of the safety programme. As I will argue below, the safety condition on knowledge entails stochastic infallibilism. However, despite the mounting criticisms of safety, it has yet to be challenged along the lines of random perceptual error. Again, this underscores that the prevailing attitude among epistemologists is that perceptual knowledge is stochastically infallible.
First, it is often assumed that perception isn't susceptible to random error. For a good example of this, consider the following from Martin Smith:
If I visually perceive that Bruce's laptop is displaying a blue background when in fact it is not, then there has to be some explanation as to how such a state of affairs came about. Possible explanations have already been floated above – perhaps I'm hallucinating, or have been struck by colour blindness, or am subject to an optical illusion etc. It can't be that I just misperceive – there has to be more to the story. (Smith 2016: 51)
As discussed above, to “just misperceive” is to randomly misperceive. Accordingly, we can understand this as Smith thinking of perceptual knowledge as being stochastically infallible. After all, if perception cannot be randomly wrong, then this means that perceptual knowledge cannot be randomly wrong either. There might be space to push back here if one takes “perception” in a narrow sense to only include very low-level neural processes, maintaining that it cannot be randomly wrong, but that subsequent processing of perceptual content can. While this might allow you to get away with saying that perception can't be randomly wrong without accepting stochastic infallibilism for perceptual knowledge, it's not at all clear how one might motivate such a claim (as should become clear in the next section).
Next, I want to discuss a very different way in which we might observe the influence of the attitude that knowledge is stochastically infallible. At least one prominent epistemological account – safety – obviously entails stochastic infallibilism. However, despite having accumulated a wealth of different critiques (see below), no one has yet argued that safety cannot be a necessary condition of knowledge because of cases in which perceptual knowledge could have been randomly wrong. As epistemologists are of course a clever lot, rather than the whole of the discipline collectively missing this connection, I think the explanation is clear: On the whole, epistemologists simply don't think of perceptual knowledge as being stochastically fallible. Thus, the fact that safety entails stochastic infallibilism doesn't present as a problem. In order to drive this point home, let's take a closer look at the safety condition.
On the safety condition for knowledge, roughly, S knows that p only if S's belief that p could not have easily been false. A typical way we might understand this is in terms of possible worlds (e.g. see Pritchard 2012): S knows that p only if there are no close possible worlds in which S believes falsely that p. Immediately, we note that this effectively filters out any beliefs that could have been randomly false, simply because random possibilities populate worlds very close to the actual world. Returning to the example of the lottery, if I play a truly random lottery, there will be very close possible worlds in which I win the lottery, regardless of how improbable my win may be. As mentioned above, this derives from the fact that very little needs to change in the actual world for a randomly possible event to occur. Insofar as modal ordering might be expressed as similarity between worlds, this means that extremely improbable random possibilities in the actual world can obtain in even the closest possible worlds. Accordingly, if p could have been randomly false, then there are close possible worlds in which p is false (again, regardless of the probability that p is false). In short, if knowledge is safe, then knowledge is stochastically infallible, which of course means that perceptual knowledge is stochastically infallible.
Before continuing, I want to put to rest any nascent concern that perhaps by “stochastic infallibility” I'm somehow just referring to the safety condition with a different name. The reason that safety entails stochastic infallibilism is not because they are identical, but rather because safety is a (substantially) broader claim than stochastic infallibilism. There are countless non-random reasons an actual-world true belief might be false in close possible worlds, and, with the exception of lottery-type scenarios, these tend to be the cases that epistemologists focus on. Stochastic infallibilism excludes a far narrower class of beliefs from knowledge than the safety condition, and in this way we might understand how it can enjoy a broader appeal.
As indicated above, a multitude of challenges have been levied against the safety condition. These challenges are not confined to a decade ago, when safety perhaps was in its prime (e.g. see Comesaña 2005; Greco 2007), but continue to be raised regularly (e.g. see Bricker 2019; Helton and Nanay Forthcoming). However, while there is no shortage of arguments against safety, not one has objected to the fact that it entails stochastic infallibilism or something like it. As I will discuss a bit more below (§4), if one thinks that there is stochastically fallible knowledge, then a straightforward critique of safety follows: Since safety entails that stochastically fallible knowledge is not in fact knowledge, safety is mistaken. The fact that no such critique has been advanced, especially taken together with the “no random error” attitude towards perceptual processing, indicates a systematic failure to recognise the possibility of stochastically fallible knowledge.
To close out this section, in the interest of transparency, it is important to note that certain philosophical accounts explicitly reject the idea that knowledge is stochastically infallible. One such account is my own objection to the modal account of risk (Bricker 2018), which I will discuss in more detail in §3. Here, I want to point out Williamson's contention, in no uncertain terms, that randomly fallible processes can still be knowledge-conducive:
Our perceptual processes are subject to random error. The causal connection between the environment and our perceptual beliefs about it is no doubt probabilistic, but it does not follow that those beliefs rest on probabilistic evidence. Moreover, they may constitute knowledge simply because perceiving counts as a way of knowing. (Williamson 2000: 252)
While I quite agree with what Williamson is saying here, this assertion is striking for a number of reasons. Most immediately puzzling is that earlier in the same volume (Williamson 2000: 147), he endorses a version of safety. Given the above discussion, this appears to be an obvious contradiction. Perhaps Williamson's unique understanding of safety is the key to resolving this apparent contradiction – he of course doesn't take it to be a necessary condition on knowledge in the sense of the analytic project – but this is beside the point for this paper, and I won't discuss it more here. Instead, I simply want the reader to notice how heterodox this statement feels. As with the broader claims of Williamson's Knowledge and its Limits, the idea of perceptual belief being subject to random error despite constituting knowledge likely doesn't sit right. This feeling, more than anything else, is the myth of stochastic infallibilism. Over the next two sections, as if accidentally recapitulating Williamson's thought, I'll demonstrate first that perception is susceptible to random error, and then that such stochastically fallible beliefs can still be known.
2. Stochastic neurons, stochastic perception
The purpose of this section is to lay the foundation for a critique of stochastic infallibilism. In order to do this, here I want to highlight two related points that are crucial moving forward: On both (i) the low-level, neuronal description of perception and (ii) the high-level description of perception afforded by signal detection theory,[2] ordinary perception is stochastically fallible. It is uncontroversial that the relationship between neuronal input and neuronal output is fundamentally probabilistic, which makes it unsurprising that the relationship between stimulus and percept is equally probabilistic at the behavioural level. In this section, I will discuss both points in turn, reserving discussion of epistemological consequences for the following sections.
The first point I want to stress in this section is that the activity of individual neurons is uncontroversially stochastic. While it might be tempting to think of neural activity in a binary fashion – i.e. certain inputs cause a neuron to fire and others don't – this is simply incorrect. Due to the physical mechanisms of neuronal transmission, the relationship between a given neuron's synaptic input and synaptic output is fundamentally probabilistic. There are a multitude of reasons for this (for an excellent overview, see McDonnell and Ward 2011: table 2), and here I want to mention two of the most fundamental and widely understood: (1) Synaptic transmission and (2) ion channel opening/closing are both fundamentally probabilistic.
In order to understand the stochastic nature of synaptic transmission, a bit of basic background might be necessary. The functional connections between neurons, through which one neuron might transmit a signal to another, are known as “synapses”. Generally speaking,[3] the “pre-synaptic” terminal of the upstream neuron receives an electrical signal known as an “action potential”, which then triggers the release of specific molecules called “neurotransmitters”. The neurotransmitter might then trigger an action potential in the downstream, “post-synaptic” neuron by binding to specific receptor sites on the post-synaptic neuron, thereby propagating the neuronal signal. This process is, crucially, stochastic in at least one key respect: The relationship between incoming action potential and subsequent neurotransmitter release is inherently probabilistic. That is, “for each action potential, neurotransmitter release has a certain likelihood of occurrence”, which is “a consequence of the inherently stochastic nature of the molecular and cellular processes that drive [neurotransmitter release]” (Branco and Staras 2009: 373). Interestingly, this probability seems to vary considerably, even between different synapses branching from the same axon (Branco and Staras 2009: 376). This fundamentally probabilistic nature of neurotransmitter release is supported by a wide range of studies (Tsodyks and Markram 1997; Traynelis and Jaramillo 1998; Branco et al. 2008; Körber and Kuner 2016; Sanderson et al. 2018).
The key takeaway here is that “sometimes stimulation does not result in synaptic transmission” (Branco and Staras 2009: 379), and because it is inherently probabilistic, this will occur at random. In short, whether a neural input triggers an output is stochastic.
Another key reason for the stochastic behaviour of individual neurons is the inherently probabilistic nature of ion channels. Again speaking very simplistically, action potentials propagate through neurons via the opening and closing of ion channels in the neuronal membrane. However, not only is the opening and closing of these channels stochastic, but “voltage fluctuations attributable to stochastic [ion] channel gating impact on action potential output” (Kole et al. 2006: 1677; see also Lecar and Nossal 1971; Diba et al. 2004). That is, random variation in ion channel behaviour in any given neuron will impact the output signal of that neuron. Along with neurotransmitter release, this is another sense in which the relationship between neuronal input and neuronal output is fundamentally probabilistic. Moving on from the level of individual neurons, I now want to discuss what this random variability in neural response means for the relationship between perceptual stimulus and conscious percept.
On an intuitive level, it is tempting to think of one's percepts as entirely determined by the physical properties of whatever stimulus one happens to be perceiving: For any given subject S, identical stimuli will elicit identical percepts. However, as intuitive as this may be, it is of course mistaken. First, one of the most important lessons from SDT is that “even stimuli presented repeatedly under identical conditions (intensity, etc.) actually [yield] a continuously varying range of perceived stimulus intensities” (Winer and Snodgrass 2015: 719). Put another way, “physically identical stimuli can elicit variable percepts” (Amitay et al. 2013: 1). Crucially, “this occurs even when no stimulus at all is presented” (Winer and Snodgrass 2015: 719). Even in the absence of a stimulus, our percepts will, to varying degrees, resemble the percepts produced by a stimulus. As this resemblance is Gaussian in nature, there is always a non-zero chance that the absence of a stimulus will still appear enough like the stimulus for us to believe that we were in fact presented with the stimulus. This is the essence of the “false alarm” (FA), perceiving a stimulus when none is presented (for more, see Winer and Snodgrass 2015). As I will discuss in the next section, despite intuitions to the contrary, it is actually fairly easy for us to perceive a stimulus when none is presented. Again, the reason for this is that percepts display continuous trial-to-trial variability, even given physically identical stimuli.
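This picture of the false alarm can be made concrete with a small simulation. The sketch below implements the standard equal-variance SDT model; the parameter values (the signal strength d′ and the criterion c) are illustrative assumptions of mine, not values drawn from any study cited here. On noise-only trials, the internal response is still Gaussian, so it sometimes crosses the decision criterion and the observer reports a stimulus that was never presented.

```python
import random
from statistics import NormalDist

random.seed(0)

# Equal-variance SDT sketch (illustrative parameters): on each trial the
# internal response is Gaussian, centred on 0 when no stimulus is present and
# on d_prime when one is. The observer reports "stimulus" whenever the
# response exceeds a fixed criterion c.
d_prime = 1.0   # signal strength, in units of the noise standard deviation
c = 1.5         # decision criterion

def trial(stimulus_present: bool) -> bool:
    """Simulate one trial; True means the observer reports a stimulus."""
    mean = d_prime if stimulus_present else 0.0
    return random.gauss(mean, 1.0) > c

n = 100_000
false_alarms = sum(trial(False) for _ in range(n)) / n

# Analytically, the false-alarm rate is P(N(0,1) > c) = 1 - Phi(c).
predicted = 1 - NormalDist().cdf(c)
print(f"simulated FA rate: {false_alarms:.4f}, predicted: {predicted:.4f}")
```

Because the internal response is Gaussian, the false-alarm rate is non-zero for any finite criterion, however conservatively it is set; this is the formal sense in which perceiving a stimulus when none is presented can just happen.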
There are a number of widely recognised reasons why identical stimuli will produce a broad range of different percepts in S. Chief among these are variation in “the context of stimulus presentation or attention” (Bernasconi et al. 2011: 17971). The degree to which we attend to a stimulus will of course modulate the conscious perception engendered by that stimulus, as will concurrent processing activated by the context of the stimulus, etc. Were these the only reasons for variable percepts from physically identical stimuli, none of this would be relevant for our purposes. After all, when we misperceive a stimulus because of inattentiveness, for example, this isn't the sort of error that “just happens”, but instead dictates a specific explanation (i.e. that S was inattentive). However, even controlling for attention, experimental context, etc., percepts still display a Gaussian trial-to-trial variability. The foundational assumption of signal detection theory is that the origin of this intractable variability is “random variation in neural efficiency”/“randomness in neural responses”/“neural noise” (Macmillan and Creelman 2005: 273; Lu and Dosher 2014: 228; Winer and Snodgrass 2015: 719). In the sections that follow, I will lay out both what sort of perceptual variability it is reasonable to expect from this neural randomness, as well as what this means for epistemology and stochastic infallibilism. First, however, I want to say a bit more on the motivation behind this assumption.
One way we might understand the contention that randomness in neural response drives perceptual variability is in terms of its explanatory power. As discussed above, it is well established that neuronal responses to identical inputs will be randomly variable. Accordingly, in the absence of any other apparent changes, this is the best explanation for how our percepts can still vary trial-to-trial. Moreover, this assumption then provides us with a powerful conceptual framework (SDT) for the empirical description of perception. However, beyond all this, there is also direct empirical evidence that conscious perception is impacted by random neural fluctuations. Here, I will focus on two key findings: (1) Random variation in brain activity predicts conscious perception, and (2) the magnitude of random neural variation predicts perceptual performance.
The most direct evidence available that random neural variation contributes to variation in conscious perception comes from EEG experiments observing brain activity in response to identical stimuli. As there is random trial-to-trial variation in the electrophysiological signal, corresponding in part with random variation at the neural level, this allows researchers to test whether random differences in brain activity are associated with differences in conscious perception. Let's take as an example one of the experiments performed by Bernasconi et al. (2011). Say participants are presented with two (unbeknownst to them) identical audio tones and told to judge which one was of a higher pitch. If random neural variation produces variation in perception, we would expect the participants to in fact perceive the tones to be of slightly different pitch, allowing them to complete the discrimination task as normal. Moreover, as we expect there to be some systematic relationship between neural activity and percept, collecting data over enough trials would allow researchers to reliably predict a participant's percept on the basis of the corresponding EEG signal, again assuming that there is in fact perceptual variation attributable to random neural variation. This is precisely what Bernasconi et al. observed. Not only did participants report identical tone pairs as different, but whether they reported the first or second tone to be of higher pitch could be predicted from EEG data, demonstrating “the contribution of random fluctuations in brain activity to conscious perception” (Bernasconi et al. 2011: 17971). Moreover, they reported the same sort of results for discrimination of auditory stimuli by duration, and similar findings have been reported by Amitay et al. (2013).
At this point, one might wonder how we can be confident that observed EEG variation is actually due to random variation in neural response and not trial-to-trial variation in attention. The answer is simply that changes in attention in no way present like random neural variation, with attentional increase reducing the magnitude of – or “quenching” – neural noise (see Mitchell et al. 2007; Cohen and Maunsell 2009). This ties well into the second type of direct evidence we have for the contention that random neural variation impacts conscious perception: A given individual's performance on a perceptual task is related to the magnitude of her neural noise during that task, i.e. how randomly variable her neural responses are to a given stimulus (see Arazi et al. 2017). If neural variability didn't result in perceptual variability, it would be difficult to explain why individual-to-individual differences in neural noise would have any impact on perceptual performance. However, this observation fits perfectly into the assumption that neural noise engenders perceptual variation. Higher noise magnitude would mean more variability in percept, which is less conducive to perceptual performance. Taking this together with everything discussed above, it is clear that random variation in neural response underlies variation in conscious perception.
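The link between noise magnitude and perceptual performance admits a back-of-the-envelope illustration. The sketch below assumes the standard SDT model of independent Gaussian percept noise; the stimulus difference delta and the sigma values are hypothetical numbers of mine, not figures from Arazi et al. Under these assumptions, discrimination accuracy falls monotonically as the noise grows.

```python
from statistics import NormalDist

# Two stimuli differ by delta units; each percept receives independent
# Gaussian noise with standard deviation sigma. The judgement is correct when
# the perceived difference keeps the true sign, so
#   accuracy = Phi(delta / (sqrt(2) * sigma)).
phi = NormalDist().cdf
delta = 10.0  # true stimulus difference (illustrative, e.g. in ms)

def accuracy(sigma: float) -> float:
    """Probability of a correct discrimination given percept noise sigma."""
    return phi(delta / (2 ** 0.5 * sigma))

for sigma in (5.0, 10.0, 20.0):
    print(f"noise sigma = {sigma:5.1f}  ->  accuracy = {accuracy(sigma):.3f}")
```

Doubling the noise pushes accuracy towards chance (0.5), which is exactly the qualitative pattern the individual-differences findings require.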
In short, it is quite clear that perception is stochastically variable at both the neuronal and behavioural levels. In what follows, I'll argue that because this random variation can result in false perceptual beliefs, the result is that perceptual knowledge is stochastically fallible.
3. How can stochastically fallible processing generate perceptual knowledge?
In the previous section, I outlined the stochastic nature of perceptual processing: Physically identical stimuli will produce a continuously variable distribution of possible percepts, deriving, at least in part, from the fundamentally random nature of neural activation. Here, bringing epistemology and stochastic infallibilism back into view, I want to explore how such a process is capable of producing knowledge. In doing so, this section will look at three examples in which we are perfectly willing to attribute knowledge in violation of stochastic infallibilism. The first of these is a modified version of the “Mia” example I have introduced elsewhere (Bricker 2018). The second is derived from the above Bernasconi experiment, and the third arises from the ubiquitous perceptual error known as the “phantom vibration”. Accordingly, over the course of this section, it should become clear that a general commitment to stochastic infallibilism is mistaken. At least in some cases, we have no problem attributing knowledge that p to S, even if S could have been randomly wrong in believing that p.
Before discussing these cases, however, I first want to clarify what I take to be the crux of this question of how stochastic processing can produce perceptual knowledge. There is of course a trivial sense in which all perceptual knowledge is likely produced by stochastic processes, given that all perception ultimately supervenes upon randomly variable neural correlates. This, however, isn't especially interesting for our purposes, as the random variation in percept might be far more fine-grained than the knowledge we attribute. For example, in the Bernasconi et al. study discussed above, differences in tone durations were taken to be somewhere in the ballpark of less than 20 ms (Bernasconi et al. 2011: 17972).[4] Attributions of knowledge to a subject that (i) there was a tone, (ii) that the tone was short, (iii) that the tone was about as long as the other tone in the pair, etc., are all far coarser in grain than ± 20 ms. Accordingly, the truth-values of the associated beliefs will never hinge on these expected error possibilities, and nothing all that epistemologically interesting is going on. What I want to address is whether this is the only way that stochastic processes might produce perceptual knowledge. What if the random variation in percept is just as coarse-grained as our epistemic evaluation? Might we still attribute knowledge then? My intention with this section is to highlight that yes, sometimes we do. That is, we will sometimes attribute knowledge to S that p even when p might have been stochastically false, in violation of stochastic infallibilism.
Let's begin with a modified version of the “Mia” example. While I originally designed this example with the philosophy of risk in mind, it is worth discussing here, as it offers a good philosophical introduction to the stochastically fallible cognitive ability. Let's begin with the description of the vignette's protagonist, Mia:
Mia … has a special skill for discriminating between very small differences in the weights of objects. For scratch-off lottery tickets, for which the winners weigh slightly more than the losers due to slight manufacturing differences, this means that Mia can identify whether a ticket is a winner or loser with 99% accuracy. That is, the probability that she judges that a ticket is a winner given that it is a winner is 99%, and the probability that she judges that it is a loser given that it is a loser is also 99%. (Bricker 2018: 203)
It should be noted here that I explicitly posited that this ability is “subject to a small amount of random error” (Bricker 2018: 204).[5] Now let's imagine that Mia handles a lottery card and on that basis comes to believe that it is a loser. Given her perceptual capacity, and assuming that this is a regular lottery (i.e. the odds of a win are extremely long), the posterior probability that her belief is true will be exceedingly high. We might fill in the details so that the truth of her belief is as arbitrarily probable as we like: 999,999 in 1,000,000; 9,999,999 in 10,000,000; etc. The actual value doesn't matter. Now let's assume that her belief is true. Her ticket is a loser. Does Mia know that her ticket is a loser? Yes, clearly she does. Although we perhaps[6] aren't willing to attribute knowledge that a lottery ticket is a loser based solely on its probability, the fact that Mia's belief is formed on the basis of a reliable perceptual capacity cements our judgement that she does in fact have knowledge.
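To make the vignette's probabilities concrete, here is a worked application of Bayes' theorem, under the illustrative assumption that the prior probability of a winning ticket is one in a million (the vignette leaves the exact odds open):

```latex
\begin{align*}
P(\text{loser}\mid \text{judges loser})
  &= \frac{P(\text{judges loser}\mid \text{loser})\,P(\text{loser})}
          {P(\text{judges loser}\mid \text{loser})\,P(\text{loser})
           + P(\text{judges loser}\mid \text{winner})\,P(\text{winner})}\\[4pt]
  &= \frac{0.99 \times (1-10^{-6})}
          {0.99 \times (1-10^{-6}) + 0.01 \times 10^{-6}}
   \;\approx\; 1 - 1.01\times 10^{-8}.
\end{align*}
```

As the text notes, the exact prior doesn't matter: longer lottery odds simply push the posterior even closer to 1.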
Crucially, however, we judge that Mia knows that the ticket is a loser despite the fact that she could just have been wrong in her belief that the ticket was a loser: the 99% probability that she judges that the ticket is a winner given that it is a winner means that 1% of the time she will judge that a winning ticket is a loser. Assuming that these false loser-judgements are due to random noise in her perceptual processing, all of her true beliefs that the ticket is a loser, which we judge to constitute knowledge, are stochastically fallible. In short, it seems that we are willing to attribute knowledge in violation of stochastic infallibilism, suggesting that such a principle does not reflect our actual practice of knowledge attribution.
It's important not to get confused by the multiple layers of randomness in the above example. For our purposes, it is not at all necessary that the belief is about a stochastic process (i.e. the lottery), only that it is formed via one.[7] In order to illustrate this, let's consider a more realistic scenario involving actual-world perception, a modified version of the Bernasconi et al. (2011) experiments discussed above. Recall that in those experiments, participants were presented with pairs of identical auditory stimuli, which they perceived to be slightly different in duration (or pitch) due to random fluctuations in neural response to the stimuli. Let's imagine instead that the stimuli pairs weren't identical, and assume, in keeping with SDT, that the random variation in percept duration (or pitch) is normally distributed. This then allows us to posit a difference in magnitude between the stimuli, n ms (or n Hz), such that the probability of S mistakenly perceiving that stimulus 1 is longer (or higher) than stimulus 2 due to that random variation is whatever we like. Let's just posit that n is such that S randomly misperceives stimulus 1 to be longer 1% of the time. This, it should be stressed, is perfectly in keeping with the SDT model of how perception actually works.
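Under the assumed normal noise model, the required difference can be made explicit (a sketch; σ, the trial-to-trial standard deviation of the percept difference, is my notation, not a value from the experiments). Write D for the perceived difference on a trial in which stimulus 2 is truly longer by n, so that D ∼ N(n, σ²) and an error occurs when D < 0:

```latex
P(\text{error}) = P(D < 0) = \Phi\!\left(-\frac{n}{\sigma}\right) = 0.01
\quad\Longrightarrow\quad
n = \sigma\,\Phi^{-1}(0.99) \approx 2.33\,\sigma,
```

where Φ is the standard normal CDF. Any desired error rate can thus be dialled in by scaling the stimulus difference n relative to the noise.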
Now, let's imagine a trial in which S correctly perceives that stimulus 1 is longer in duration than stimulus 2, and on this basis forms a true belief to this effect. Here we again clearly judge that S knows that stimulus 1 was longer in duration than stimulus 2. Crucially, the stochastic fallibility of this belief – in 1% of the trials for which stimulus 2 is longer, S will still perceive stimulus 1 to be longer due to random perceptual error – doesn't seem to have any bearing on our willingness to attribute knowledge. All that seems relevant is that her belief was formed on the basis of ordinary, reliable perception. As with the previous example, we can see that our judgements about knowledge do not in fact adhere to stochastic infallibilism. Moreover, this is of course not unique to any features of the Bernasconi et al. experiments or auditory perception, but rather might hold for any near-threshold[8] discrimination task, regardless of the modality of the stimuli. In short, insofar as we want an epistemic principle like stochastic infallibilism to reflect the conceptual contours of knowledge, it is clearly mistaken.[9]
For the last example of stochastically fallible perceptual knowledge, I want to move in the direction of both (i) everyday and (ii) coarser-grained knowledge. In order to do this, I'd like to shift away from counterfactuals and SDT, and instead invite you, the reader, to consider whether you have ever experienced a specific type of tactile hallucination: you felt that your phone was vibrating, only to check it and discover that it actually hadn't gone off. In all likelihood, you have. As it turns out, these “phantom vibrations” are not only surprisingly well documented, but also remarkably prevalent. A 2010 survey of medical professionals found that 68% reported having experienced the phenomenon (Rothberg et al. 2010), and a 2012 study reported the same for a whopping 90% of university students (Drouin et al. 2012). As these are proper hallucinations, this means that phantom vibrations can occur “in the absence of sensory stimulation”, i.e. without some other tactile stimulus being confused for a phone vibration (Drouin et al. 2012: 1491). You can have a phantom vibration even when you're just sitting there, doing nothing. However, at a purely conceptual level, we might note that this error possibility for tactile perception doesn't preclude our ability to know that our phone vibrated on the basis of tactile perception alone. Phantom vibrations are very low-probability errors, and our capacity to detect whether our phones vibrated is generally quite reliable. Accordingly, when our phones vibrate, we can of course know on the basis of our tactile perceptions of those vibrations that our phones did vibrate. Crucially, however, not only does it seem perfectly appropriate to conceptualise specific onsets of phantom vibrations as random, but I would contend that this is generally how we think of them.
Upon experiencing a phantom vibration, it would be rather unusual for one to spend much time worrying about its explanation. Unlike a visual or auditory hallucination, which would likely demand explanation, the phantom vibration might reasonably be regarded by those who experience it as something that occasionally just happens. That is, it might be viewed as random. The point here is that thinking about phantom vibrations as inherently random doesn't preclude our willingness to attribute knowledge on the basis of perceiving vibrations. In this manner, we might understand how such knowledge attributions violate stochastic infallibilism.
To close out this section, I want to say something about the general structure of these types of cases, in order to make clear exactly what the operative type of perceptual knowledge is here.[10] To start, we might note that these cases can all be framed as either some sort of detection task for p or a discrimination task between p and q, for which both the (i) hit rate (the probability that S believes that p given p) and (ii) correct rejection (CR) rate (the probability that S doesn't believe that p given not-p) are sufficiently high, with the CR rate less than 1. First, both the hit rate and the CR rate need to be sufficiently high in order for us to reasonably think that S's perceptual capacity is knowledge-conducive. Next, the CR rate needs to be less than 1 so that the false alarm rate is non-zero (FA = 1 – CR), i.e. there need to be some cases in which S believes that p when p is false. Finally, some of these false alarms need to be attributable to random perceptual error. The interplay between these conditions, specifically when they might simultaneously obtain, demands a bit more attention.
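Stated compactly, writing B_p for “S believes that p” (my notation), the conditions on such a detection/discrimination task are:

```latex
\begin{align*}
&\text{(i) hit rate: } && P(B_p \mid p) \text{ sufficiently high};\\
&\text{(ii) CR rate: } && P(\neg B_p \mid \neg p) \text{ sufficiently high, but } < 1;\\
&\text{(iii) FA rate: } && P(B_p \mid \neg p) = 1 - P(\neg B_p \mid \neg p) > 0,
\end{align*}
```

together with the requirement that at least some of these false alarms be attributable to random perceptual error.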
It seems likely that there is something of a Goldilocks zone for the grain of our epistemic evaluation such that S's perceptual knowledge that p might be stochastically fallible. On the one hand, if this evaluation is too fine-grained, then quite obviously S simply cannot know that p. For example, in the case that an auditory stimulus is exactly 100 dB, S likely will never know (assuming ordinary conditions) on the basis of perception that the stimulus she perceived was exactly 100 dB. We need our evaluation to be coarser-grained in order to actually attribute knowledge. However, on the other hand, if we make our evaluation too coarse-grained, then it is less plausible that her known belief that p might ever be stochastically false. For example, in the case that S knows that a 100-dB tone was played, while clearly there's no issue saying that S knew that the tone was played, now it seems we have a problem saying that S could have falsely believed that such a tone was played due simply to random perceptual error. Instead, for cases of stochastically fallible knowledge to be plausible, it seems like we need our grain of evaluation to be just right: neither so fine-grained as to preclude knowing nor so coarse-grained as to preclude stochastic error. The three examples discussed in this section were constructed so as to reside within this narrow grain band, relying on empirical evidence to make the case for stochastic fallibility.
We might speculate then that the narrowness of this set of circumstances from which stochastically fallible knowledge seems to arise, coupled with the role of empirical evidence in describing such circumstances, might be at least partially responsible for the prevailing attitude in epistemology that such cases cannot exist. What I have endeavoured to show in this paper, especially in the previous section, is that multiple lines of evidence all show in no uncertain terms that perception indeed displays a propensity for random, lottery-like error. This allows us to construct all manner of examples of stochastically fallible knowledge involving simple discrimination or detection. However, cases within the domain of signal detection theory – i.e. near-threshold – are especially potent, because there it is uncontroversial that discrimination/detection errors can just happen. When these errors are sufficiently frequent, we might of course question whether the associated perceptual beliefs can actually constitute knowledge. However, if we imagine cases in which error rates are comparable to lapses in attention, say around 1%,[11] we clearly judge that the error doesn't preclude knowledge. In this way, we can understand how these SDT-type examples clearly refute the idea that perceptual knowledge is stochastically infallible.
In conclusion, we might observe our willingness to attribute stochastically fallible knowledge in a narrow but robust range of cases. In the next section, I'll say a bit more on what these attributions mean, not only for stochastic infallibilism but for epistemology generally.
4. Implications
In this section I want to point out three key lessons from our tendency to attribute stochastically fallible knowledge: (1) Our tendency to think of perceptual knowledge as stochastically infallible is mistaken. (2) Perception isn't safe, and safety isn't a necessary condition on knowledge. (3) There is a danger to thinking about perceptual knowledge in isolation from relevant empirical features of perception. While I'll discuss each implication in turn, as they are all quite straightforward, I won't devote an excess of time to this task.
To begin, the most immediate consequence of our tendency to attribute stochastically fallible perceptual knowledge is that stochastic infallibilism is just wrong. This of course requires that stochastic infallibilism is taken as something of a descriptive principle for reliable, competent knowledge attribution, and not something so sacrosanct that it dictates a revision of our attributive practices. This seems obvious. Stochastic infallibilism isn't something like the factivity of knowledge, so integral to the fabric of epistemology that we might question the wisdom of discarding it in the name of retaining our knowledge-attributing practices. Especially considering that proper infallibilism has been roundly dismissed for anti-sceptical reasons, I cannot imagine why stochastic infallibilism shouldn't be regarded as wholly discardable. There is a second issue of whether the attributions highlighted in the previous section are indeed reliable, but setting this concern aside until the next section, it should be otherwise clear that perceptual knowledge is in fact not stochastically infallible.
Next, if perceptual knowledge is stochastically fallible, then it follows that knowledge is not safe. As discussed above, due to their fundamentally probabilistic nature, stochastic errors occur in the closest of possible worlds. This means that if S's knowledge that p is stochastically fallible, then there are very close possible worlds in which S falsely believes that p, and thus S's knowledge isn't safe. While the fact that there is a problem with the safety condition isn't the most important (or novel) lesson to be learned here, it is still worth pointing out. Beyond this, there might be a temptation to think of cognitive abilities generally, or perhaps perception specifically, as being safe, even if knowledge generally isn't. However, given the stochastic variability perception displays, this temptation is especially mistaken. It should be noted that all this of course assumes that the same belief-forming processes are employed both when S knows that p on the basis of perception and when S falsely believes that p due to random perceptual error. I will discuss a potential challenge to this assumption in the next section.
Finally, perhaps the most important lesson to learn here is that we cannot properly gauge the actual properties of cognitive abilities like perception in isolation from the wealth of empirical data on human cognition. Granted, intuitively there doesn't seem to be any problem with thinking, like Smith, “It can't be that I just misperceive” (Smith 2016: 51). Indeed, this was more or less the attitude of classical psychophysics, before the introduction of the SDT framework. The problem is that such an attitude is simply incorrect, which brings with it its own negative epistemological consequences. In this paper, the main consequence I've highlighted is a missed critique of safety, but the specifics aren't as important here as the general point. Empirical findings, like those about perception, can be highly relevant to epistemology, even for such a seemingly purely conceptual domain as the necessary conditions of knowledge. Accordingly, it is crucial that we as epistemologists seek out such empirical findings as might be relevant.
5. Objections
To close out this paper, I want to address three potential objections that I anticipate might be raised against the above account: (1) One might object to my characterisation of neural-noise-related error as “random”, arguing that the noise is itself an explanation; (2) one perhaps might maintain that the knowledge attributions highlighted in §3 are not reliable and accordingly pose no problem for stochastic infallibilism; and (3) one might question whether the same belief-forming processes are involved in both perceptual knowledge that p and the random misperception that p. I will address each potential objection in turn.
As discussed throughout this paper, the operative sense of “random” here is that random events just happen. If the possible outcomes of a given process are random, then no one possible outcome demands any more explanation than any other possible outcome of that process. Accordingly, the perceptual error possibilities described above are random in the relevant sense that there is no explanation that might be offered for the odd misperception that couldn't also be offered to explain a veridical perception. These perceptual errors just happen – or so I maintained. An obvious objection at this point is that such errors aren't actually random, because the explanation is the noise itself. Even if the noise arises from low-level stochastic processes, it is the presence of the noise that explains why a given perceptual error occurred. Thus, such errors cannot be said to be “random” in the “could just happen” sense, and the call to abandon stochastic infallibilism is unfounded.
As tempting as this sort of objection might be, it fundamentally misunderstands the nature of neural noise. While the magnitude of neural noise notably varies between individuals and perceptual contexts[12] (Arazi et al. 2017), it is in principle constant for a given person + stimulus + context, for both successful and unsuccessful perception. That is, it's not as if there's noise for perceptual error but no noise for successful perception. There's just always noise. Crucially, even when noise is held constant, e.g. same subject + stimulus + context, the subject will still misperceive for some trials. While the noise can explain the distribution of a subject's percepts, it doesn't offer any explanation for why any given trial elicited a perceptual error. In this way, we can understand how neural noise can result in perceptual errors that really do just happen.
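The point can be illustrated with a minimal simulation (a sketch, not a model of any particular experiment): the noise level σ is held perfectly constant across trials, yet whether a given trial yields a misperception is settled only by the random draw on that trial.

```python
import random
from statistics import NormalDist

random.seed(7)  # for reproducibility only; the point holds for any seed

sigma = 1.0                             # noise magnitude, identical on every trial
n = sigma * NormalDist().inv_cdf(0.99)  # true stimulus difference yielding a 1% error rate
trials = 100_000

# Each trial: perceived difference = true difference + Gaussian noise.
# An error occurs when the noise flips the sign of the perceived difference.
errors = sum(1 for _ in range(trials) if random.gauss(n, sigma) < 0)

print(f"error rate: {errors / trials:.4f}")  # close to the theoretical 0.01
```

The noise parameter is the same on every trial, error or no error; it explains the 1% error *rate*, but nothing about the noise distinguishes the particular trials on which errors occur.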
Moving on, one might accept that the sort of perceptual error described above really is random, but question whether the tendency to attribute knowledge in cases of stochastically fallible knowledge is reliable. To be clear, I simply take it as a given that there is a tendency to attribute knowledge in such cases. After all, these are cases of true beliefs formed on the basis of ordinary perceptual capacities in such conditions that those perceptual capacities are highly reliable. If the objector is unwilling to concede at least an impetus to attribute knowledge here, then it's no longer clear that we're talking about knowledge at all. However, one might more reasonably question whether such an impetus towards knowledge attribution, at least here, is reliable. That is, perhaps we are mistaken in thinking that Mia really can know that her ticket was a loser (or that S really can know that stimulus 1 was longer than stimulus 2, or that we really can know when our phones vibrate) on the basis of stochastically fallible perceptions. While I cannot definitively eliminate unreliable knowledge attribution as a possibility, I do think that this is rather unlikely. The reason for this is that our tendency to attribute knowledge on the basis of stochastically fallible perception is nothing like those patterns of knowledge attribution that have been identified as (perhaps) unreliable.
The paradigm example of potentially unreliable patterns of knowledge attribution comes from the contextualism v. invariantism debate. Although a matter of much controversy, psychology-minded epistemologists such as Nagel and Gerken have argued that certain patterns of knowledge attribution are indeed unreliable (see e.g. Nagel 2010; Gerken 2017). Critically, these arguments of unreliability hinge on the assertion that the attribution patterns in question are products of specific cognitive biases. However, it's not at all clear how such an argument would go for attributions of stochastically fallible knowledge. These are cases of knowledge attribution for true beliefs formed on the basis of reliable perceptual capacities, and it seems quite difficult to specify what the cognitive bias (or perhaps other problem) might be. Without such specifics, the objection doesn't strike me as especially threatening.
Finally, one might have an outstanding concern regarding the belief-forming processes involved. In order for stochastically fallible knowledge to pose a problem for safety, it is necessary that S's beliefs that p form according to the same processes in both the case of perceptual knowledge and random misperception. This is simply part of what it means for S's false belief that p to inhabit worlds close to those in which S knows that p. Accordingly, one might challenge my argument against safety by maintaining that the process by which (i) S is presented with a stimulus X and knows that she was presented with X differs from the process by which (ii) S is not presented with stimulus X but comes to believe that she was due to random perceptual error. In order for stochastically fallible knowledge to present a problem for safety, we need at least some reason to think that this sort of process difference is implausible.
The task of individuating belief-forming processes presents as an especially thorny one, so here I'll elect to bypass this task as much as possible by opting to individuate such processes modally. That is, if sufficiently little needs to change from a world in which the belief that p forms according to process A to a world in which the belief that p forms according to process B, then process A = process B. Now, this likely will strike some as a suboptimal method for individuation, and generally speaking this is probably right. If we want our divisions to correspond with some actual ontology of belief-forming processes, I doubt that this is the way to go. However, it is important to keep in mind that the very reason why we're talking about this at all here is not due to some genuine interest in what makes some belief-forming processes the same and others different. We just want to ensure that two different beliefs inhabit sufficiently close worlds. My proposal then is that we cut to the chase and simply discuss whether, in cases like those discussed in §3, the processes by which the perceptual knowledge that p forms are sufficiently modally similar to the processes by which the randomly false beliefs that p form.
In order to complete this task, we then need to ask the following: What needs to change in S's belief-forming process in order to get from a world in which she knows that p on the basis of her perception to the closest worlds in which she falsely believes that p on the basis of a random misperception? The answer to this question, in brief, is that very little needs to change. First, at the cognitive level, it doesn't seem that there is any change at all. In both cases, S simply forms her beliefs according to ordinary perception, and the stochastic nature of her error means that there need not be any cognitive changes in S between the knowledge world and the error world – attention, executive function, etc., can all remain fixed. The only changes we might observe will be at the neuronal level – certain ion channels/synapses will open/fire differently. However, again the stochastic nature of S's perceptual error means that nothing else needs to happen in her brain in order for these changes to occur. There need not be any changes in the connectivity of S's neural circuits; she need not ingest some external substance that modulates the release of certain neurotransmitters (e.g. a hallucinogen); there need not be any change in her neural function at all! Because the error derives from random fluctuations in neuronal behaviour, the patterns of activation required for such an error can just happen. In this manner, we see that, especially if we individuate belief-forming processes modally, it seems implausible to suggest that S is employing different belief-forming processes when she knows that p due to perception and falsely believes that p due to random perceptual error.
6. Conclusion
In short, perceptual knowledge is stochastically fallible: S can know that p on the basis of her perception even if that perceptual belief could have been randomly false. This is highlighted by considering the randomly variable nature of perception, which is ultimately neuronal in origin. Not only does this have key implications for how we think about perception in epistemology, as well as safety, but it underscores the invaluable role empirical findings have to play in epistemological practice.