
Suboptimality in perceptual decision making

Published online by Cambridge University Press:  27 February 2018

Dobromir Rahnev
Affiliation:
School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332. drahnev@gmail.com rahnevlab.gatech.edu
Rachel N. Denison
Affiliation:
Department of Psychology and Center for Neural Science, New York University, New York, NY 10003. rachel.denison@nyu.edu racheldenison.com

Abstract

Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior – rather than assessing optimality per se – should be among the major goals of the science of perceptual decision making.

Type
Target Article
Copyright
Copyright © Cambridge University Press 2018 

1. Introduction

How do people make perceptual judgments based on the available sensory information? This fundamental question has been a focus of psychological research from the nineteenth century onward (Fechner 1860; Helmholtz 1856). Many perceptual tasks naturally lend themselves to what has traditionally been called “ideal observer” analysis, whereby the optimal behavior is mathematically determined given a set of assumptions such as the presence of sensory noise, and human behavior is compared to this standard (Geisler 2011; Green & Swets 1966; Ulehla 1966). The extensive literature on this topic includes many examples of humans performing similarly to an ideal observer but also many examples of suboptimal behavior. Perceptual science has a strong tradition of developing models and theories that attempt to account for the full range of empirical data on how humans perceive (Macmillan & Creelman 2005).

Recent years have seen an impressive surge of Bayesian theories of human cognition and perception (Gershman et al. 2015; Griffiths et al. 2015; Tenenbaum et al. 2011). These theories often depict humans as optimal decision makers, especially in the area of perception. A number of high-profile papers have shown examples of human perceptual behavior that is close to optimal (Ernst & Banks 2002; Körding & Wolpert 2004; Landy et al. 1995; Shen & Ma 2016), whereas other papers have attempted to explain apparently suboptimal behaviors as being in fact optimal (Weiss et al. 2002). Consequently, many statements by researchers in the field leave the impression that humans are essentially optimal in perceptual tasks:

Psychophysics is providing a growing body of evidence that human perceptual computations are “Bayes’ optimal.” (Knill & Pouget 2004, p. 712)

Across a wide range of tasks, people seem to act in a manner consistent with optimal Bayesian models. (Vul et al. 2014, p. 1)

These studies with different approaches have shown that human perception is close to the Bayesian optimal. (Körding & Wolpert 2006, p. 321)

Despite a number of recent criticisms of such assertions regarding human optimality (Bowers & Davis 2012a; 2012b; Eberhardt & Danks 2011; Jones & Love 2011; Marcus & Davis 2013; 2015), as well as statements from some of the most prominent Bayesian theorists that their goal is not to demonstrate optimality (Goodman et al. 2015; Griffiths et al. 2012), the previous quotes indicate that the view that humans are (close to) optimal when making perceptual decisions has taken a strong foothold.

The main purpose of this article is to counteract assertions about human optimality by bringing together the extensive literature on suboptimal perceptual decision making. Although the description of the many findings of suboptimality will occupy a large part of the article, we do not advocate for a shift of labeling observers from “optimal” to “suboptimal.” Instead, we will ultimately argue that we should abandon any emphasis on optimality or suboptimality and return to building a science of perception that attempts to account for all types of behavior.

The article is organized into six sections. After introducing the topic (sect. 1), we explain the Bayesian approach to perceptual decision making and explicitly define a set of standard assumptions that typically determine what behavior is considered optimal (sect. 2). In the central section of the article, we review the vast literature of suboptimal perceptual decision making and show that suboptimalities have been reported in virtually every class of perceptual tasks (sect. 3). We then discuss theoretical problems with the current narrow focus on optimality, such as difficulties in defining what is truly optimal and the limited value of optimality claims in and of themselves (sect. 4). Finally, we argue that the way forward is to build observer models that give equal emphasis to all components of perceptual decision making, not only the decision rule (sect. 5). We conclude that the field should abandon its emphasis on optimality and instead focus on thoroughly testing the hypotheses that have already been generated (sect. 6).

2. Defining optimality

Optimality can be defined within many frameworks. Here we adopt a Bayesian approach because it is widely used in the field and it is general: other approaches to optimality can often be expressed in Bayesian terms.

2.1. The Bayesian approach to perceptual decision making

The Bayesian approach to perceptual decision making starts with specifying the generative model of the task. The model defines the sets of world states, or stimuli, $\mathcal{S}$, internal responses $\mathcal{X}$, actions $\mathcal{A}$, and relevant parameters $\Theta$ (such as the sensitivity of the observer). We will mostly focus on cases in which two possible stimuli $s_1$ and $s_2$ are presented, and the possible “actions” $a_1$ and $a_2$ are reporting that the corresponding stimulus was shown. The Bayesian approach then specifies the following quantities (see Fig. 1 for a graphical depiction):

  • Likelihood function. An external stimulus can produce a range of internal responses. The measurement density, or distribution, p(x|s, θ) is the probability density of obtaining an internal response x given a particular stimulus s. The likelihood function l(s|x, θ) is equal to the measurement density but is defined for a fixed internal response as opposed to a fixed stimulus.

  • Prior. The prior π(s) describes one's assumptions about the probability of each stimulus s.

  • Cost function. The cost function $\mathcal{L}(s,a)$ (also called the loss function) specifies the cost of taking a specific action for a specific stimulus.

  • Decision rule. The decision rule δ(x) specifies which action should be taken for a given internal response x, given the likelihood, prior, and cost function.

Figure 1. Graphical depiction of Bayesian inference. An observer is deciding between two possible stimuli – $s_1$ (e.g., leftward motion) and $s_2$ (e.g., rightward motion) – which produce Gaussian measurement distributions of internal responses. The observer's internal response varies from trial to trial, depicted by the three yellow circles for three example trials. On a given trial, the likelihood function is equal to the height of each of the two measurement densities at the value of the observed internal response (lines drawn from each yellow circle) – that is, the likelihood of each stimulus given an internal response. For illustrative purposes, a different experimenter-provided prior and cost function are assumed on each trial. The action $a_i$ corresponds to choosing stimulus $s_i$. We obtain the expected cost of each action by multiplying the likelihood, prior, and cost corresponding to each stimulus and then summing the costs associated with the two possible stimuli. The optimal decision rule is to choose the action with the lower expected cost (the bar with less negative values). In trial 1, the prior and cost function are unbiased, so the optimal decision depends only on the likelihood function. In trial 2, the prior is biased toward $s_2$, making $a_2$ the optimal choice even though $s_1$ is slightly more likely. In trial 3, the cost function favors $a_1$, but the much higher likelihood of $s_2$ makes $a_2$ the optimal choice.

We refer to the likelihood function, prior, cost function, and decision rule as the LPCD components of perceptual decision making.

According to Bayesian decision theory (Körding & Wolpert 2006; Maloney & Mamassian 2009), the optimal decision rule is to choose the action a that minimizes the expected loss over all possible stimuli. Using Bayes’ theorem, we can derive the optimal decision rule as a function of the likelihood, prior, and cost function:

$$\delta(x) = \mathop{\rm argmin}\limits_{a \in \mathcal{A}} \sum_{s \in \mathcal{S}} l(s \vert x, \theta) \, \pi(s) \, \mathcal{L}(s, a).$$
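As a concrete illustration, the decision rule above can be sketched in a few lines of code. This is a minimal sketch with hypothetical parameters of our choosing (Gaussian measurement distributions centered at ±1, a uniform prior, and a 0/1 cost function), not values from any particular study:

```python
import math

def gauss_pdf(x, mu, sigma=1.0):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative generative model: measurement distributions centered at -1 (s1)
# and +1 (s2), a uniform prior, and a cost function that punishes all errors equally.
means = {"s1": -1.0, "s2": +1.0}
prior = {"s1": 0.5, "s2": 0.5}
cost = {("s1", "a1"): 0, ("s1", "a2"): 1,
        ("s2", "a1"): 1, ("s2", "a2"): 0}

def expected_cost(a, x):
    """Sum likelihood x prior x cost over the possible stimuli."""
    return sum(gauss_pdf(x, means[s]) * prior[s] * cost[(s, a)] for s in means)

def decision(x):
    """Optimal decision rule: the action with the lower expected cost."""
    return min(("a1", "a2"), key=lambda a: expected_cost(a, x))

print(decision(-0.3))  # internal response nearer the s1 mean, so "a1"
```

With the symmetric prior and cost function used here, the rule reduces to choosing the stimulus with the higher likelihood; biasing the prior or the costs would shift the decision accordingly.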

2.2. Standard assumptions

Determining whether observers’ decisions are optimal requires the specification of the four LPCD components. How do researchers determine the quantitative form of each component? The following is a typical set of standard assumptions related to each LPCD component:

  • Likelihood function assumptions. The standard assumptions here include Gaussian measurement distributions and stimulus encoding that is independent of other factors such as stimulus presentation history. Note that the experimenter derives the likelihood function from the assumed measurement distributions.

  • Prior and cost function assumptions. The standard assumption about observers’ internal representations of the prior and cost function is that they are identical to the quantities defined by the experimenter. Unless specifically mentioned, the experiments reviewed subsequently here present $s_1$ and $s_2$ equally often, which is equivalent to a uniform prior (e.g., $\pi(s_i) = \frac{1}{2}$ when there are two stimuli), and expect observers to maximize percent correct, which is equivalent to a cost function that punishes all incorrect responses, and rewards all correct responses, equally.

  • Decision rule assumptions. The standard assumption about the decision rule is that it is identical to the optimal decision rule.

Finally, additional general standard assumptions include expectations that observers can perform the proper computations on the LPCD components. Note that as specified, the standard assumptions consider Gaussian variability at encoding as the sole corrupting element for perceptual decisions. Section 3 assembles the evidence against this claim.
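Under the standard assumptions (equal-variance Gaussian measurement distributions, uniform prior, accuracy-based cost function), the optimal decision rule reduces to a fixed criterion at the midpoint of the two distribution means. A minimal numerical check, with illustrative means of ±1 chosen by us:

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def accuracy(criterion, mu1, mu2, sigma=1.0):
    """Percent correct under a uniform prior: the average of
    P(x < criterion | s1) and P(x > criterion | s2)."""
    return 0.5 * Phi((criterion - mu1) / sigma) + 0.5 * (1 - Phi((criterion - mu2) / sigma))

# With mu1 = -1 and mu2 = +1, accuracy peaks at the midpoint criterion (0.0):
for c in (-0.5, 0.0, 0.5):
    print(c, round(accuracy(c, -1, 1), 4))
```

Any criterion away from the midpoint lowers expected accuracy, which is why deviations from it are counted as suboptimal in the review that follows.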

The attentive reader may object that the standard assumptions cannot be universally true. For example, assumptions related to the likelihood function are likely false for specific paradigms (e.g., measurement noise may not be Gaussian), and assumptions about observers adopting the experimentally defined prior and cost function are likely false for complex experimental designs (Beck et al. 2012). Nevertheless, we take the standard assumptions as a useful starting point for our review because, explicitly or implicitly, they are assumed in most (although not all) studies. In section 3, we label all deviations from behavior prescribed by the standard assumptions as examples of suboptimality. We discuss alternative ways of defining optimality in section 4 and ultimately argue that general statements about the optimality or suboptimality of perceptual decisions are meaningless.

3. Review of suboptimality in perceptual decision making

We review eight categories of tasks for which the optimal decision rule can be determined. For each task category, we first note any relevant information about the measurement distribution, prior, or cost function. We plot the measurement distributions together with the optimal decision rule (which we depict as a criterion drawn on the internal responses $\mathcal{X}$). We then review specific suboptimalities within each task category. For each explanation of apparently suboptimal behavior, we indicate the standard LPCD components proposed to have been violated using the notation [LPCD component], such as [decision rule]. Note that violations of the assumed measurement distributions result in violations of the assumed likelihood functions. In some cases, suboptimalities have been attributed to issues that apply to multiple components (indicated as [general]) or issues of methodology (indicated as [methodological]).

3.1. Criterion in two-choice tasks

In the most common case, observers must distinguish between two possible stimuli, $s_1$ and $s_2$, presented with equal probability and associated with equal reward. In Figure 2, we plot the measurement distributions and optimal criteria for the cases of equal and unequal internal variability. The criterion used to make the decision corresponds to the decision rule.

Figure 2. Depiction of the measurement distributions (colored curves) and optimal criteria (equivalent to the decision rules) in two-choice tasks. The upper panel depicts the case when the two stimuli produce the same internal variability ($\sigma_1 = \sigma_2$, where σ is the standard deviation of the Gaussian measurement distribution). The gray vertical line represents the location of the optimal criterion. The lower panel shows the location of the optimal criterion when the variability of the two measurement distributions differs ($\sigma_1 < \sigma_2$, in which case the optimal criterion results in a higher proportion of $s_1$ responses).

3.1.1. Detection criteria

Many tasks involve the simple distinction between noise ($s_1$) and signal + noise ($s_2$). These are usually referred to as detection tasks. In most cases, $s_1$ is found to produce smaller internal variability than $s_2$ (Green & Swets 1966; Macmillan & Creelman 2005; Swets et al. 1961), from which it follows that an optimal observer would choose $s_1$ more often than $s_2$ even when the two stimuli are presented at equal rates (Fig. 2). Indeed, many detection studies find that observers choose the noise distribution $s_1$ more than half of the time (Gorea & Sagi 2000; Green & Swets 1966; Rahnev et al. 2011b; Reckless et al. 2014; Solovey et al. 2015; Swets et al. 1961). However, most studies do not allow for the estimation of the exact measurement distributions for individual observers, and hence it is an open question how optimal observers in those studies actually are. A few studies have reported conditions in which observers choose the noise stimulus $s_1$ less than half of the time (Morales et al. 2015; Rahnev et al. 2011b; Solovey et al. 2015). Assuming that the noise distributions in those studies also had lower variability, such behavior is likely suboptimal.
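The prediction that an optimal observer favors the noise stimulus when it has lower variability can be checked with a small simulation. The specific means and standard deviations below are illustrative assumptions of ours, not values from any of the cited studies:

```python
import math, random

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical detection task: the noise stimulus s1 (mu = 0, sigma = 1) produces
# lower internal variability than signal + noise s2 (mu = 1, sigma = 1.5);
# both are presented equally often.
random.seed(1)
n = 100_000
s1_responses = 0
for _ in range(n):
    s = random.choice(("s1", "s2"))
    x = random.gauss(0, 1) if s == "s1" else random.gauss(1, 1.5)
    # Optimal rule under a uniform prior and accuracy cost: pick the more likely stimulus.
    if gauss_pdf(x, 0, 1) > gauss_pdf(x, 1, 1.5):
        s1_responses += 1
frac = s1_responses / n
print(frac)  # above 0.5: the optimal observer chooses the noise stimulus more often
```

The bias arises because the narrower noise distribution wins the likelihood comparison over a wide central range of internal responses.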

3.1.2. Discrimination criteria

Detection tasks require observers to distinguish between the noise versus signal + noise stimuli, but other tasks require observers to discriminate between two roughly equivalent stimuli. For example, observers might discriminate leftward versus rightward motion or clockwise versus counterclockwise grating orientation. For these types of stimuli, the measurement distributions for each stimulus category can be assumed to have approximately equal variability (Macmillan & Creelman 2005; See et al. 1997). Such studies find that the average criterion location across the whole group of observers is usually close to optimal, but individual observers can still exhibit substantial biases (e.g., Whiteley & Sahani 2008). In other words, what appears to be an optimal criterion on average (across observers) may be an average of suboptimal criteria (Mozer et al. 2008; Vul et al. 2014). This issue can also arise within an individual observer, with suboptimal criteria on different trials averaging out to resemble an optimal criterion (see sect. 3.2). To check for criterion optimality within individual observers, we re-analyzed the data from a recent study in which observers discriminated between a grating tilted 45 degrees clockwise or counterclockwise from vertical (Rahnev et al. 2016). Seventeen observers each completed four sessions on different days, with 480 trials per session. Using a binomial test, we found that 57 of the 68 total sessions exhibited significant deviation from unbiased responding. Further, observers tended to have relatively stable biases, as demonstrated by positive criterion correlations across all pairs of sessions (all p's < .003). Hence, even if the performance of the group appears to be close to optimal, individual observers may deviate substantially from optimality.
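A binomial test of the kind used in this re-analysis can be sketched as follows; the 270-of-480 response count is a hypothetical example of ours, not data from the study:

```python
from math import comb

def binomial_test_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all outcomes
    that are no more probable than the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))

# Hypothetical session: 270 "clockwise" responses out of 480 trials.
p_value = binomial_test_two_sided(270, 480)
print(p_value < .05)  # True: significant deviation from unbiased responding
```

Even a modest bias (here roughly 56% of responses to one side) is reliably detectable with 480 trials per session.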

3.1.3. Two-stimulus tasks

The biases observed in detection and discrimination experiments led to the development of the two-alternative forced-choice (2AFC) task, in which both stimulus categories are presented on each trial (Macmillan & Creelman 2005). The 2AFC tasks separate the two stimuli either temporally (also referred to as two-interval forced-choice or 2IFC tasks) or spatially. Note that, in recent years, researchers have begun to use the term “2AFC” for two-choice tasks in which only one stimulus is presented. To avoid confusion, we adopt the term “two-stimulus tasks” to refer to tasks where two stimuli are presented (the original meaning of 2AFC) and the term “one-stimulus tasks” to refer to tasks like single-stimulus detection and discrimination (e.g., the tasks discussed in sects. 3.1.1 and 3.1.2).

Even though two-stimulus tasks were designed to remove observer bias, significant biases have been observed for them, too. Although biases in spatial 2AFC tasks have received less attention, several suboptimalities have been documented for 2IFC tasks. For example, early research suggested that the second stimulus is more often selected as the one of higher intensity, a phenomenon called time-order errors (Fechner 1860; Osgood 1953). More recently, Yeshurun et al. (2008) re-analyzed 2IFC data from 17 previous experiments and found significant interval biases. The direction of the bias varied across the different experiments, suggesting that the specific experimental design has an influence on observers’ bias.

3.1.4. Explaining suboptimality in two-choice tasks

Why do people appear to have trouble setting appropriate criteria in two-choice tasks? One possibility is that they have a tendency to give the same fixed response when uncertain [decision rule]. For example, a given observer may respond that he saw left (rather than right) motion every time he gets distracted or has very low evidence for either choice. This could be because of a preference for one of the two stimuli or one of the two motor responses. Re-analysis of another previous study (Rahnev et al. 2011a), in which we withheld the stimulus-response mapping until after the stimulus presentation, found that 12 of the 21 observers still showed a significant response bias for motion direction. Therefore, a preference in motor behavior cannot fully account for this type of suboptimality.

Another possibility is that for many observers even ostensibly “equivalent” stimuli such as left and right motion give rise to measurement distributions with unequal variance [likelihood function]. In that case, an optimal decision rule would produce behavior that appears biased. Similarly, in two-stimulus tasks, it is possible that the two stimuli are not given the same resources or that the internal representations for each stimulus are not independent of each other [likelihood function]. Finally, in the case of detection tasks, it is possible that some observers employ an idiosyncratic cost function by treating misses as less costly than false alarms because the latter can be interpreted as lying [cost function].

3.2. Maintaining stable criteria

So far, we have considered the optimality of the decision rule when all trials are considered together. We now turn our attention to whether observers’ decision behavior varies across trials or conditions (Fig. 3).

Figure 3. Depiction of a failure to maintain a stable criterion. The optimal criterion is shown in Figure 2, but observers often fail to maintain that criterion over the course of the experiment, resulting in a criterion that effectively varies across trials. Colored curves show measurement distributions.

3.2.1. Sequential effects

Optimality in laboratory tasks requires that judgments are made based on the evidence from the current stimulus, independent of previous stimuli. However, sequential effects are ubiquitous in perceptual tasks (Fischer & Whitney 2014; Fründ et al. 2014; Kaneko & Sakai 2015; Liberman et al. 2014; Norton et al. 2017; Tanner et al. 1967; Treisman & Faulkner 1984; Ward & Lockhead 1970; Yu & Cohen 2009). The general finding is that observers’ responses are positively autocorrelated such that the response on the current trial is likely to be the same as on the previous trial, though in some cases negative autocorrelations have also been reported (Tanner et al. 1967; Ward & Lockhead 1970). Further, observers are able to adjust to new trial-to-trial statistics, but this adjustment is strong only in the direction of default biases and weak in the opposite direction (Abrahamyan et al. 2016). Similar effects have been observed in other species such as mice (Busse et al. 2011).
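The positive autocorrelation of responses reported in these studies can be quantified with a lag-1 correlation. The sketch below simulates a hypothetical observer (the 20% repetition rate is our assumption, purely for illustration) and measures the resulting autocorrelation:

```python
import random

def lag1_autocorrelation(responses):
    """Pearson correlation between the response on trial t and on trial t + 1."""
    x, y = responses[:-1], responses[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical observer: repeats the previous response on 20% of trials,
# otherwise responds at random (0 or 1).
random.seed(0)
resp = [random.randint(0, 1)]
for _ in range(9999):
    resp.append(resp[-1] if random.random() < 0.2 else random.randint(0, 1))
r = lag1_autocorrelation(resp)
print(r > 0)  # True: responses are positively autocorrelated
```

For an unbiased observer making independent decisions about random stimuli, this correlation should be close to zero; a reliably positive value is the signature of the sequential effects described above.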

3.2.2. Criterion attraction

Interleaving trials that require different criteria also hinders optimal criterion placement. Gorea and Sagi (2000) proposed that when high-contrast stimuli (optimally requiring a relatively conservative detection criterion) and low-contrast stimuli (optimally requiring a relatively liberal detection criterion) were presented simultaneously, observers used a single compromise detection criterion that was suboptimal for both the high- and low-contrast stimuli. This was despite the fact that, on each trial, observers were told with 100% certainty which contrasts might have been present in each location. Similar criterion attraction has been proposed in a variety of paradigms that involved using stimuli of different contrasts (Gorea & Sagi 2001; 2002; Gorea et al. 2005; Zak et al. 2012), attended versus unattended stimuli (Morales et al. 2015; Rahnev et al. 2011b), and central versus peripheral stimuli (Solovey et al. 2015). Although proposals of criterion attraction consider the absolute location of the criterion on the internal decision axis, recent work has noted the methodological difficulties of recovering absolute criteria in signal detection tasks (Denison et al. 2018).

3.2.3. Irrelevant reward influencing the criterion

The optimal decision rule is insensitive to multiplicative changes to the cost function. For example, rewarding all correct responses with $0.01 versus $0.03, while incorrect responses receive $0, should not alter the decision criterion; in both cases, the optimal decision rule is the one that maximizes percent correct. However, greater monetary rewards or punishments lead observers to adopt a more liberal detection criterion such that more stimuli are identified as targets (Reckless et al. 2013; 2014). Similar changes to the response criterion because of monetary motivation are obtained in a variety of paradigms (Henriques et al. 1994; Taylor et al. 2004). To complicate matters, observers’ personality traits interact with the type of monetary reward in altering response criteria (Markman et al. 2005).

3.2.4. Explaining suboptimality in maintaining stable criteria

Why do people appear to shift their response criteria based on factors that should be irrelevant for criterion placement? Sequential effects are typically explained in terms of an automatic tendency to exploit the continuity in our normal environment, even though such continuity is not present in most experimental setups (Fischer & Whitney 2014; Fritsche et al. 2017; Liberman et al. 2014). The visual system could have built-in mechanisms that bias new representations toward recent ones [likelihood function], or it may assume that a new stimulus is likely to be similar to a recent one [prior]. (Note that the alternative likelihoods or priors would need to be defined over pairs or sequences of trials.) Adopting a prior that the environment is autocorrelated may be a good strategy for maximizing reward: Environments typically are autocorrelated and, if they are not, such a prior may not hurt performance (Yu & Cohen 2009).

Criterion attraction may stem from difficulty maintaining two separate criteria simultaneously. This is equivalent to asserting that in certain situations observers cannot maintain a more complicated decision rule (e.g., different criteria for different conditions) and instead use a simpler one (e.g., single criterion for all conditions) [decision rule]. It is harder to explain why personality traits or task features such as increased monetary rewards (that should be irrelevant to the response criterion) change observers’ criteria.

3.3. Adjusting choice criteria

Two of the most common ways to assess optimality in perceptual decision making are to manipulate the prior probabilities of the stimulus classes and to provide unequal payoffs that bias responses toward one of the stimulus categories (Macmillan & Creelman 2005). Manipulating prior probabilities affects the prior π(s), whereas manipulating payoffs affects the cost function $\mathcal{L}(s,a)$. However, the two manipulations have an equivalent effect on the optimal decision rule: Both require observers to shift their decision criterion by a factor dictated by the specific prior probability or reward structure (Fig. 4).

Figure 4. Depiction of optimal adjustment of choice criteria. In addition to the $s_1$ and $s_2$ measurement distributions (thin red and blue lines), the figure shows the corresponding posterior probabilities as a function of x assuming a uniform prior (thick red and blue lines). The vertical criteria depict optimal criterion locations on x (thin gray lines) and correspond to the horizontal thresholds (thick yellow lines). The optimal criterion and threshold for equal prior probabilities and payoffs are shown in dashed lines. If unequal prior probability or unequal payoff is provided such that $s_1$ ought to be chosen three times as often as $s_2$, then the threshold would optimally be shifted to 0.75, corresponding to a shift in the criterion such that the horizontal threshold and vertical criterion intersect on the $s_2$ posterior probability function. The y-axis is probability density for the measurement distributions and probability for the posterior probability functions; the y-axis ticks refer to the posterior probability.
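The equivalence of prior and payoff manipulations follows because both enter the optimal decision rule only through a single likelihood-ratio threshold. A sketch under the standard equal-variance Gaussian assumptions, with illustrative means of ±1 chosen by us:

```python
import math

def lr_threshold(prior1, cost):
    """Likelihood-ratio threshold beta: respond a2 whenever l(s2|x)/l(s1|x) > beta.
    cost[(s, a)] is the loss for taking action a when stimulus s was shown."""
    num = prior1 * (cost[("s1", "a2")] - cost[("s1", "a1")])
    den = (1 - prior1) * (cost[("s2", "a1")] - cost[("s2", "a2")])
    return num / den

def criterion(beta, mu1, mu2, sigma=1.0):
    """Criterion on the internal axis equivalent to the threshold beta,
    for equal-variance Gaussian measurement distributions."""
    return (mu1 + mu2) / 2 + sigma**2 * math.log(beta) / (mu2 - mu1)

zero_one = {("s1", "a1"): 0, ("s1", "a2"): 1,
            ("s2", "a1"): 1, ("s2", "a2"): 0}
print(criterion(lr_threshold(0.5, zero_one), -1, 1))   # equal priors: midpoint, 0.0
print(criterion(lr_threshold(0.75, zero_one), -1, 1))  # 3:1 prior: shifts toward s2
```

A 3:1 prior and a 3:1 payoff asymmetry produce the same threshold β = 3 and hence the same optimal criterion shift; multiplying all payoffs by a constant leaves β, and thus the criterion, unchanged.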

3.3.1. Priors

Two main approaches have been used to determine whether observers can optimally adjust their criterion when one of two stimuli has a higher probability of occurrence. In base-rate manipulations, long blocks of the same occurrence frequency are employed, and observers are typically not informed of the probabilities of occurrence in advance (e.g., Maddox 1995). Most studies find that observers adjust their criterion to account for the unequal base rate, but this adjustment is smaller than what is required for optimal performance, resulting in a conservative criterion placement (Bohil & Maddox 2003b; Green & Swets 1966; Maddox & Bohil 2001; 2003; 2005; Maddox & Dodd 2001; Maddox et al. 2003; Tanner 1956; Tanner et al. 1967; Vincent 2011). Some studies have suggested that observers become progressively more suboptimal as the base rate becomes progressively more extreme (Bohil & Maddox 2003b; Green & Swets 1966). However, a few studies have reported that certain conditions result in extreme criterion placement such that observers rely more on base rate information than is optimal (Maddox & Bohil 1998b).

A second way to manipulate the probability of occurrence is to do so on a trial-by-trial basis and explicitly inform observers about the stimulus probabilities before each trial. This approach also leads to conservative criterion placement such that observers do not shift their criterion enough (Ackermann & Landy 2015; de Lange et al. 2013; Rahnev et al. 2011a; Summerfield & Koechlin 2010; Ulehla 1966).
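For equal-variance Gaussian signal detection theory, the optimal criterion under unequal priors or payoffs has a closed form, which makes the "conservative shift" described above quantifiable. The sketch below is illustrative only; the function name and parameterization are ours, not taken from any cited study:

```python
import math

def optimal_criterion(d_prime, p_s1, payoff_ratio=1.0):
    """Optimal decision criterion for equal-variance Gaussian SDT,
    measured from the midpoint between the s1 and s2 measurement
    distributions. A positive value shifts the criterion toward s2,
    producing more "s1" responses. payoff_ratio is the value of a
    correct "s1" response relative to a correct "s2" response."""
    beta = (p_s1 / (1.0 - p_s1)) * payoff_ratio  # optimal likelihood ratio at the criterion
    return math.log(beta) / d_prime

# Equal priors and payoffs: the criterion stays at the midpoint.
print(optimal_criterion(d_prime=1.5, p_s1=0.5))  # 0.0

# s1 three times as likely as s2 (as in Fig. 4): shift by ln(3)/d'.
print(optimal_criterion(d_prime=1.5, p_s1=0.75))
```

On this account, the empirical finding of conservatism is simply that observers' measured criteria fall between zero and the value this formula prescribes.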

3.3.2. Payoffs

The decision criterion can also be manipulated by giving different payoffs for different responses. The general finding with this manipulation is that observers, again, do not adjust their criterion enough (Ackermann & Landy 2015; Bohil & Maddox 2001; 2003a; 2003b; Busemeyer & Myung 1992; Maddox & Bohil 1998a; 2000; 2001; 2003; 2005; Maddox & Dodd 2001; Maddox et al. 2003; Markman et al. 2005; Taylor et al. 2004; Ulehla 1966) and, as with base rates, become more suboptimal for more extreme payoffs (Bohil & Maddox 2003b). Nevertheless, one study that involved a very large number of sessions with two monkeys reported extreme criterion changes (Feng et al. 2009).

Criterion adjustments in response to unequal payoffs are usually found to be more suboptimal than adjustments in response to unequal base rates (Ackermann & Landy 2015; Bohil & Maddox 2001; 2003a; Busemeyer & Myung 1992; Healy & Kubovy 1981; Maddox 2002; Maddox & Bohil 1998a; Maddox & Dodd 2001), though the opposite pattern was found by Green and Swets (1966).

Finally, the exact payoff structure may also influence observers’ optimality. For example, introducing a cost for incorrect answers leads to more suboptimal criterion placement compared with conditions that have the same optimal criterion shift but no cost for incorrect answers (Maddox & Bohil 2000; Maddox & Dodd 2001; Maddox et al. 2003).

3.3.3. Explaining suboptimality in adjusting choice criteria

Why do people appear not to adjust their decision criteria optimally in response to priors and rewards? One possibility is that they do not have an accurate internal representation of the relevant probability implied by the prior or reward structure [general] (Acerbi et al. 2014b; Ackermann & Landy 2015; Zhang & Maloney 2012). For example, Zhang and Maloney (2012) argued for the presence of “ubiquitous log odds” that systematically distort people’s probability judgments such that small values are overestimated and large values are underestimated (Brooke & MacRae 1977; Juslin et al. 2009; Kahneman & Tversky 1979; Varey et al. 1990).

A possible explanation for the suboptimality in base-rate experiments is the “flat-maxima” hypothesis, according to which the observer adjusts the decision criterion based on the change in reward and has trouble finding its optimal value if other criterion positions result in similar reward rates [methodological] (Bohil & Maddox 2003a; Busemeyer & Myung 1992; Maddox & Bohil 2001; 2003; 2004; 2005; Maddox & Dodd 2001; Maddox et al. 2003; von Winterfeldt & Edwards 1982). Another possibility is that the prior observers adopt in base-rate experiments comes from a separate process of Bayesian inference. If observers are uncertain about the true base rate, a prior assumption that it is likely to be unbiased would result in insufficient base-rate adjustment [methodological]. A central tendency bias can also arise when observers form a prior based on the sample of stimuli they have encountered so far, which is unlikely to cover the full range of the experimenter-defined stimulus distribution (Petzschner & Glasauer 2011). We classify these issues as methodological because if observers have not been able to learn a particular likelihood, prior, and cost function (LPC) component, then they cannot adopt the optimal decision rule.
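The flat-maxima hypothesis can be illustrated numerically: near the optimum, even large criterion shifts change the expected reward very little, leaving the observer with a weak learning signal. A minimal sketch, assuming equal-variance Gaussian SDT and one point per correct response (the specific numbers are illustrative):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_reward(criterion, d_prime=1.0, p_s1=0.75):
    """Expected points per trial for an observer who responds "s1"
    whenever the measurement x falls below the criterion; s1 and s2
    have means -d'/2 and +d'/2 and unit variance."""
    p_correct_s1 = phi(criterion + d_prime / 2)        # P(x < c | s1)
    p_correct_s2 = 1.0 - phi(criterion - d_prime / 2)  # P(x > c | s2)
    return p_s1 * p_correct_s1 + (1 - p_s1) * p_correct_s2

optimal = math.log(0.75 / 0.25)  # ln(3)/d' with d' = 1
conservative = optimal / 2       # a criterion shifted only halfway
loss = expected_reward(optimal) - expected_reward(conservative)
print(round(loss, 3))  # well under 0.02 points per trial: a nearly flat maximum
```

Shifting the criterion only halfway to the optimum costs under two hundredths of a point per trial here, so the reward surface itself offers the observer little guidance.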

Finally, another possibility is that observers also place a premium on being correct rather than just maximizing reward [cost function]. Maddox and Bohil (1998a) posited the competition between reward and accuracy maximization (COBRA) hypothesis, according to which observers attempt to maximize reward but also place a premium on accuracy (Maddox & Bohil 2004; 2005). This consideration applies to manipulations of payoffs but not of prior probabilities and may explain why payoff manipulations typically lead to larger deviations from optimality than priors.

3.4. Tradeoff between speed and accuracy

In the previous examples, the only variable of interest was observers’ choice, irrespective of their reaction times (RTs). However, if instructed, observers can provide faster responses at lower accuracy, a phenomenon known as the speed-accuracy tradeoff (SAT; Fitts 1966; Heitz 2014). An important question here is whether observers can adjust their RTs optimally to achieve maximum reward in a given amount of time (Fig. 5). A practical difficulty for studies attempting to address this question is that the accuracy/RT curve is not generally known and is likely to differ substantially between tasks (Heitz 2014). Therefore, the only standard assumption here is that accuracy increases monotonically as a function of RT. Precise accuracy/RT curves can be constructed by assuming one of the many models from the sequential sampling modeling framework (Forstmann et al. 2016), and there is a vibrant discussion about the optimal stopping rule depending on whether signal reliability is known or unknown (Bogacz 2007; Bogacz et al. 2006; Drugowitsch et al. 2012; 2015; Hanks et al. 2011; Hawkins et al. 2015; Thura et al. 2012). However, because different models predict different accuracy/RT curves, in what follows we only assume a monotonic relationship between accuracy and RT.

Figure 5. (A) Depiction of one possible accuracy/reaction time (RT) curve. Percent correct responses increases monotonically as a function of RT and asymptotes at 90%. (B) The total reward/RT curve for the accuracy/RT curve from panel A with the following additional assumptions: (1) observers complete as many trials as possible within a 30-minute window, (2) completing a trial takes 1.5 seconds on top of the RT (because of stimulus presentation and between-trial breaks), and (3) each correct answer results in 1 point, whereas incorrect answers result in 0 points. The optimal RT – the one that maximizes the total reward – is depicted with dashed lines.
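The reward-rate computation in Figure 5B can be sketched in a few lines. The accuracy/RT curve below is hypothetical (a saturating exponential asymptoting at 90% correct, as in panel A); the session length, trial overhead, and payoff follow the caption’s assumptions:

```python
import math

def accuracy(rt):
    """Hypothetical accuracy/RT curve: monotonically increasing,
    asymptoting at 90% correct."""
    return 0.9 - 0.4 * math.exp(-rt / 0.5)

def total_reward(rt, session_s=1800.0, overhead_s=1.5):
    """Total points in a 30-minute session: the number of completed
    trials times the probability of a correct (1-point) response."""
    n_trials = session_s / (rt + overhead_s)
    return n_trials * accuracy(rt)

# Grid search for the reward-maximizing RT (the dashed lines in Fig. 5B).
best_rt = max((i / 100 for i in range(1, 500)), key=total_reward)
print(best_rt)
```

Under these assumptions the optimal RT is well under half a second; the typical empirical pattern reviewed below is that observers choose longer RTs, gaining a little accuracy but completing far fewer trials.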

3.4.1. Trading off speed and accuracy

Although observers are able to adjust their behavior to account for both accuracy and RT, they cannot do so optimally (Balcı et al. 2011b; Bogacz et al. 2010; Simen et al. 2009; Starns & Ratcliff 2010; 2012; Tsetsos et al. 2015). In most cases, observers take too long to decide, leading to slightly higher accuracy but substantially longer RTs than optimal (Bogacz et al. 2010; Simen et al. 2009; Starns & Ratcliff 2010; 2012). This effect occurs both when observers have a fixed period of time to complete as many trials as possible (Bogacz et al. 2010; Simen et al. 2009; Starns & Ratcliff 2010; 2012) and in the more familiar design with a fixed number of trials per block (Starns & Ratcliff 2010; 2012). Further, observers take longer to decide in more difficult than in easier conditions, even though optimizing the total reward demands that they do the opposite (Oud et al. 2016; Starns & Ratcliff 2012). Older adults are even more suboptimal than college-age participants by this measure (Starns & Ratcliff 2010; 2012).

3.4.2. Keeping a low error rate under implicit time pressure

Even though observers tend to overemphasize accuracy, they are also suboptimal in tasks that require an extreme emphasis on accuracy. This conclusion comes from a line of research on visual search in which observers are typically given an unlimited amount of time to decide whether a target is present or not (Eckstein 2011). In certain situations, such as airport checkpoints or detecting tumors in mammograms, the goal is to keep a very low miss rate irrespective of RT, because misses can have dire consequences (Evans et al. 2013; Wolfe et al. 2013). The optimal RT can be derived from Figure 5A as the minimal RT that results in the desired accuracy rate. A series of studies by Wolfe and colleagues found that observers, even trained doctors and airport checkpoint screeners, are suboptimal in such tasks in that they allow overly high miss rates (Evans et al. 2011; 2013; Wolfe & Van Wert 2010; Wolfe et al. 2005; 2013). Further, this effect was robust and resistant to a variety of methods designed to help observers take longer in order to achieve higher accuracy (Wolfe et al. 2007) or to reduce motor errors (Van Wert et al. 2009). An explanation of this suboptimality based on capacity limits is rejected by two studies which found that observers can be induced to take longer, and thus achieve higher accuracy, by first providing them with a block of high-prevalence targets accompanied by feedback (Wolfe et al. 2007; 2013).

3.4.3. Explaining suboptimality in the speed-accuracy tradeoff

Why do people appear to be unable to trade off speed and accuracy optimally? Similar to the explanations from the previous sections, it is possible to account for overly long RTs by postulating that, in addition to maximizing their total reward, observers place a premium on being accurate [cost function] (Balcı et al. 2011b; Bogacz et al. 2010; Holmes & Cohen 2014). Another possibility is that observers’ judgments of elapsed time are noisy [general], and longer-than-optimal RTs lead to a higher reward rate than RTs that are shorter than optimal by the same amount (Simen et al. 2009; Zacksenhouse et al. 2010). Finally, in some situations, observers may also place a premium on speed [cost function], preventing a very low error rate (Wolfe et al. 2013).

3.5. Confidence in one's decision

The Bayesian approach prescribes how the posterior probability should be computed. Although researchers typically examine whether the stimulus with the highest posterior probability is selected, it is also possible to examine whether observers can report the actual value of the posterior distribution or perform simple computations with it (Fig. 6). In such cases, observers are asked to provide “metacognitive” confidence ratings about the accuracy of their decisions (Metcalfe & Shimamura 1994; Yeung & Summerfield 2012). Such studies rarely provide subjects with an explicit cost function (but see Kiani & Shadlen 2009; Rahnev et al. 2013), but, in many cases, reasonable assumptions can be made in order to derive optimal performance (see sects. 3.5.1–3.5.4).

Figure 6. Depiction of how an observer should give confidence ratings. As in Figure 4, both the measurement distributions and the posterior probabilities as a function of x assuming a uniform prior are depicted. The confidence thresholds (depicted as yellow lines) correspond to criteria defined on x (depicted as gray lines). The horizontal thresholds and vertical criteria intersect on the posterior probability functions. The y-axis is probability density for the measurement distributions and probability for the posterior probability functions; the y-axis ticks refer to the posterior probability.

3.5.1. Overconfidence and underconfidence (confidence calibration)

It is straightforward to construct a payoff structure for confidence ratings such that observers gain the most reward when their confidence reflects the posterior probability of being correct (e.g., Fleming et al. 2016; Massoni et al. 2014). Most studies, however, do not provide observers with such a payoff structure, so assessing the optimality of confidence ratings necessitates the further assumption that observers create a similar function internally. To test for optimality, we can then consider, for example, all trials in which an observer has 70% confidence of being correct and test whether the average accuracy on those trials is indeed 70%. This type of relationship between confidence and accuracy is often referred to as confidence calibration (Baranski & Petrusic 1994). Studies of confidence have found that for certain tasks observers are overconfident (i.e., they overestimate their accuracy) (Adams 1957; Baranski & Petrusic 1994; Dawes 1980; Harvey 1997; Keren 1988; Koriat 2011), whereas for other tasks observers are underconfident (i.e., they underestimate their accuracy) (Baranski & Petrusic 1994; Björkman et al. 1993; Dawes 1980; Harvey 1997; Winman & Juslin 1993). One pattern that emerges consistently is that overconfidence occurs in difficult tasks, whereas underconfidence occurs in easy tasks (Baranski & Petrusic 1994; 1995; 1999), a phenomenon known as the hard-easy effect (Gigerenzer et al. 1991). Similar results are seen for tasks outside the perceptual domain, such as answering general knowledge questions (Griffin & Tversky 1992). Overconfidence and underconfidence are stable over different tasks (Ais et al. 2015; Song et al. 2011) and depend on non-perceptual factors such as one’s optimism bias (Ais et al. 2015).
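The calibration test described above is easy to express in code: group trials by stated confidence and compare each group’s confidence with its observed accuracy. This is a toy sketch with made-up data, not from any cited study:

```python
from collections import defaultdict

def calibration(confidences, correct):
    """Mean accuracy at each stated confidence level. An observer is
    well calibrated when the two match (e.g., 70% correct on trials
    rated 0.7)."""
    groups = defaultdict(list)
    for conf, acc in zip(confidences, correct):
        groups[conf].append(acc)
    return {c: sum(a) / len(a) for c, a in sorted(groups.items())}

# Toy data: 10 trials rated 0.7, of which only 6 were correct
# (overconfidence), and 10 trials rated 0.9, all correct (underconfidence).
confs = [0.7] * 10 + [0.9] * 10
accs = [1] * 6 + [0] * 4 + [1] * 10
print(calibration(confs, accs))  # {0.7: 0.6, 0.9: 1.0}
```

In practice, real studies bin confidence ratings and use far more trials per bin, but the logic is the same.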

3.5.2. Dissociations of confidence and accuracy across different experimental conditions

Although precise confidence calibration is computationally difficult, a weaker test of optimality examines whether experimental conditions that lead to the same performance are judged with the same level of confidence (even if this level is too high or too low). This test only requires that observers’ confidence ratings follow a consistent internal cost function across the two tasks. Many studies demonstrate dissociations between confidence and accuracy across tasks, thus showing that observers fail this weaker optimality test. For example, speeded responses can decrease accuracy but leave confidence unchanged (Baranski & Petrusic 1994; Vickers & Packer 1982), whereas slowed responses can lead to the same accuracy but lower confidence (Kiani et al. 2014). Dissociations between confidence and accuracy have also been found in conditions that differ in attention (Rahnev et al. 2011b; 2012a; Wilimzig et al. 2008), the variability of the perceptual signal (de Gardelle & Mamassian 2015; Koizumi et al. 2015; Samaha et al. 2016; Song et al. 2015; Spence et al. 2016; Zylberberg et al. 2014), the stimulus-onset asynchrony in metacontrast masking (Lau & Passingham 2006), the presence of unconscious information (Vlassova et al. 2014), and the relative timing of a concurrent saccade (Navajas et al. 2014). Further, some of these biases seem to arise from individual differences that are stable across multiple sessions (de Gardelle & Mamassian 2015). Finally, dissociations between confidence and accuracy have been found in studies that applied transcranial magnetic stimulation (TMS) to the visual (Rahnev et al. 2012b), premotor (Fleming et al. 2015), or frontal cortex (Chiang et al. 2014).

3.5.3. Metacognitive sensitivity (confidence resolution)

The previous sections were concerned with the average magnitude of confidence ratings over many trials. Another measure of interest is the degree of correspondence between confidence and accuracy on individual trials (Metcalfe & Shimamura 1994), called metacognitive sensitivity (Fleming & Lau 2014) or confidence resolution (Baranski & Petrusic 1994). Recently, Maniscalco and Lau (2012) developed a method to quantify how optimal an observer’s metacognitive sensitivity is. Their method computes meta-d′, a measure of how much information is available for metacognition, which can then be compared with the actual d′ value. An optimal observer would have a meta-d′/d′ ratio of 1. Maniscalco and Lau (2012) obtained a ratio of 0.77, suggesting a 23% loss of information for confidence judgments. Even though some studies that used the same measure but different perceptual paradigms found values close to 1 (Fleming et al. 2014), many others arrived at values substantially lower than 1 (Bang et al. in press; Maniscalco & Lau 2015; Maniscalco et al. 2016; Massoni 2014; McCurdy et al. 2013; Schurger et al. 2015; Sherman et al. 2015; Vlassova et al. 2014). Interestingly, at least one study has reported values significantly greater than 1, suggesting that in certain cases the metacognitive system has more information than was used for the primary decision (Charles et al. 2013), thus implying the presence of suboptimality in the perceptual decision.
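The meta-d′/d′ ratio compares type-1 sensitivity, computed from hit and false-alarm rates, with the sensitivity implied by the confidence ratings. Estimating meta-d′ itself requires fitting the type-2 (confidence) data with an SDT model (Maniscalco & Lau 2012); the sketch below shows only the standard d′ computation and the interpretation of the ratio, with a made-up meta-d′ value:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Type-1 sensitivity from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

d1 = d_prime(0.84, 0.16)  # roughly 2.0
meta_d = 1.53             # hypothetical fitted meta-d', not real data
ratio = meta_d / d1       # ~0.77: about 23% of the type-1 information
                          # is lost at the confidence stage
print(round(d1, 2), round(ratio, 2))
```

A ratio of 1 would mean confidence ratings use all the information available to the primary decision; a ratio above 1 would imply extra information at the metacognitive stage.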

3.5.4. Confidence does not simply reflect the posterior probability of being correct

Another way of assessing the optimality of confidence ratings is to determine whether observers compute confidence in a manner consistent with the posterior probability of being correct. This is also a weaker condition than reporting the actual posterior probability of being correct, because it does not specify how observers should place decision boundaries between different confidence ratings, only that these boundaries should depend on the posterior probability of being correct. Although one study found that confidence ratings are consistent with computations based on the posterior probability (Sanders et al. 2016; but see Adler & Ma 2018b), others showed that either some (Aitchison et al. 2015; Navajas et al. 2017) or most (Adler & Ma 2018a; Denison et al. 2018) observers are described better by heuristic models in which confidence depends on uncertainty but not on the actual posterior probability of being correct.

Further, confidence judgments are influenced by a host of factors unrelated to the perceptual signal at hand, in violation of the principle that they should reflect the posterior probability of being correct. For example, emotional states, such as worry (Massoni 2014) and arousal (Allen et al. 2016), affect how sensory information relates to confidence ratings. Other factors, such as eye gaze stability (Schurger et al. 2015), working memory load (Maniscalco & Lau 2015), and age (Weil et al. 2013), affect the relationship between confidence and accuracy. Sequential effects have also been reported for confidence judgments such that a high confidence rating is more likely to follow a high, rather than low, confidence rating (Mueller & Weidemann 2008). Confidence dependencies exist even between different tasks, such as letter and color discrimination, that depend on different neural populations in the visual cortex (Rahnev et al. 2015). Such inter-task confidence influences have been dubbed “confidence leak” and have been shown to be negatively correlated with observers’ metacognitive sensitivity (Rahnev et al. 2015).

Confidence has also been shown to exhibit a “positive evidence” bias (Maniscalco et al. 2016; Zylberberg et al. 2012). In two-choice tasks, one can distinguish between sensory evidence in a trial that is congruent with the observer’s response on that trial (positive evidence) and sensory evidence that is incongruent with the response (negative evidence). Even though perceptual decisions usually follow the optimal strategy of weighting these two sources of evidence equally, confidence ratings are suboptimal in depending more heavily on the positive evidence (Koizumi et al. 2015; Maniscalco et al. 2016; Samaha et al. 2016; Song et al. 2015; Zylberberg et al. 2012).

3.5.5. Explaining suboptimality in confidence ratings

Why do people appear to give inappropriate confidence ratings? Some components of overconfidence and underconfidence can be explained by an inappropriate transformation of internal evidence into probabilities [general] (Zhang & Maloney 2012), by methodological considerations such as interleaving conditions with different difficulty levels, which can have inadvertent effects on the prior [methodological] (Drugowitsch et al. 2014b), or even by individual differences such as shyness about giving high confidence, which can be conceptualized as an extra cost for high-confidence responses [cost function]. Confidence-accuracy dissociations are often attributed to observers’ inability to maintain different criteria for different conditions, even if they are clearly distinguishable [decision rule] (Koizumi et al. 2015; Rahnev et al. 2011b). The “positive evidence” bias [decision rule] introduced at the end of section 3.5.4 can also account for certain suboptimalities in confidence ratings.

More generally, it is possible that confidence ratings are not based solely on the available perceptual evidence, as assumed by most modeling approaches (Drugowitsch & Pouget 2012; Green & Swets 1966; Macmillan & Creelman 2005; Ratcliff & Starns 2009; Vickers 1979). Other theories postulate the existence of either different processing streams that contribute differentially to the perceptual decision and the subjective confidence judgment (Del Cul et al. 2009; Jolij & Lamme 2005; Weiskrantz 1996) or a second processing stage that determines the confidence judgment and builds on the information in an earlier processing stage responsible for the perceptual decision (Bang et al. in press; Fleming & Daw 2017; Lau & Rosenthal 2011; Maniscalco & Lau 2010; 2016; Pleskac & Busemeyer 2010; van den Berg et al. 2017). Both types of models could be used to explain the various findings of suboptimal behavior and imply the existence of different measurement distributions for decision and confidence [likelihood function].

3.6. Comparing sensitivity in different tasks

The previous sections discussed observers’ performance on a single task. Another way of examining optimality is to compare the performance on two related tasks. If the two tasks have a formal relationship, then an optimal observer's sensitivity on the two tasks should follow that relationship.

3.6.1. Comparing performance in one-stimulus and two-stimulus tasks

Visual sensitivity has traditionally been measured by employing either (1) a one-stimulus (detection or discrimination) task, in which a single stimulus from one of two stimulus classes is presented on each trial, or (2) a two-stimulus task, in which both stimulus classes are presented on each trial (see sect. 3.1.3). Intuitively, two-stimulus tasks are easier because the final decision is based on more perceptual information. Assuming independent processing of each stimulus, the relationship between the sensitivities on these two types of tasks can be defined mathematically: the sensitivity on the two-stimulus task should be $\sqrt{2}$ times higher than on the one-stimulus task (Macmillan & Creelman 2005; Fig. 7). Nevertheless, empirical studies have often contradicted this predicted relationship: many studies have found sensitivity ratios smaller than $\sqrt{2}$ (Creelman & Macmillan 1979; Jesteadt 1974; Leshowitz 1969; Markowitz & Swets 1967; Pynn 1972; Schulman & Mitchell 1966; Swets & Green 1961; Viemeister 1970; Watson et al. 1973; Yeshurun et al. 2008), though a few have found ratios larger than $\sqrt{2}$ (Leshowitz 1969; Markowitz & Swets 1967; Swets & Green 1961).

Figure 7. Depiction of the relationship between one-stimulus and two-stimulus tasks. Each axis corresponds to a one-stimulus task (e.g., Fig. 2). The three sets of concentric circles represent two-dimensional circular Gaussian distributions corresponding to presenting two stimuli in a row (e.g., (s2, s1) means that s2 was the first stimulus and s1 was the second stimulus). If the discriminability between s1 and s2 is d′ (one-stimulus task; gray lines in the triangle), then the Pythagorean theorem gives us the expected discriminability between (s1, s2) and (s2, s1) (two-stimulus task; blue line in the triangle).
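The $\sqrt{2}$ prediction can be verified with a short Monte Carlo simulation of an ideal observer who processes the two stimuli independently. This is an illustrative sketch with arbitrary parameter values, not a reanalysis of any cited data:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
z = NormalDist().inv_cdf
d, n = 1.0, 200_000  # true one-stimulus sensitivity and trials per condition

# One-stimulus task: a single measurement, unbiased criterion at 0.
hits = sum(random.gauss(d / 2, 1) > 0 for _ in range(n)) / n
fas = sum(random.gauss(-d / 2, 1) > 0 for _ in range(n)) / n
d_one = z(hits) - z(fas)

# Two-stimulus task: independent measurements of both stimuli; respond
# according to which measurement is larger.
pc = sum(random.gauss(d / 2, 1) > random.gauss(-d / 2, 1) for _ in range(n)) / n
d_two = 2 * z(pc)  # sensitivity implied by percent correct, unbiased observer

print(round(d_two / d_one, 2))  # close to sqrt(2), about 1.41
```

Empirical ratios below this value therefore indicate a failure of independent, noise-free combination of the two stimuli.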

3.6.2. Comparing performance in other tasks

Many other comparisons between tasks have been performed. In temporal 2IFC tasks, observers often have different sensitivities to the two stimulus intervals (García-Pérez & Alcalá-Quintana 2010; 2011; Yeshurun et al. 2008), suggesting an inability to distribute resources equally. Other studies find that longer inter-stimulus intervals in 2IFC tasks lead to decreases in sensitivity (Berliner & Durlach 1973; Kinchla & Smyzer 1967; Tanner 1961), presumably because of memory limitations. Further, choice variability on three-choice tasks is greater than what would be predicted from a related two-choice task (Drugowitsch et al. 2016). Creelman and Macmillan (1979) compared sensitivity on nine different psychophysical tasks and found a complex pattern of dependencies, many of which were at odds with optimal performance. Finally, Olzak (1985) demonstrated deviations from the expected relationship between detection and discrimination tasks.

An alternative approach to comparing an observer's performance on different tasks is to allow observers to choose which tasks they prefer to complete and to analyze the optimality of these decisions. In particular, one can test for transitivity: If an observer prefers task A to task B and task B to task C, then the observer should prefer task A to task C. Several studies suggest that human observers violate the transitivity principle both in choosing tasks (Zhang et al. 2010) and in choosing stimuli (Tsetsos et al. 2016a), though there is considerable controversy surrounding such findings (Davis-Stober et al. 2016; Kalenscher et al. 2010; Regenwetter et al. 2010; 2011; 2017).

3.6.3. Explaining suboptimality in between-task comparisons

Why does human performance on different tasks violate the expected relationship between those tasks? One possibility is that observers face capacity limits in one task but not the other, which alter how the stimuli are encoded [likelihood function]. For example, compared to a one-stimulus task, the more complex two-stimulus task requires the simultaneous processing of two stimuli. If limited resources hamper the processing of the second stimulus, then sensitivity on that task will fall short of what is predicted from the one-stimulus task.

In some experiments, observers performed worse than expected on the one-stimulus task rather than on the two-stimulus task. A possible explanation of this effect is the presence of a larger “criterion jitter” in the one-stimulus task (i.e., greater trial-to-trial variability in the decision criterion). Because two-stimulus tasks involve the comparison of two stimuli on each trial, they are less susceptible to criterion jitter. Such criterion variability, which could stem from sequential dependencies or even random criterion fluctuations (see sect. 3.2), decreases the estimated stimulus sensitivity (Mueller & Weidemann 2008). The criterion jitter could also be the result of computational imprecision [general] (Bays & Dowding 2017; Beck et al. 2012; Dayan 2014; Drugowitsch et al. 2016; Renart & Machens 2014; Whiteley & Sahani 2012; Wyart & Koechlin 2016). Such imprecision could arise from constraints at the neural level and may account for a large amount of choice suboptimality (Drugowitsch et al. 2016).
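The effect of criterion jitter on measured sensitivity is easy to demonstrate. In the sketch below (a hypothetical simulation, not a model from the cited papers), a one-stimulus observer with true d′ = 1 is measured once with a fixed criterion and once with a criterion redrawn on every trial. The jitter effectively adds variance to the decision variable, so the estimated d′ falls to roughly delta / sqrt(sigma² + jitter²):

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(1)
z = NormalDist().inv_cdf
delta, sigma, n = 1.0, 1.0, 400_000  # true d' = delta / sigma = 1

def estimated_dprime(criterion_sd):
    """Estimated d' when the criterion jitters from trial to trial."""
    s1 = rng.normal(0.0, sigma, n)              # noise trials
    s2 = rng.normal(delta, sigma, n)            # signal trials
    c = rng.normal(delta / 2, criterion_sd, n)  # per-trial criterion
    return z(np.mean(s2 > c)) - z(np.mean(s1 > c))

d_stable = estimated_dprime(0.0)  # fixed criterion: recovers the true d'
d_jitter = estimated_dprime(1.0)  # jittering criterion: estimated d' drops
print(d_stable, d_jitter)
```

With jitter SD equal to the sensory noise SD, the measured d′ is about 1/√2 of the true value, even though the underlying stimulus sensitivity is unchanged.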

3.7. Cue combination

Studies of cue combination have been fundamental to the view that sensory perception is optimal (Trommershäuser et al. 2011). Cue combination (also called “cue integration”) is needed whenever different sensory features provide separate pieces of information about a single physical quantity. For example, auditory and visual signals can separately inform about the location of an object. Each cue provides imperfect information about the physical world, but different cues have different sources of variability. As a result, integrating the different cues can provide a more accurate and reliable estimate of the physical quantity of interest.

One can test for optimality in cue combination by comparing the perceptual estimate formed from two cues with the estimates formed from each cue individually. The optimal estimate is typically taken to be the one that maximizes precision (minimizes variability) across trials (Fig. 8). When the variability for each cue is Gaussian and independent of the other cues, the maximum likelihood estimate (MLE) is a linear combination of the estimates from each cue, weighted by their individual reliabilities (Landy et al. 2011). Whether observers conform to this weighted-sum formula can be readily tested psychophysically, and a large number of studies have done exactly this for different types of cues and tasks (for reviews, see Ma 2010; Trommershäuser et al. 2011).

Figure 8. Optimal cue combination. Two cues that give independent information about the value of a sensory feature (red and blue curves) are combined to form a single estimate of the feature value (yellow curve). For Gaussian cue distributions, the combined cue distribution is narrower than both individual cue distributions, and its mean is closer to the mean of the distribution of the more reliable cue.

In particular, the optimal mean perceptual estimate ($x$) after observing cue 1 (with feature estimate $x_1$ and variance $\sigma_1^2$) and cue 2 (with feature estimate $x_2$ and variance $\sigma_2^2$) is

$$x = \frac{\dfrac{x_1}{\sigma_1^2} + \dfrac{x_2}{\sigma_2^2}}{\dfrac{1}{\sigma_1^2} + \dfrac{1}{\sigma_2^2}},$$

such that each feature estimate $x_i$ is weighted by its reliability $1/\sigma_i^2$ and the whole expression is normalized by the sum of the reliabilities. The optimal variance of the perceptual estimate ($\sigma^2$) is

$$\sigma^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}.$$
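These two equations translate directly into code. The sketch below uses illustrative values (not data from any particular study) to combine a reliable and an unreliable cue; the combined estimate lands closer to the reliable cue, and the combined variance is smaller than either single-cue variance:

```python
def combine_cues(x1, var1, x2, var2):
    """Maximum-likelihood combination of two independent Gaussian cues:
    each estimate is weighted by its reliability (inverse variance)."""
    r1, r2 = 1.0 / var1, 1.0 / var2
    x = (r1 * x1 + r2 * x2) / (r1 + r2)  # reliability-weighted mean
    var = 1.0 / (r1 + r2)                # = var1 * var2 / (var1 + var2)
    return x, var

# Hypothetical example: a precise cue (variance 1) and a noisy cue (variance 4)
# giving different estimates of an object's height in mm.
x, var = combine_cues(55.0, 1.0, 60.0, 4.0)
print(x, var)  # 56.0 0.8 -- pulled toward the reliable cue, variance reduced
```

Note that the combined variance (0.8) is below both single-cue variances, which is the precision benefit that psychophysical tests of the MLE model look for.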

3.7.1. Examples of optimality in cue combination

A classic example of cue combination is a study of visual-haptic cue combination by Ernst and Banks (2002). In this study, observers estimated the height of a rectangle using (1) only sight, (2) only touch, or (3) both sight and touch. Performance in the visual-haptic condition was well described by the MLE formula: The single-cue measurements predicted both the reliability of the combined estimates and the weights given to each cue. Many studies have observed similar optimal cue combination behavior in a range of tasks estimating different physical quantities (Trommershäuser et al. 2011). These studies have investigated integration across two modalities (including vision, touch, audition, the vestibular sense, and proprioception; e.g., Alais & Burr 2004; Ernst & Banks 2002; Gu et al. 2008; van Beers et al. 1996) and across two features in the same modality, such as various visual cues to depth (e.g., Jacobs 1999; Landy et al. 1995). Common among these experiments is that trained observers complete many trials of a psychophysical task, and the two cues provide similar estimates of the quantity of interest. Optimal cue combination has also been observed during sensory-motor integration (Maloney & Zhang 2010; Trommershäuser 2009; Wei & Körding 2011; Yeshurun et al. 2008).

3.7.2. Examples of suboptimality in cue combination

Because optimality is often the hypothesized outcome in cue combination studies, findings of suboptimality may be underreported or underemphasized in the literature (Rosas & Wichmann 2011). Still, a number of studies have demonstrated suboptimal cue combination that violates some part of the MLE formula. These violations fall into two categories: (1) those in which the cues are integrated but are not weighted according to their independently measured reliabilities, and (2) those in which estimates from two cues are no better than estimates from a single cue.

In the first category are findings from a wide range of combined modalities: visual-auditory (Battaglia et al. 2003; Burr et al. 2009; Maiworm & Röder 2011), visual-vestibular (Fetsch et al. 2012; Prsa et al. 2012), visual-haptic (Battaglia et al. 2011; Rosas et al. 2005), and visual-visual (Knill & Saunders 2003; Rosas et al. 2007). For example, auditory and visual cues were not integrated according to the MLE rule in a localization task; instead, observers treated the visual cue as though it were more reliable than it really was (Battaglia et al. 2003). Similarly, visual and haptic texture cues were integrated according to their reliabilities, but observers underweighted the visual cue (Rosas et al. 2005). Suboptimal integration of visual and auditory cues was also found for patients with central vision loss, but not for patients with peripheral vision loss (Garcia et al. 2017).

In some of these studies, cue misweighting was restricted to low-reliability cues: In a visual-vestibular heading task, observers overweighted vestibular cues when visual reliability was low (Fetsch et al. 2012), and in a visual-auditory temporal order judgment task, observers overweighted auditory cues when auditory reliability was low (Maiworm & Röder 2011). However, overweighting does not occur only within a limited range of reliabilities (e.g., Battaglia et al. 2003; Prsa et al. 2012).

Several studies have failed to find optimal cue combination in the temporal domain. In an audiovisual rate combination task, observers only partially integrated the auditory and visual cues, and they did not integrate them at all when the rates were very different (Roach et al. 2006). Observers also overweighted auditory cues in temporal order judgment tasks (Maiworm & Röder 2011) and temporal bisection tasks (Burr et al. 2009). It is well established that when two cues give very different estimates, observers tend to discount one of them (Gepshtein et al. 2005; Jack & Thurlow 1973; Körding et al. 2007; Roach et al. 2006; Warren & Cleaves 1971), an effect called “robust fusion” (Maloney & Landy 1989), which may arise from inferring that the two cues come from separate sources (Körding et al. 2007). However, in most of the studies just described, suboptimal cue combination was observed even when the cues gave similar estimates.

In the second category of suboptimal cue combination findings, two cues are no better than one (Chen & Tyler 2015; Drugowitsch et al. 2014a; Landy & Kojima 2001; Oruç et al. 2003; Rosas et al. 2005; 2007). (Note that some of these studies found a mix of optimal and suboptimal observers.) Picking the best cue is known as a “veto” type of cue combination (Bülthoff & Mallot 1988) and is considered a case of “strong fusion” (Clark & Yuille 1990; Landy et al. 1995). This is an even more serious violation of optimal cue combination, because it is as though no integration has taken place at all: The system either picks the best cue or, in some cases, does worse with two cues than with one.

Cues may also be mandatorily combined even when doing so is not suitable for the observer's task. For example, texture and disparity information about slant was subsumed in a combined estimate, rendering the single-cue estimates unrecoverable (Hillis et al. 2002). Interestingly, the single-cue estimates were not lost for children, allowing them to outperform adults when the cues disagreed (Nardini et al. 2010). In a related finding, observers used multiple visual features to identify a letter even when the optimal strategy was to use only a single, relevant feature (Saarela & Landy 2015).

3.7.3. Combining stimuli of the same type

So far, we have considered only cue combination studies in which the two cues come from different sensory modalities or dimensions. Suboptimal behavior has also been observed when combining cues from the same dimension. For example, Summerfield and colleagues have shown that observers do not weight every sample stimulus equally in a decision (Summerfield & Tsetsos 2015). For simultaneous samples, observers underweighted “outlier” stimuli lying far from the mean of the sample (de Gardelle & Summerfield 2011; Michael et al. 2014; 2015; Vandormael et al. 2017). For sequential samples, observers overweighted stimuli toward the end of the sequence (a recency effect) as well as stimuli similar to recently presented items (Bang & Rahnev 2017; Cheadle et al. 2014; Wyart et al. 2015). Observers also used only a subset of a sample of orientations to estimate the mean orientation of the sample (Dakin 2001). More generally, accuracy on tasks with sequential samples is substantially lower than would be predicted from sensory noise alone (Drugowitsch et al. 2016).

3.7.4. Combining sensory and motor cues

Suboptimal cue integration has also been found in sensory-motor tasks. For example, when integrating the path of a pointing movement with online visual feedback, observers underestimated the uncertainty indicated by the feedback (Körding & Wolpert 2004). In a pointing task in which observers were rewarded for physically touching the correct visual target, observers underweighted the difficulty of the motor task by aiming for a small target, even though the perceptual information indicating the target was also uncertain (Fleming et al. 2013). Similar biases were reported in a related task (Landy et al. 2007). Within the action domain (and so beyond our focus on perception), Maloney and Zhang (2010) have reviewed studies showing both optimal and suboptimal behavior.

3.7.5. Cue combination in children

Optimal cue integration takes time to develop. Children are suboptimal until around 10 years of age when combining multisensory (Gori et al. 2008; Nardini et al. 2008; Petrini et al. 2014) or visual (Dekker et al. 2015; Nardini et al. 2010) cues.

3.7.6. Explaining suboptimal cue combination

Why do people sometimes appear to combine cues suboptimally? One possible explanation is that observers do not have accurate representations of the reliability of the cues (Knill & Saunders 2003; Rosas et al. 2005) because learning the reliability is difficult [methodological]. This methodological issue is particularly acute when the cues are new to the observer. For example, in one task for which cue combination was suboptimal, observers haptically explored a surface with a single finger to estimate its slant. However, observers may have little experience with single-finger slant estimation, because multiple fingers or the whole hand would ordinarily be used for such a task (Rosas et al. 2005). Alternatively, cue combination may be suboptimal when one cue provides all of its information in parallel but the other provides information serially (Plaisier et al. 2014). Reliability estimation might also be difficult when the reliability is very low. This possibility may apply to studies in which observers were optimal within a range of sensory reliabilities, but not outside it (Fetsch et al. 2012; Maiworm & Röder 2011).

Some authors suggest that another reason for overweighting or underweighting a certain cue could be prior knowledge about how cues ought to be combined [prior]. This could include a prior assumption about how likely a cue is to be related to the desired physical property (Battaglia et al. 2011; Ganmor et al. 2015), how likely two cue types are to correspond to one another (and thus be beneficial to integrate) (Roach et al. 2006), or a general preference to rely on a particular modality, such as audition in a timing task (Maiworm & Röder 2011).

For certain tasks, some researchers question the assumptions of the MLE model, such as Gaussian noise [likelihood function] (Burr et al. 2009) or the independence of the neural representations of the two cues [likelihood function] (Rosas et al. 2007). In other cases, it appears that observers use alternative cost functions by, for example, taking reaction time into account [cost function] (Drugowitsch et al. 2014a).

“Robust averaging,” or down-weighting of outliers, has been observed when observers must combine multiple pieces of information that give very different perceptual estimates. Such down-weighting can stem from adaptive gain changes [likelihood function] that result in the highest sensitivity to stimuli close to the mean of the sample (or, in the sequential case, the subset of the sample presented so far; Summerfield & Tsetsos 2015). This adaptive gain mechanism is similar to models of sensory adaptation (Barlow 1990; Carandini & Heeger 2012; Wark et al. 2007). By following principles of efficient coding that place the largest dynamic range at the center of the sample (Barlow 1961; Brenner et al. 2000; Wainwright 1999), different stimuli receive unequal weightings. Psychophysical studies in which stimulus variability is low would not be expected to show this kind of suboptimality (Cheadle et al. 2014).
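As a toy illustration of robust averaging (the Gaussian gain profile and its width below are hypothetical choices, not a fitted model from the cited work), each sample can be weighted by its proximity to the sample mean before averaging, so that outliers contribute less than they would to a plain mean:

```python
import numpy as np

def robust_average(samples, width=3.0):
    """Down-weight samples far from the sample mean with a Gaussian
    gain profile, mimicking reduced sensitivity to outliers."""
    s = np.asarray(samples, dtype=float)
    w = np.exp(-0.5 * ((s - s.mean()) / width) ** 2)  # gain for each sample
    return float(np.sum(w * s) / np.sum(w))          # weighted average

samples = [10, 11, 9, 10, 30]     # one outlying sample at 30
plain = float(np.mean(samples))   # 14.0, pulled toward the outlier
robust = robust_average(samples)  # stays close to the bulk of the sample
print(plain, robust)
```

A plain mean is dragged toward the outlier, whereas the robust average stays near the bulk of the sample, mirroring the behavioral underweighting of outliers described above.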

It is debated whether suboptimal cue combination in children reflects a switching strategy (Adams 2016) or immature neural mechanisms for integrating cues, or whether the developing brain is optimized for a different task, such as multisensory calibration or conflict detection (Gori et al. 2008; Nardini et al. 2010).

3.8. Other examples of suboptimality

Thus far we have specifically focused on tasks where the optimal behavior can be specified mathematically in a relatively uncontroversial manner (though see sect. 4.2). However, the issue of optimality has been discussed in a variety of other contexts.

3.8.1. Perceptual biases, illusions, and improbabilities

A number of basic visual biases have been documented. Some examples include repulsion of orientation or motion-direction estimates away from cardinal directions (Fig. 9A; Jastrow 1892; Rauber & Treue 1998), a bias to perceive speeds as slower than they are when stimuli have low contrast (Stone & Thompson 1992; Thompson 1982; but see Thompson et al. 2006), a bias to perceive surfaces as convex (Langer & Bülthoff 2001; Sun & Perona 1997), and a bias to perceive visual stimuli as closer to fixation than they are (whereas the opposite is true for auditory stimuli; Odegaard et al. 2015).

Figure 9. Examples of illusions and biases. (A) Cardinal repulsion. A nearly vertical (or horizontal) line looks more tilted away from the cardinal axis than it is. (B) Adelson's checkerboard brightness illusion. Square B appears brighter than square A, even though the two squares have the same luminance. (Image ©1995, Edward H. Adelson) (C) Tilt aftereffect. After viewing a tilted adapting grating (left), observers perceive a vertical test grating (right) to be tilted away from the adaptor. (D) Effects of spatial attention on contrast appearance (Carrasco et al. 2004). An attended grating appears to have higher contrast than the same grating when it is unattended. (E) Effects of action affordances on perceptual judgments (Witt 2011). Observers judge an object to be closer (far white circle compared to near white circle) relative to the distance between two landmark objects (red circles) when they are holding a tool that allows them to reach the object than when they have no tool.

When biases, context, or other factors make something look dramatically different from its physical reality, we might call it a visual illusion. A classic example is the brightness illusion (Fig. 9B), in which two squares on a checkerboard appear to be different shades of gray even though they have the same luminance (Adelson 1993). Perceptual illusions persist even when the observer knows about the illusion and even after thousands of trials of exposure (Gold et al. 2000).

Some illusions are difficult to reconcile with existing theories of optimal perception. Anderson et al. (2011), for example, reported strong percepts of illusory surfaces that were improbable according to optimal frameworks for contour synthesis. In the size-weight illusion, smaller objects are perceived as heavier than larger objects of the same weight, even though the prior expectation is that smaller objects are lighter (Brayanov & Smith 2010; Peters et al. 2016).

3.8.2. Adaptation

Adaptation is a widespread phenomenon in sensory systems in which responsiveness to prolonged or repeated stimuli is reduced (Webster 2015). As some researchers have discussed (Wei & Stocker 2015), adaptation could be seen as suboptimal from a Bayesian perspective because subsequent perceptual estimates tend to diverge from, rather than conform to, the prior stimulus. For example, after prolonged viewing of a line tilted slightly away from vertical, a vertical line looks tilted in the opposite direction (the “tilt aftereffect,” Fig. 9C; Gibson & Radner 1937). Or, after viewing motion in a certain direction, a stationary stimulus appears to drift in the opposite direction (Wohlgemuth 1911). After adapting to a certain color, perception is biased toward the complementary color (Sabra 1989; Turnbull 1961), and after adapting to a specific face, another face appears more different from that face than it would have otherwise (Webster & MacLeod 2011; Webster et al. 2004). In all of these examples, perception is repelled away from the prior stimulus, which, at least on the surface, appears suboptimal (but see sect. 3.8.5).

3.8.3. Appearance changes with visual attention

The same physical stimulus can also be perceived in different ways depending on the state of visual attention. Directing spatial attention to a stimulus can make it appear larger (Anton-Erxleben et al. 2007), faster (Anton-Erxleben et al. 2013; Fuller et al. 2009; Turatto et al. 2007), and brighter (Tse 2005), and to have higher spatial frequency (Abrams et al. 2010; Gobell & Carrasco 2005) and higher contrast (Fig. 9D; Carrasco et al. 2004; Liu et al. 2009; Störmer et al. 2009) than it would otherwise. Attention often improves performance on a visual task, but it sometimes makes performance worse (Ling & Carrasco 2006; Yeshurun & Carrasco 1998), demonstrating inflexibility in the system.

3.8.4. Cognition-based biases

Other studies have documented visual biases associated with more cognitive factors, including action affordances (Witt 2011), motivation (Balcetis 2016), and language (Lupyan 2012). For example, when people reach for an object with a tool that allows them to reach farther, they report the object as looking closer than when they reach without the tool (Fig. 9E; Witt et al. 2005). In the linguistic domain, calling an object a “triangle” leads observers to report the object as having more equal sides than when the object is called “three sided” (Lupyan 2017). How much these cognitive factors affect perception per se, as opposed to post-perceptual judgments, and to what extent the observed visual biases are mediated by attention, remain controversial questions (Firestone & Scholl 2016).

3.8.5. Explaining these other examples of apparent suboptimality

Why are people prone to certain biases and illusions? Some biases and illusions have been explained as arising from priors in the visual system [prior]. Misperceptions of motion direction (Weiss et al. 2002) and biases in reporting the speed of low-contrast stimuli (Stocker & Simoncelli 2006a; Thompson 1982; Vintch & Gardner 2014) have been explained as optimal percepts for a visual system with a prior for slow motion (Stocker & Simoncelli 2006a; Weiss et al. 2002). Such a prior is motivated by the fact that natural objects tend to be still or move slowly, but it has been empirically challenged by subsequent research (Hammett et al. 2007; Hassan & Hammett 2015; Thompson et al. 2006; Vaziri-Pashkam & Cavanagh 2008). Priors have been invoked to explain many other biases and illusions (Brainard et al. 2006; Girshick et al. 2011; Glennerster et al. 2006; Raviv et al. 2012). The suggestion is that these priors have become stable over a lifetime and therefore influence perception even when they do not apply (e.g., in a laboratory task).

Optimal decoding of sensory representations in one task can be accompanied by suboptimal biases in another task using the same stimuli. For example, in a fine motion discrimination task, observers seem to weight more strongly the neurons tuned away from the discrimination boundary, because these neurons distinguish best between the two possible stimuli. This weighting could explain why motion direction judgments in an interleaved estimation task are biased away from the boundary (Jazayeri & Movshon 2007). Another interpretation of these results is in terms of an improper decision rule (Zamboni et al. 2016). Specifically, observers may discard sensory information related to the rejected decision outcome [decision rule] (Bronfman et al. 2015; Fleming et al. 2013; Luu & Stocker 2016), an effect known as self-consistency bias (Stocker & Simoncelli 2008).

Various efforts have been made to explain adaptation in the framework of Bayesian optimality (Grzywacz & Balboa 2002; Hohwy et al. 2008; Schwiedrzik et al. 2014; Snyder et al. 2015). One of the most well-developed lines of work explains the repulsive effects of adaptation as a consequence of efficient coding [likelihood function] (Stocker & Simoncelli 2006b). In this framework, a sensory system adapts to maximize its dynamic range around the value of previous input. This change in coding does not affect the prior (as might be expected in a Bayesian treatment of adaptation) but rather the likelihood function. Specifically, it skews new observations away from the adapted stimulus, giving rise to repulsive aftereffects. A similar principle has been suggested to explain why perceptual estimates are repelled from long-term priors, such as those determined by the statistics of natural images (Wei & Stocker 2013; 2015).

4. Assessing optimality: Not a useful goal in itself

The extensive review in the previous section demonstrates that general claims about the optimality of human perceptual decision making are empirically false. However, there are also theoretical reasons to turn away from assessing optimality as a primary research goal.

4.1. Challenges in defining optimality

Section 2 introduced a formal definition of optimality based on Bayesian decision theory. However, the question of what phenomena should be considered optimal versus suboptimal quickly becomes complicated in many actual applications. There are at least two issues that are not straightforward to address.

The first issue concerns the exact form of the cost function. Bayesian decision theory postulates that observers should minimize the expected loss. However, observers may reasonably prefer to minimize the maximum loss, minimize the variability of the losses, or optimize some other quantity. Therefore, behavior that is suboptimal according to standard Bayesian decision theory may be optimal according to other definitions. A related, and deeper, problem is that some observers may also try to minimize other quantities such as time spent, level of boredom, or metabolic energy expended (Lennie 2003). What appears to be a suboptimal decision on a specific task may be optimal when all of these other variables are taken into account (Beck et al. 2012; Bowers & Davis 2012a). Even the clearest cases of suboptimal decision rules (e.g., the self-consistency bias) could be construed as part of a broader optimality (e.g., being self-consistent may be important for other goals). In a Bayesian framework, taking extra variables into account requires that each of the LPCD components be defined over all of them. Pursued to its logical end, this reasoning leads to a cost function that operates over our entire evolutionary history. We do not think efforts to explore such cost functions should be abandoned, but specifying them quantitatively is impossible given our current knowledge.
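The dependence of "optimal" behavior on the choice of cost function can be made concrete with a toy example (a sketch with hypothetical loss values, not an analysis from any study cited here): given the same posterior and loss matrix, an observer who minimizes expected loss and one who minimizes worst-case (minimax) loss can make different choices.

```python
import numpy as np

# Toy two-stimulus task: the observer reports "A" or "B".
# Rows index the true stimulus (A, B); columns index the response (A, B).
# The loss values are hypothetical, chosen only for illustration.
loss = np.array([[0.0, 1.0],   # true A: correct = 0, miss of A = 1
                 [4.0, 0.0]])  # true B: false report of A = 4, correct = 0

posterior_A = 0.9  # posterior probability that the stimulus is A

# Bayesian rule: minimize expected loss under the posterior.
expected_loss = posterior_A * loss[0] + (1 - posterior_A) * loss[1]
bayes_choice = ["A", "B"][int(np.argmin(expected_loss))]

# Minimax rule: minimize the worst-case loss, ignoring the posterior.
worst_case = loss.max(axis=0)  # worst loss attainable for each response
minimax_choice = ["A", "B"][int(np.argmin(worst_case))]

print(bayes_choice, minimax_choice)  # the two rules disagree here
```

With these numbers the expected-loss rule picks "A" (expected losses 0.4 vs. 0.9), while the minimax rule picks "B" (worst cases 4 vs. 1) - so labeling either choice "suboptimal" presupposes a particular cost criterion.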

The second issue concerns whether optimality should depend on the likelihood, prior, and cost function adopted by the observer. In order to review a large literature using consistent assumptions, we defined a set of standard assumptions and labeled any deviation from these assumptions as suboptimal. This approach is by no means uncontroversial. For example, priors based on a lifetime of experience may be inflexible, so one could consider the standard assumption that observers follow the experimenter-defined prior overly restrictive. An alternative view is that suboptimal behavior concerns only deviations from the experimenter-defined quantities that are under observers' control (Tenenbaum & Griffiths 2006; Yu & Cohen 2009). The problem with this definition is that it introduces a new variable to consider – what exactly is truly under observers' control – which is often hard to determine. A third approach is to define optimality exclusively in terms of the decision rule, regardless of what likelihood, prior, and cost function the observer adopts. In this view, observers are under no obligation to follow the experimenter's instructions (e.g., they are free to bring in their own priors and cost function). The problem with this approach is that failing to adopt the proper prior or cost function can forfeit just as much objective reward as adopting an improper decision rule. Similar problems apply to "improper" likelihood functions: as an extreme example, a strategy in which the observer closes her eyes (resulting in a non-informative likelihood function) and chooses actions randomly would have to be labeled "optimal" because the decision rule is optimal. This ambiguity regarding the role of the likelihood, prior, and cost function points to the difficulty of constructing a general-purpose definition of optimality.

In short, optimality is impossible to define in the abstract. It is only well defined in the context of a set of specific assumptions, rendering general statements about the optimality (or suboptimality) of human perceptual decisions meaningless.

4.2. Optimality claims in and of themselves have limited value

The current emphasis on optimality is fueled by the belief that demonstrating optimality in perception provides us with important insight. On the contrary, simply stating that observers are optimal is of limited value for two main reasons.

First, it is unclear when a general statement about the optimality of perceptual decisions is supposed to apply. Although most experimental work focuses on very simple tasks, it is widely recognized that the computational complexity of many real-world tasks makes optimality unachievable by the brain (Bossaerts & Murawski 2017; Cooper 1990; Gershman et al. 2015; Tsotsos 1993; van Rooij 2008). Further, in many situations, the brain cannot be expected to have complete knowledge of the likelihood function, which all but guarantees that the decision rule will be suboptimal (Beck et al. 2012). (Attempting to incorporate observers' computational capacities or knowledge brings back the problems related to how one defines optimality discussed in sect. 4.1.) Therefore, general statements about optimality must be intended only for the simplest cases of perceptual decisions (although, as sect. 3 demonstrated, even for these cases, suboptimality is ubiquitous).

Second, even for a specific task, statements about optimality alone are insufficient to predict behavior. Instead, to predict future perceptual decisions, one needs to specify each part of the process underlying the decision. Within the Bayesian framework, for example, one needs to specify each LPCD component, which goes well beyond a statement that “observers are optimal.”

Is it useless to compare human performance to optimal performance? Absolutely not. Within the context of a specific model, demonstrating optimal or suboptimal performance is immensely helpful (Goodman et al. 2015; Tauber et al. 2017). Such demonstrations can support or challenge components of the model and suggest ways to alter the model to accommodate actual behavior. However, the critical part here is the model, not the optimality.

5. Toward a standard observer model

If there are so many empirical examples of suboptimality (sect. 3) and optimality can be challenging even to define (sect. 4), then what is the way forward?

5.1. Creating and testing observer models

Psychophysics has a long history of creating ideal observer models (Geisler 2011; Green & Swets 1966; Ulehla 1966). These models specify a set of assumptions about how sensory information is represented internally and add an optimal decision rule in order to generate predictions about behavior. The motivation behind these models has been to test the collective set of assumptions incorporated into the model. However, over time, the "ideal" part of ideal observer models has become dominant, culminating in the current outsized emphasis on demonstrating the optimality of the decision rule – what we call the optimality approach. Even frameworks such as "bounded rationality" (Gigerenzer & Selten 2002; Simon 1957) or "computational rationality" (Gershman et al. 2015), which explicitly concern themselves with the limitations of the decision-making process, still place the greatest emphasis on the optimality of the decision rule.

The emphasis on the decision rule in the optimality approach has led to an overly flexible treatment of the other LPCD components (Bowers & Davis 2012a). This issue is especially problematic because of the inherent degeneracy of Bayesian decision theory (Acerbi 2014): different combinations of the likelihood, prior, cost function, and decision rule can lead to the same expected loss. Further, for any likelihood, cost function, and decision rule, a prior can be found for which that decision rule is optimal (the complete class theorem) (Berger 1985; Jaynes 1957/2003).
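This degeneracy can be illustrated in the familiar equal-variance Gaussian signal detection setting (an illustrative sketch with made-up parameter values, not a model from any study cited here): a biased prior with symmetric costs and a flat prior with asymmetric costs yield exactly the same optimal criterion, and therefore identical predicted choices.

```python
import numpy as np

# Equal-variance Gaussian SDT: stimulus S1 yields measurements x ~ N(mu1, sigma),
# stimulus S2 yields x ~ N(mu2, sigma). The expected-loss-minimizing rule is a
# criterion c on x (respond "S2" iff x > c), with
#   c = (mu1 + mu2)/2 + sigma**2 / (mu2 - mu1) * log(odds),
# where odds = (p(S1)/p(S2)) * (cost of false alarm / cost of miss).
mu1, mu2, sigma = 0.0, 1.0, 1.0

def criterion(p_s1, cost_fa, cost_miss):
    odds = (p_s1 / (1 - p_s1)) * (cost_fa / cost_miss)
    return (mu1 + mu2) / 2 + sigma**2 / (mu2 - mu1) * np.log(odds)

# Two different combinations of prior and cost function...
c_a = criterion(p_s1=0.75, cost_fa=1.0, cost_miss=1.0)  # biased prior, equal costs
c_b = criterion(p_s1=0.50, cost_fa=3.0, cost_miss=1.0)  # flat prior, biased costs

# ...produce exactly the same criterion, hence identical choice behavior.
print(np.isclose(c_a, c_b))
```

Because the two models predict identical choices, they cannot be distinguished from choice data alone; only independent constraints on the prior or the cost function (e.g., from other tasks) can break the tie.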

To eliminate the flexibility of the optimality approach, the field should return to the original intention of building ideal observer models – namely, to test the collective set of assumptions incorporated into such models. To this end, we propose that researchers drop the “ideal” and shift emphasis to building, simply, “observer models.” Creating observer models should differ from the current optimality approach in two critical ways. First, whether or not the decision rule is optimal should be considered irrelevant. Second, the nature of the decision rule should not be considered more important than the nature of the other components.

These two simple changes address the pitfalls of the optimality approach. Within the optimality approach, a new finding is often modeled using flexibly chosen LPCD components (Bowers & Davis 2012a). Then, depending on the inferred decision rule, a conclusion is reached that observers are optimal (or suboptimal). At this point, the project is considered complete, and a general claim is made about optimality (or suboptimality). As others have pointed out, this approach has led to many "just-so stories" (Bowers & Davis 2012a), because the assumptions of the model are not rigorously tested. By contrast, when building observer models (e.g., in the Bayesian framework), a new finding is used to generate hypotheses about a particular LPCD component (Maloney & Mamassian 2009). Hypotheses about the likelihood, prior, or cost function are considered as important as hypotheses about the decision rule. Critically, unlike in the optimality approach, this step is considered just the beginning of the process. The hypotheses are then examined in detail while evidence is gathered for or against them. Researchers can formulate alternative hypotheses to explain a given data set and evaluate them using model comparison techniques. In addition, researchers can conduct follow-up experiments in which they test their hypotheses using different tasks, stimuli, and observers. Some researchers already follow this approach, and we believe the field would benefit from adopting it as standard practice. In Box 1, we list specific steps for implementing observer models within a Bayesian framework (the steps will be similar regardless of the framework).

Box 1. Implementing observer models within a Bayesian framework

  1. Describe the complete generative model, including assumptions about what information the observer is using to perform the task (e.g., stimulus properties, training, experimenter's instructions, feedback, explicit vs. implicit rewards, response time pressure, etc.).

  2. Specify the assumed likelihood function, prior, and cost function. If multiple options are plausible, test them in different models.

  3. Derive both the optimal decision rule and plausible alternative decision rules. Compare their abilities to fit the data.

  4. Interpret the results with respect to what has been learned about each LPCD component, not optimality per se. Specify how the conclusions depend on the assumptions about the other LPCD components.

  5. Most importantly, follow up on any new hypotheses about LPCD components with additional studies in order to avoid "just-so stories."

  6. New hypotheses that prove to be general eventually become part of the standard observer model (see sect. 5.2).

Two examples demonstrate the process of implementing observer models. A classic example concerns the existence of Gaussian variability in the measurement distribution. This assumption has been extensively tested for decades (Green & Swets 1966; Macmillan & Creelman 2005), thus eventually earning its place among the standard assumptions in the field. A second example comes from the literature on speed perception (sect. 3.8.5). A classic finding is that reducing the contrast of a slow-moving stimulus reduces its apparent speed (Stone & Thompson 1992; Thompson 1982). A popular Bayesian explanation for this effect is that most objects in natural environments are stationary, so the visual system has a prior for slow speeds. Consequently, when sensory information is uncertain, as occurs at low contrasts, slow-biased speed perception could be considered "optimal" (Weiss et al. 2002). Importantly, rather than stopping at this claim, researchers have investigated the hypothetical slow-motion prior in follow-up studies. One study quantitatively inferred observers' prior speed distributions under the assumption of a Bayesian decision rule (Stocker & Simoncelli 2006a). Other researchers tested the slow-motion prior and found that, contrary to its predictions, high-speed motion at low contrast can appear to move faster than its physical speed (Hammett et al. 2007; Hassan & Hammett 2015; Thompson et al. 2006). These latter studies challenged the generality of the slow-motion prior hypothesis.
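To show how such a hypothesis becomes a testable model component: for a Gaussian likelihood combined with a Gaussian slow-speed prior centered at zero, the posterior mean reduces to shrinkage of the measured speed toward zero, with more shrinkage when the likelihood is noisier (i.e., at lower contrast). The sketch below uses entirely hypothetical parameter values and is not fit to any data from the studies cited here.

```python
# With a Gaussian likelihood N(m, sigma_l**2) for the measured speed m and a
# slow-speed prior N(0, sigma_p**2), the posterior mean is the precision-weighted
# average of the measurement and the prior mean (zero), i.e., shrinkage toward 0.
def posterior_speed(measured, sigma_likelihood, sigma_prior=1.0):
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * measured

v = 2.0  # hypothetical physical speed (arbitrary units)
high_contrast = posterior_speed(v, sigma_likelihood=0.2)  # precise likelihood
low_contrast = posterior_speed(v, sigma_likelihood=1.0)   # noisy likelihood

print(high_contrast > low_contrast)  # low contrast -> slower perceived speed
```

This simple form makes the hypothesis falsifiable: it predicts that lowering contrast should always reduce apparent speed, which is exactly the prediction that the high-speed findings of Hammett et al. (2007) and Thompson et al. (2006) challenged.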

5.2. Creating a standard observer model

We believe that an overarching goal of the practice of creating and testing observer models is the development of a standard observer model that predicts observers' behavior on a wide variety of perceptual tasks. Such a model would be a significant achievement for the science of perceptual decision making. It is difficult – perhaps impossible – to anticipate what form the standard observer model will take. It may be a Bayesian model (Maloney & Mamassian 2009), a "bag of tricks" (Ramachandran 1990), a neural network (Yamins et al. 2014), and so forth. However, regardless of the framework in which they were originally formulated, hypotheses with overwhelming empirical support will become part of the standard observer model. In this context, perhaps the most damaging aspect of the current outsized emphasis on optimality is that, although it has generated many hypotheses, few of them have received sufficient subsequent attention to justify inclusion in (or exclusion from) the eventual standard observer model.

We suggest that immediate progress can be made by a concerted effort to test the hypotheses that have already been proposed to explain suboptimal decisions. To facilitate this effort, here we compile the hypotheses generated in the course of explaining the findings from section 3. Within a Bayesian framework, these hypotheses relate to the likelihood function, prior, cost function, or decision rule (the LPCD components). Further, a few of them are general and apply to several LPCD components, and a few are methodological considerations. In some cases, essentially the same hypothesis was offered in the context of several different empirical effects. We summarize these hypotheses in Table 1. Note that the table by no means exhaustively covers all existing hypotheses that deserve to be thoroughly tested.

Table 1. Summary of hypotheses proposed to account for suboptimal decisions.

Table 1 classifies instances of deficient learning as methodological issues. This choice is not meant to downplay the problem of learning. Questions of how observers acquire their priors and cost functions are of utmost importance, and meaningful progress has already been made on this front (Acerbi et al. 2012; 2014b; Beck et al. 2012; Geisler & Najemnik 2013; Gekas et al. 2013; Seriès & Seitz 2013). Here we categorize deficient learning as a methodological issue when, because of the experimental setup, an observer cannot acquire the relevant knowledge even though she has the capacity to do so.

Future research should avoid the methodological issues from Table 1. In particular, great care must be taken to ensure that observers’ assumptions in performing a task match exactly the assumptions implicit in the analysis.

We have stated the hypotheses in Table 1 at a fairly high level to succinctly capture the broad categories from our review. Much of the work ahead will be to break each high-level hypothesis down into multiple, specific hypotheses and incorporate these hypotheses into observer models. For example, statements about "inappropriate priors" or "capacity limitations" prompt more fine-grained hypotheses about specific priors or limitations whose ability to predict behavior can be tested. Some hypotheses, like capacity limitations, have already been investigated extensively – for example, in studies of attention and working memory (e.g., Carrasco 2011; Cowan 2005). Turning our existing knowledge of these phenomena into concrete observer models that predict perceptual decisions is an exciting direction for the field. Other hypotheses, like placing a premium on accuracy, have not been tested extensively and therefore should still be considered "just-so stories" (Bowers & Davis 2012a). Hence, the real work ahead lies in verifying, rejecting, and expanding the hypotheses generated from findings of suboptimal perceptual decisions.

5.3. Implications of abandoning the optimality approach

Abandoning the optimality approach has at least two immediate implications for research practices.

First, researchers should stop focusing on optimality. What should be advertised in the title and abstract of a paper is not the optimality but what is learned about the components of the perceptual process. One of the central questions in perceptual decision making is how best to characterize the sources that corrupt decisions (Beck et al. 2012; Drugowitsch et al. 2016; Hanks & Summerfield 2017; Wyart & Koechlin 2016). By shifting attention away from optimality, the effort to build complete observer models sharpens the focus on this question.

Second, new model development should not unduly emphasize optimal models. According to some Bayesian theorists, models that assume optimal behavior are intrinsically preferable to models that do not. This preference stems from the argument that because people can approximate optimal behavior on some tasks, they must possess the machinery for fully optimal decisions (Drugowitsch & Pouget 2012). Many models have been judged positively for supporting optimal decision rules: probabilistic population codes for allowing optimal cue combination (Ma et al. 2006), neural sampling models for allowing marginalization (which is needed in many optimal decision rules) (Fiser et al. 2010), and drift diffusion models for allowing optimal integration of information across time (Bogacz 2007). The large body of findings of suboptimality reviewed here, however, should make this reasoning suspect: if the brain is built to make optimal decisions, then why does it produce so many suboptimal ones? It is also important to remember that close-to-optimal behavior can also be produced by suboptimal decision rules (Bowers & Davis 2012a; Maloney & Mamassian 2009; Shen & Ma 2016). Influential theories postulate that evolutionary pressures produced heuristic but useful, rather than normative, behavior (Gigerenzer & Brighton 2009; Juslin et al. 2009; Simon 1956). Therefore, a model should be judged only on its ability to describe actual behavior, not on its ability to support optimal decision rules.

6. Conclusion

Are perceptual decisions optimal? A substantial body of research appears to answer this question in the affirmative. Here we showed instead that every category of perceptual tasks that lends itself to optimality analysis features numerous findings of suboptimality. Perceptual decisions cannot therefore be claimed to be optimal in general. In addition, independent of the empirical case against optimality, we questioned whether a focus on optimality per se can lead to any real progress. Instead, we advocated for a return to building complete observer models with an equal focus on all model components. Researchers should aim for their models to capture all of the systematic weirdness of human behavior rather than preserve an aesthetic ideal. To facilitate this effort, we compiled the hypotheses generated in the effort to explain the findings of suboptimality reviewed here. The real work ahead lies in testing these hypotheses, with the ultimate goal of developing a standard observer model of perceptual decision making.

Acknowledgments

We thank Luigi Acerbi, William Adler, Stephanie Badde, Michael Landy, Wei Ji Ma, Larry Maloney, Brian Maniscalco, Jonathan Winawer, and five reviewers for many helpful comments and discussions. D. Rahnev was supported by a start-up grant from Georgia Institute of Technology. R. N. Denison was supported by the National Institutes of Health National Eye Institute grants F32 EY025533 to R.N.D. and T32 EY007136 to New York University.

Footnotes

Authors D. Rahnev and R. N. Denison contributed equally to this work.

References

Abrahamyan, A., Luz Silva, L., Dakin, S. C., Carandini, M. & Gardner, J. L. (2016) Adaptable history biases in human perceptual decisions. Proceedings of the National Academy of Sciences of the United States of America 113(25):E3548–57. Available at: http://www.pnas.org/lookup/doi/10.1073/pnas.1518786113.
Abrams, J., Barbot, A. & Carrasco, M. (2010) Voluntary attention increases perceived spatial frequency. Attention, Perception, & Psychophysics 72(6):1510–21. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=20675797&retmode=ref&cmd=prlinks.
Acerbi, L. (2014) Complex internal representations in sensorimotor decision making: A Bayesian investigation. University of Edinburgh. Available at: https://www.era.lib.ed.ac.uk/bitstream/handle/1842/16233/Acerbi2015.pdf?sequence=1&isAllowed=y.
Acerbi, L., Vijayakumar, S. & Wolpert, D. M. (2014b) On the origins of suboptimality in human probabilistic inference. PLoS Computational Biology 10(6):e1003661. Available at: https://doi.org/10.1371/journal.pcbi.1003661.
Acerbi, L., Wolpert, D. M. & Vijayakumar, S. (2012) Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing. PLoS Computational Biology 8(11):e1002771. Available at: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002771.
Ackermann, J. F. & Landy, M. S. (2015) Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards. Attention, Perception & Psychophysics 77(2):638–58. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25366822.
Adams, J. K. (1957) A confidence scale defined in terms of expected percentages. American Journal of Psychology 70(3):432–36.
Adams, W. J. (2016) The development of audio-visual integration for temporal judgements. PLOS Computational Biology 12(4):e1004865. Available at: http://dx.plos.org/10.1371/journal.pcbi.1004865.
Adelson, E. H. (1993) Perceptual organization and the judgment of brightness. Science 262(5142):2042–44.
Adler, W. T. & Ma, W. J. (2018a) Comparing Bayesian and non-Bayesian accounts of human confidence reports. PLoS Computational Biology. Available at: https://doi.org/10.1371/journal.pcbi.1006572.
Adler, W. T. & Ma, W. J. (2018b) Limitations of proposed signatures of Bayesian confidence. Neural Computation 30(12):3327–54. Available at: https://www.mitpressjournals.org/doi/abs/10.1162/neco_a_01141.
Ais, J., Zylberberg, A., Barttfeld, P. & Sigman, M. (2015) Individual consistency in the accuracy and distribution of confidence judgments. Cognition 146:377–86. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26513356.
Aitchison, L., Bang, D., Bahrami, B. & Latham, P. E. (2015) Doubly Bayesian analysis of confidence in perceptual decision-making. PLoS Computational Biology 11(10):e1004519.
Alais, D. & Burr, D. (2004) The ventriloquist effect results from near-optimal bimodal integration. Current Biology 14(3):257–62. doi:10.1016/j.cub.2004.01.029.
Allen, M., Frank, D., Schwarzkopf, D. S., Fardo, F., Winston, J. S., Hauser, T. U. & Rees, G. (2016) Unexpected arousal modulates the influence of sensory noise on confidence. eLife 5:e18103. Available at: http://elifesciences.org/lookup/doi/10.7554/eLife.18103.
Anderson, B. L., O'Vari, J. & Barth, H. (2011) Non-Bayesian contour synthesis. Current Biology 21(6):492–96. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0960982211001746.
Anton-Erxleben, K., Henrich, C. & Treue, S. (2007) Attention changes perceived size of moving visual patterns. Journal of Vision 7(11):5.1–9. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17997660&retmode=ref&cmd=prlinks.
Anton-Erxleben, K., Herrmann, K. & Carrasco, M. (2013) Independent effects of adaptation and attention on perceived speed. Psychological Science 24(2):150–59. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=23241456&retmode=ref&cmd=prlinks.
Balcetis, E. (2016) Approach and avoidance as organizing structures for motivated distance perception. Emotion Review 8(2):115–28. Available at: https://doi.org/10.1177/1754073915586225.
Balcı, F., Simen, P., Niyogi, R., Saxe, A., Hughes, J. A., Holmes, P. & Cohen, J. D. (2011b) Acquisition of decision making criteria: Reward rate ultimately beats accuracy. Attention, Perception & Psychophysics 73(2):640–57. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3383845&tool=pmcentrez&rendertype=abstract.
Bang, J. W. & Rahnev, D. (2017) Stimulus expectation alters decision criterion but not sensory signal in perceptual decision making. Scientific Reports 7:17072. Available at: http://www.nature.com/articles/s41598-017-16885-2.
Bang, J. W., Shekhar, M. & Rahnev, D. (in press) Sensory noise increases metacognitive efficiency. Journal of Experimental Psychology. Available at: http://dx.doi.org/10.1037/xge0000511.
Baranski, J. V. & Petrusic, W. M. (1994) The calibration and resolution of confidence in perceptual judgments. Perception & Psychophysics 55(4):412–28. Available at: http://www.ncbi.nlm.nih.gov/pubmed/8036121.
Baranski, J. V. & Petrusic, W. M. (1995) On the calibration of knowledge and perception. Canadian Journal of Experimental Psychology 49(3):397–407. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9183984.
Baranski, J. V. & Petrusic, W. M. (1999) Realism of confidence in sensory discrimination. Perception & Psychophysics 61(7):1369–83. Available at: http://www.ncbi.nlm.nih.gov/pubmed/10572465.
Barlow, H. B. (1961) Possible principles underlying the transformation of sensory messages. In: Sensory communication, ed. Rosenblith, W. A., pp. 217–34. MIT Press.
Barlow, H. B. (1990) A theory about the functional role and synaptic mechanism of visual after-effects. In: Vision: Coding and efficiency, ed. Blakemore, C., pp. 363–75. Cambridge University Press. Available at: http://books.google.com/books?hl=en&lr=&id=xGJ_DxN3eygC&oi=fnd&pg=PA363&dq=a+theory+about+the+functional+role+and+synaptic+mechanism+of+visual+after+effects&ots=VsSUzK0vpB&sig=lZX28LU68XpGk9T8zoLwY8WOJBs.
Battaglia, P. W., Jacobs, R. A. & Aslin, R. N. (2003) Bayesian integration of visual and auditory signals for spatial localization. Journal of the Optical Society of America A, Optics and Image Science 20(7):1391–97.
Battaglia, P. W., Kersten, D. & Schrater, P. R. (2011) How haptic size sensations improve distance perception. PLoS Computational Biology 7(6):e1002080. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=21738457&retmode=ref&cmd=prlinks.
Bays, P. M. & Dowding, B. A. (2017) Fidelity of the representation of value in decision-making. PLoS Computational Biology 13(3):e1005405. Available at: http://www.ncbi.nlm.nih.gov/pubmed/28248958.
Beck, J. M., Ma, W. J., Pitkow, X., Latham, P. E. & Pouget, A. (2012) Not noisy, just wrong: The role of suboptimal inference in behavioral variability. Neuron 74(1):30–39. Available at: https://doi.org/10.1016/j.neuron.2012.03.016.
Berger, J. O. (1985) Statistical decision theory and Bayesian analysis. Springer.
Berliner, J. E. & Durlach, N. I. (1973) Intensity perception. IV. Resolution in roving-level discrimination. Journal of the Acoustical Society of America 53(5):1270–87. Available at: http://www.ncbi.nlm.nih.gov/pubmed/4712555.
Björkman, M., Juslin, P. & Winman, A. (1993) Realism of confidence in sensory discrimination: The underconfidence phenomenon. Perception & Psychophysics 54(1):75–81. Available at: http://www.ncbi.nlm.nih.gov/pubmed/8351190.
Bogacz, R. (2007) Optimal decision-making theories: Linking neurobiology with behaviour. Trends in Cognitive Sciences 11(3):118–25. Available at: http://www.sciencedirect.com/science/article/pii/S1364661307000290.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. (2006) The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review 113(4):700–65. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/0033-295X.113.4.700.
Bogacz, R., Hu, P. T., Holmes, P. J. & Cohen, J. D. (2010) Do humans produce the speed-accuracy trade-off that maximizes reward rate? Quarterly Journal of Experimental Psychology 63(5):863–91. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2908414&tool=pmcentrez&rendertype=abstract.
Bohil, C. J. & Maddox, W. T. (2001) Category discriminability, base-rate, and payoff effects in perceptual categorization. Perception & Psychophysics 63(2):361–76.
Bohil, C. J. & Maddox, W. T. (2003a) On the generality of optimal versus objective classifier feedback effects on decision criterion learning in perceptual categorization. Memory & Cognition 31(2):181–98.
Bohil, C. J. & Maddox, W. T. (2003b) A test of the optimal classifier's independence assumption in perceptual categorization. Perception & Psychophysics 65(3):478–93.
Bossaerts, P. & Murawski, C. (2017) Computational complexity and human decision-making. Trends in Cognitive Sciences 21(12):917–29. Available at: http://dx.doi.org/10.1016/j.tics.2017.09.005.
Bowers, J. S. & Davis, C. J. (2012a) Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin 138(3):389–414. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=22545686&retmode=ref&cmd=prlinks.
Bowers, J. S. & Davis, C. J. (2012b) Is that what Bayesians believe? Reply to Griffiths, Chater, Norris, and Pouget (2012). Psychological Bulletin 138(3):423–26. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/a0027750.
Brainard, D. H., Longère, P., Delahunt, P. B., Freeman, W. T., Kraft, J. M. & Xiao, B. (2006) Bayesian model of human color constancy. Journal of Vision 6(11):1267–81.
Brayanov, J. B. & Smith, M. A. (2010) Bayesian and ‘anti-Bayesian’ biases in sensory integration for action and perception in the size-weight illusion. Journal of Neurophysiology 103(3):1518–31. Available at: http://jn.physiology.org/cgi/doi/10.1152/jn.00814.2009.
Brenner, N., Bialek, W. & de Ruyter van Steveninck, R. (2000) Adaptive rescaling maximizes information transmission. Neuron 26(3):695–702. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=10896164&retmode=ref&cmd=prlinks.
Bronfman, Z. Z., Brezis, N., Moran, R., Tsetsos, K., Donner, T. & Usher, M. (2015) Decisions reduce sensitivity to subsequent information. Proceedings of the Royal Society B: Biological Sciences 282(1810):20150228. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26108628.
Brooke, J. B. & MacRae, A. W. (1977) Error patterns in the judgment and production of numerical proportions. Perception & Psychophysics 21(4):336–40. Available at: http://www.springerlink.com/index/10.3758/BF03199483.
Bülthoff, H. H. & Mallot, H. A. (1988) Integration of depth modules: Stereo and shading. Journal of the Optical Society of America A, Optics and Image Science 5(10):1749–58. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=3204438&retmode=ref&cmd=prlinks.
Burr, D., Banks, M. S. & Morrone, M. C. (2009) Auditory dominance over vision in the perception of interval duration. Experimental Brain Research 198(1):49–57. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=19597804&retmode=ref&cmd=prlinks.
Busemeyer, J. R. & Myung, I. J. (1992) An adaptive approach to human decision making: Learning theory, decision theory, and human performance. Journal of Experimental Psychology: General 121(2):177–94. Available at: http://psycnet.apa.org/journals/xge/121/2/177.html.
Busse, L., Ayaz, A., Dhruv, N. T., Katzner, S., Saleem, A. B., Schölvinck, M. L., Zaharia, A. D. & Carandini, M. (2011) The detection of visual contrast in the behaving mouse. Journal of Neuroscience 31(31):11351–61. Available at: http://www.jneurosci.org/cgi/doi/10.1523/JNEUROSCI.6689-10.2011.Google Scholar
Carandini, M. & Heeger, D. J. (2012) Normalization as a canonical neural computation. Nature Reviews Neuroscience 13(1):5162. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3273486&tool=pmcentrez&rendertype=abstract.Google Scholar
Carrasco, M. (2011) Visual attention: The past 25 years. Vision Research 51(13):1484–525. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3390154&tool=pmcentrez&rendertype=abstract.Google Scholar
Carrasco, M., Ling, S. & Read, S. (2004) Attention alters appearance. Nature Neuroscience 7(3):308–13. Available at: http://www.ncbi.nlm.nih.gov/pubmed/14966522.Google Scholar
Charles, L., Van Opstal, F., Marti, S. & Dehaene, S. (2013) Distinct brain mechanisms for conscious versus subliminal error detection. NeuroImage 73:8094. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23380166.Google Scholar
Cheadle, S., Wyart, V., Tsetsos, K., Myers, N., de Gardelle, V., Castañón, S. H. & Summerfield, C. (2014) Adaptive gain control during human perceptual choice. Neuron 81(6):1429–41. Available at: http://www.cell.com/article/S0896627314000518/fulltext.Google Scholar
Chen, C.-C. & Tyler, C. W. (2015) Shading beats binocular disparity in depth from luminance gradients: Evidence against a maximum likelihood principle for cue combination. PLoS ONE 10(8):e0132658. Available at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132658.Google Scholar
Chiang, T.-C., Lu, R.-B., Hsieh, S., Chang, Y.-H. & Yang, Y.-K. (2014) Stimulation in the dorsolateral prefrontal cortex changes subjective evaluation of percepts. PLoS ONE 9(9):e106943. Available at: http://dx.plos.org/10.1371/journal.pone.0106943.Google Scholar
Clark, J. J. & Yullie, A. L. (1990) Data fusion for sensory information processing. Kluwer Academic.Google Scholar
Cooper, G. F. (1990) The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42(2–3):393405.Google Scholar
Cowan, N. (2005) Working memory capacity. Psychology Press.
Creelman, C. D. & Macmillan, N. A. (1979) Auditory phase and frequency discrimination: A comparison of nine procedures. Journal of Experimental Psychology: Human Perception and Performance 5(1):146–56. Available at: http://www.ncbi.nlm.nih.gov/pubmed/528924.
Dakin, S. C. (2001) Information limit on the spatial integration of local orientation signals. Journal of the Optical Society of America A, Optics and Image Science 18(5):1016–26. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=11336204&retmode=ref&cmd=prlinks.
Davis-Stober, C. P., Park, S., Brown, N. & Regenwetter, M. (2016) Reported violations of rationality may be aggregation artifacts. Proceedings of the National Academy of Sciences of the United States of America 113(33):E4761–63. Available at: http://www.ncbi.nlm.nih.gov/pubmed/27462103.
Dawes, R. M. (1980) Confidence in intellectual vs. confidence in perceptual judgments. In: Similarity and choice: Papers in honor of Clyde Coombs, ed. Lantermann, E. D. & Feger, H., pp. 327–45. Hans Huber.
Dayan, P. (2014) Rationalizable irrationalities of choice. Topics in Cognitive Science 6(2):204–28. Available at: http://doi.wiley.com/10.1111/tops.12082.
de Gardelle, V. & Mamassian, P. (2015) Weighting mean and variability during confidence judgments. PLoS ONE 10(3):e0120870. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4368758&tool=pmcentrez&rendertype=abstract.
de Gardelle, V. & Summerfield, C. (2011) Robust averaging during perceptual judgment. Proceedings of the National Academy of Sciences of the United States of America 108(32):13341–46. doi:10.1073/pnas.1104517108.
Dekker, T. M., Ban, H., van der Velde, B., Sereno, M. I., Welchman, A. E. & Nardini, M. (2015) Late development of cue integration is linked to sensory fusion in cortex. Current Biology 25(21):2856–61. Available at: https://doi.org/10.1016/j.cub.2015.09.043.
de Lange, F. P., Rahnev, D., Donner, T. H. & Lau, H. (2013) Prestimulus oscillatory activity over motor cortex reflects perceptual expectations. Journal of Neuroscience 33(4):1400–10. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23345216.
Del Cul, A., Dehaene, S., Reyes, P., Bravo, E. & Slachevsky, A. (2009) Causal role of prefrontal cortex in the threshold for access to consciousness. Brain 132(Pt. 9):2531–40. Available at: http://www.ncbi.nlm.nih.gov/pubmed/19433438.
Denison, R. N., Adler, W. T., Carrasco, M. & Ma, W. J. (2018) Humans incorporate attention-dependent uncertainty into perceptual decisions and confidence. Proceedings of the National Academy of Sciences of the United States of America 115(43):11090–95. doi:10.1073/pnas.1717720115.
Drugowitsch, J., DeAngelis, G. C., Angelaki, D. E. & Pouget, A. (2015) Tuning the speed-accuracy trade-off to maximize reward rate in multisensory decision-making. eLife 4:e06678. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26090907.
Drugowitsch, J., DeAngelis, G. C., Klier, E. M., Angelaki, D. E. & Pouget, A. (2014a) Optimal multisensory decision-making in a reaction-time task. eLife 3:e03005. Available at: http://elifesciences.org/content/early/2014/06/14/eLife.03005.abstract.
Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N. & Pouget, A. (2012) The cost of accumulating evidence in perceptual decision making. Journal of Neuroscience 32(11):3612–28. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3329788&tool=pmcentrez&rendertype=abstract.
Drugowitsch, J., Moreno-Bote, R. & Pouget, A. (2014b) Relation between belief and performance in perceptual decision making. PLoS ONE 9(5):e96511. Available at: http://dx.plos.org/10.1371/journal.pone.0096511.
Drugowitsch, J. & Pouget, A. (2012) Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making. Current Opinion in Neurobiology 22(6):963–69. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3513621&tool=pmcentrez&rendertype=abstract.
Drugowitsch, J., Wyart, V., Devauchelle, A.-D. & Koechlin, E. (2016) Computational precision of mental inference as critical source of human choice suboptimality. Neuron 92(6):1398–411. Available at: http://dx.doi.org/10.1016/j.neuron.2016.11.005.
Eberhardt, F. & Danks, D. (2011) Confirmation in the cognitive sciences: The problematic case of Bayesian models. Minds and Machines 21(3):389–410. Available at: http://link.springer.com/10.1007/s11023-011-9241-3.
Eckstein, M. P. (2011) Visual search: A retrospective. Journal of Vision 11(5):14. Available at: http://www.journalofvision.org/content/11/5/14.abstract.
Ernst, M. O. & Banks, M. S. (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870):429–33. Available at: http://dx.doi.org/10.1038/415429a.
Evans, K. K., Birdwell, R. L. & Wolfe, J. M. (2013) If you don't find it often, you often don't find it: Why some cancers are missed in breast cancer screening. PLoS ONE 8(5):e64366. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3667799&tool=pmcentrez&rendertype=abstract.
Evans, K. K., Tambouret, R. H., Evered, A., Wilbur, D. C. & Wolfe, J. M. (2011) Prevalence of abnormalities influences cytologists’ error rates in screening for cervical cancer. Archives of Pathology & Laboratory Medicine 135(12):1557–60. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3966132&tool=pmcentrez&rendertype=abstract.
Fechner, G. T. (1860) Elemente der psychophysik. Breitkopf und Härtel.
Feng, S., Holmes, P., Rorie, A. & Newsome, W. T. (2009) Can monkeys choose optimally when faced with noisy stimuli and unequal rewards? PLoS Computational Biology 5(2):e1000284. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2631644&tool=pmcentrez&rendertype=abstract.
Fetsch, C. R., Pouget, A., DeAngelis, G. C. & Angelaki, D. E. (2012) Neural correlates of reliability-based cue weighting during multisensory integration. Nature Neuroscience 15(1):146–54. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=22101645&retmode=ref&cmd=prlinks.
Firestone, C. & Scholl, B. J. (2016) Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences 39:e229. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26189677.
Fischer, J. & Whitney, D. (2014) Serial dependence in visual perception. Nature Neuroscience 17(5):738–43. Available at: http://dx.doi.org/10.1038/nn.3689.
Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. (2010) Statistically optimal perception and learning: From behavior to neural representations. Trends in Cognitive Sciences 14(3):119–30. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2939867&tool=pmcentrez&rendertype=abstract.
Fitts, P. M. (1966) Cognitive aspects of information processing: III. Set for speed versus accuracy. Journal of Experimental Psychology 71(6):849–57.
Fleming, S. M. & Daw, N. D. (2017) Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation. Psychological Review 124(1):91–114. Available at: http://doi.org/10.1037/rev0000045.
Fleming, S. M. & Lau, H. (2014) How to measure metacognition. Frontiers in Human Neuroscience 8:443. Available at: http://journal.frontiersin.org/Journal/10.3389/fnhum.2014.00443/abstract.
Fleming, S. M., Maloney, L. T. & Daw, N. D. (2013) The irrationality of categorical perception. Journal of Neuroscience 33(49):19060–70. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=24305804&retmode=ref&cmd=prlinks.
Fleming, S. M., Maniscalco, B. & Ko, Y. (2015) Action-specific disruption of perceptual confidence. Psychological Science 26(1):89–98. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25425059.
Fleming, S. M., Massoni, S., Gajdos, T. & Vergnaud, J.-C. (2016) Metacognition about the past and future: Quantifying common and distinct influences on prospective and retrospective judgments of self-performance. Neuroscience of Consciousness 2016(1):niw018. Available at: https://academic.oup.com/nc/article-lookup/doi/10.1093/nc/niw018.
Fleming, S. M., Ryu, J., Golfinos, J. G. & Blackmon, K. E. (2014) Domain-specific impairment in metacognitive accuracy following anterior prefrontal lesions. Brain 137(10):2811–22. Available at: http://brain.oxfordjournals.org/content/early/2014/08/06/brain.awu221.long.
Forstmann, B. U., Ratcliff, R. & Wagenmakers, E.-J. (2016) Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions. Annual Review of Psychology 67:641–66. Available at: http://www.annualreviews.org/eprint/2stAyEdsCkSk9MpsHMDV/full/10.1146/annurev-psych-122414-033645.
Fritsche, M., Mostert, P. & de Lange, F. P. (2017) Opposite effects of recent history on perception and decision. Current Biology 27(4):590–95. Available at: http://dx.doi.org/10.1016/j.cub.2017.01.006.
Fründ, I., Wichmann, F. A. & Macke, J. H. (2014) Quantifying the effect of intertrial dependence on perceptual decisions. Journal of Vision 14(7):9. doi:10.1167/14.7.9.
Fuller, S., Park, Y. & Carrasco, M. (2009) Cue contrast modulates the effects of exogenous attention on appearance. Vision Research 49(14):1825–37. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=19393260&retmode=ref&cmd=prlinks.
Ganmor, E., Landy, M. S. & Simoncelli, E. P. (2015) Near-optimal integration of orientation information across saccades. Journal of Vision 15(16):8. Available at: http://jov.arvojournals.org/article.aspx?doi=10.1167/15.16.8.
Garcia, S. E., Jones, P. R., Reeve, E. I., Michaelides, M., Rubin, G. S. & Nardini, M. (2017) Multisensory cue combination after sensory loss: Audio-visual localization in patients with progressive retinal disease. Journal of Experimental Psychology: Human Perception and Performance 43(4):729–40. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/xhp0000344.
García-Pérez, M. A. & Alcalá-Quintana, R. (2010) The difference model with guessing explains interval bias in two-alternative forced-choice detection procedures. Journal of Sensory Studies 25(6):876–98. Available at: http://doi.wiley.com/10.1111/j.1745-459X.2010.00310.x.
García-Pérez, M. A. & Alcalá-Quintana, R. (2011) Interval bias in 2AFC detection tasks: Sorting out the artifacts. Attention, Perception, & Psychophysics 73(7):2332–52. Available at: http://www.springerlink.com/index/10.3758/s13414-011-0167-x.
Geisler, W. S. (2011) Contributions of ideal observer theory to vision research. Vision Research 51(7):771–81.
Geisler, W. S. & Najemnik, J. (2013) Optimal and non-optimal fixation selection in visual search. Perception ECVP Abstract 42:226. Available at: http://www.perceptionweb.com/abstract.cgi?id=v130805.
Gekas, N., Chalk, M., Seitz, A. R. & Series, P. (2013) Complexity and specificity of experimentally-induced expectations in motion perception. Journal of Vision 13(4):8. Available at: http://jov.arvojournals.org/article.aspx?articleid=2121832.
Gepshtein, S., Burge, J., Ernst, M. O. & Banks, M. S. (2005) The combination of vision and touch depends on spatial proximity. Journal of Vision 5(11):1013–23. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16441199&retmode=ref&cmd=prlinks.
Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. (2015) Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349(6245):273–78. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26185246.
Gibson, J. J. & Radner, M. (1937) Adaptation, after-effect and contrast in the perception of tilted lines. I. Quantitative studies. Journal of Experimental Psychology 20(5):453–67. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/h0059826.
Gigerenzer, G. & Brighton, H. (2009) Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science 1(1):107–43.
Gigerenzer, G., Hoffrage, U. & Kleinbölting, H. (1991) Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review 98(4):506–28. Available at: http://www.ncbi.nlm.nih.gov/pubmed/1961771.
Gigerenzer, G. & Selten, R. (2002) Bounded rationality. MIT Press.
Girshick, A. R., Landy, M. S. & Simoncelli, E. P. (2011) Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics. Nature Neuroscience 14(7):926–32. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3125404&tool=pmcentrez&rendertype=abstract.
Glennerster, A., Tcheang, L., Gilson, S. J., Fitzgibbon, A. W. & Parker, A. J. (2006) Humans ignore motion and stereo cues in favor of a fictional stable world. Current Biology 16(4):428–32. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16488879&retmode=ref&cmd=prlinks.
Gobell, J. & Carrasco, M. (2005) Attention alters the appearance of spatial frequency and gap size. Psychological Science 16(8):644–51. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16102068&retmode=ref&cmd=prlinks.
Gold, J. M., Murray, R. F., Bennett, P. J. & Sekuler, A. B. (2000) Deriving behavioural receptive fields for visually completed contours. Current Biology 10(11):663–66. Available at: http://www.sciencedirect.com/science/article/pii/S0960982200005236.
Goodman, N. D., Frank, M. C., Griffiths, T. L., Tenenbaum, J. B., Battaglia, P. W. & Hamrick, J. B. (2015) Relevant and robust: A response to Marcus and Davis (2013). Psychological Science 26(4):539–41. Available at: http://pss.sagepub.com/lookup/doi/10.1177/0956797614559544.
Gorea, A., Caetta, F. & Sagi, D. (2005) Criteria interactions across visual attributes. Vision Research 45(19):2523–32. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15950255.
Gorea, A. & Sagi, D. (2000) Failure to handle more than one internal representation in visual detection tasks. Proceedings of the National Academy of Sciences of the United States of America 97(22):12380–84. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=17350&tool=pmcentrez&rendertype=abstract.
Gorea, A. & Sagi, D. (2001) Disentangling signal from noise in visual contrast discrimination. Nature Neuroscience 4(11):1146–50. Available at: http://www.ncbi.nlm.nih.gov/pubmed/11687818.
Gorea, A. & Sagi, D. (2002) Natural extinction: A criterion shift phenomenon. Visual Cognition 9(8):913–36.
Gori, M., Del Viva, M., Sandini, G. & Burr, D. C. (2008) Young children do not integrate visual and haptic form information. Current Biology 18(9):694–98. Available at: https://doi.org/10.1016/j.cub.2008.04.036.
Green, D. M. & Swets, J. A. (1966) Signal detection theory and psychophysics. John Wiley & Sons.
Griffin, D. & Tversky, A. (1992) The weighing of evidence and the determinants of confidence. Cognitive Psychology 24(3):411–35. Available at: http://www.sciencedirect.com/science/article/pii/001002859290013R.
Griffiths, T. L., Chater, N., Norris, D. & Pouget, A. (2012) How the Bayesians got their beliefs (and what those beliefs actually are): Comment on Bowers and Davis (2012). Psychological Bulletin 138(3):415–22. Available at: https://doi.org/10.1037/a0026884.
Griffiths, T. L., Lieder, F. & Goodman, N. D. (2015) Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science 7(2):217–29. Available at: http://doi.wiley.com/10.1111/tops.12142.
Grzywacz, N. M. & Balboa, R. M. (2002) A Bayesian framework for sensory adaptation. Neural Computation 14(3):543–59. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=11860682&retmode=ref&cmd=prlinks.
Gu, Y., Angelaki, D. E. & DeAngelis, G. C. (2008) Neural correlates of multisensory cue integration in macaque MSTd. Nature Neuroscience 11(10):1201–10. Available at: http://www.nature.com/doifinder/10.1038/nn.2191.
Hammett, S. T., Champion, R. A., Thompson, P. G. & Morland, A. B. (2007) Perceptual distortions of speed at low luminance: Evidence inconsistent with a Bayesian account of speed encoding. Vision Research 47(4):564–68. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17011014.
Hanks, T. D., Mazurek, M. E., Kiani, R., Hopp, E. & Shadlen, M. N. (2011) Elapsed decision time affects the weighting of prior probability in a perceptual decision task. Journal of Neuroscience 31(17):6339–52. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3356114&tool=pmcentrez&rendertype=abstract.
Hanks, T. D. & Summerfield, C. (2017) Perceptual decision making in rodents, monkeys, and humans. Neuron 93(1):15–31. Available at: http://dx.doi.org/10.1016/j.neuron.2016.12.003.
Harvey, N. (1997) Confidence in judgment. Trends in Cognitive Sciences 1(2):78–82. Available at: http://www.ncbi.nlm.nih.gov/pubmed/21223868.
Hassan, O. & Hammett, S. T. (2015) Perceptual biases are inconsistent with Bayesian encoding of speed in the human visual system. Journal of Vision 15(2):9. Available at: http://jov.arvojournals.org/article.aspx?articleid=2213273.
Hawkins, G. E., Forstmann, B. U., Wagenmakers, E.-J., Ratcliff, R. & Brown, S. D. (2015) Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. Journal of Neuroscience 35(6):2476–84. Available at: http://www.jneurosci.org/content/35/6/2476.full.
Healy, A. F. & Kubovy, M. (1981) Probability matching and the formation of conservative decision rules in a numerical analog of signal detection. Journal of Experimental Psychology: Human Learning and Memory 7(5):344–54.
Heitz, R. P. (2014) The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience 8:150. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4052662&tool=pmcentrez&rendertype=abstract.
Helmholtz, H. L. F. (1856) Treatise on physiological optics. Thoemmes Continuum.
Henriques, J. B., Glowacki, J. M. & Davidson, R. J. (1994) Reward fails to alter response bias in depression. Journal of Abnormal Psychology 103(3):460–66. Available at: http://www.ncbi.nlm.nih.gov/pubmed/7930045.
Hillis, J. M., Ernst, M. O., Banks, M. S. & Landy, M. S. (2002) Combining sensory information: Mandatory fusion within, but not between, senses. Science 298(5598):1627–30. Available at: http://www.sciencemag.org/cgi/doi/10.1126/science.1075396.
Hohwy, J., Roepstorff, A. & Friston, K. (2008) Predictive coding explains binocular rivalry: An epistemological review. Cognition 108(3):687–701. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=18649876&retmode=ref&cmd=prlinks.
Holmes, P. & Cohen, J. D. (2014) Optimality and some of its discontents: Successes and shortcomings of existing models for binary decisions. Topics in Cognitive Science 6(2):258–78. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24648411.
Jack, C. E. & Thurlow, W. R. (1973) Effects of degree of visual association and angle of displacement on the “ventriloquism” effect. Perceptual and Motor Skills 37(3):967–79. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=4764534&retmode=ref&cmd=prlinks.
Jacobs, R. A. (1999) Optimal integration of texture and motion cues to depth. Vision Research 39(21):3621–29. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=10746132&retmode=ref&cmd=prlinks.
Jastrow, J. (1892) Studies from the University of Wisconsin: On the judgment of angles and positions of lines. American Journal of Psychology 5(2):214–48. Available at: http://www.jstor.org/stable/1410867?origin=crossref.
Jaynes, E. (1957/2003) Probability theory: The logic of science. (Original lectures published 1957). Cambridge University Press. Available at: http://www.med.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/JaynesProbabilityTheory.pdf.
Jazayeri, M. & Movshon, J. A. (2007) A new perceptual illusion reveals mechanisms of sensory decoding. Nature 446(7138):912–15. Available at: http://www.nature.com/doifinder/10.1038/nature05739.
Jesteadt, W. (1974) Intensity and frequency discrimination in one- and two-interval paradigms. Journal of the Acoustical Society of America 55(6):1266–76. Available at: http://scitation.aip.org/content/asa/journal/jasa/55/6/10.1121/1.1914696.
Jolij, J. & Lamme, V. A. F. (2005) Repression of unconscious information by conscious processing: Evidence from affective blindsight induced by transcranial magnetic stimulation. Proceedings of the National Academy of Sciences of the United States of America 102(30):10747–51. Available at: http://www.pnas.org/content/102/30/10747.abstract.
Jones, M. & Love, B. C. (2011) Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences 34(4):169–88. Available at: http://www.journals.cambridge.org/abstract_S0140525X10003134.
Juslin, P., Nilsson, H. & Winman, A. (2009) Probability theory, not the very guide of life. Psychological Review 116(4):856–74. Available at: http://psycnet.apa.org/record/2009-18254-007.
Kahneman, D. & Tversky, A. (1979) Prospect theory: An analysis of decision under risk. Econometrica 47(2):263–92. Available at: http://www.jstor.org/stable/1914185?origin=crossref.
Kalenscher, T., Tobler, P. N., Huijbers, W., Daselaar, S. M. & Pennartz, C. (2010) Neural signatures of intransitive preferences. Frontiers in Human Neuroscience 4:49. Available at: http://journal.frontiersin.org/article/10.3389/fnhum.2010.00049/abstract.
Kaneko, Y. & Sakai, K. (2015) Dissociation in decision bias mechanism between probabilistic information and previous decision. Frontiers in Human Neuroscience 9:261. Available at: http://journal.frontiersin.org/article/10.3389/fnhum.2015.00261/abstract.
Keren, G. (1988) On the ability of monitoring non-veridical perceptions and uncertain knowledge: Some calibration studies. Acta Psychologica 67(2):95–119. Available at: http://www.sciencedirect.com/science/article/pii/0001691888900078.
Kiani, R., Corthell, L. & Shadlen, M. N. (2014) Choice certainty is informed by both evidence and decision time. Neuron 84(6):1329–42. Available at: http://www.sciencedirect.com/science/article/pii/S0896627314010964.
Kiani, R. & Shadlen, M. N. (2009) Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324(5928):759–64. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2738936&tool=pmcentrez&rendertype=abstract.
Kinchla, R. A. & Smyzer, F. (1967) A diffusion model of perceptual memory. Perception & Psychophysics 2(6):219–29. Available at: http://www.springerlink.com/index/10.3758/BF03212471.
Knill, D. C. & Pouget, A. (2004) The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences 27(12):712–19. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15541511.
Knill, D. C. & Saunders, J. A. (2003) Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Research 43(24):2539–58.
Koizumi, A., Maniscalco, B. & Lau, H. (2015) Does perceptual confidence facilitate cognitive control? Attention, Perception & Psychophysics 77(4):1295–306. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25737256.Google Scholar
Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B. & Shams, L. (2007) Causal inference in multisensory perception. PLoS ONE 2(9):e943. Available at: http://dx.plos.org/10.1371/journal.pone.0000943.Google Scholar
Körding, K. P. & Wolpert, D. M. (2004) Bayesian integration in sensorimotor learning. Nature 427(6971):244–47. Available at: http://dx.doi.org/10.1038/nature02169 .Google Scholar
Körding, K. P. & Wolpert, D. M. (2006) Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences 10(7):319–26. Available at: https://doi.org/10.1016/j.tics.2006.05.003.Google Scholar
Koriat, A. (2011) Subjective confidence in perceptual judgments: A test of the self-consistency model. Journal of Experimental Psychology: General 140(1):117–39. Available at: http://www.ncbi.nlm.nih.gov/pubmed/2129932.Google Scholar
Landy, M. S., Banks, M. S. & Knill, D. C. (2011) Ideal-observer models of cue integration. In: Sensory cue integration, ed. Trommershäuser, J., Körding, K. P. & Landy, M. S., pp. 529. Oxford University Press.Google Scholar
Landy, M. S., Goutcher, R., Trommershäuser, J. & Mamassian, P. (2007) Visual estimation under risk. Journal of Vision 7(6):4. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2638507&tool=pmcentrez&rendertype=abstract.Google Scholar
Landy, M. S. & Kojima, H. (2001) Ideal cue combination for localizing texture-defined edges. Journal of the Optical Society of America A, Optics and Image Science 18(9):2307–20. Available at: http://www.cns.nyu.edu/~msl/papers/landykojima01.pdf.Google Scholar
Landy, M. S., Maloney, L., Johnston, E. B. & Young, M. (1995) Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research 35(3):389412.Google Scholar
Langer, M. S. & Bülthoff, H. H. (2001) A prior for global convexity in local shape-from-shading. Perception 30(4):403–10. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=11383189&retmode=ref&cmd=prlinks.Google Scholar
Lau, H. & Passingham, R. E. (2006) Relative blindsight in normal observers and the neural correlate of visual consciousness. Proceedings of the National Academy of Sciences of the United States of America 103(49):18763–68. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1693736&tool=pmcentrez&rendertype=abstract.Google Scholar
Lau, H. & Rosenthal, D. (2011) Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences 15(8):365–73. Available at: http://www.ncbi.nlm.nih.gov/pubmed/21737339.Google Scholar
Lennie, P. (2003) The cost of cortical computation. Current Biology 13(6):493–97. Available at: https://www.sciencedirect.com/science/article/pii/S0960982203001350.Google Scholar
Leshowitz, B. (1969) Comparison of ROC curves from one- and two-interval rating-scale procedures. Journal of the Acoustical Society of America 46(2B):399402. Available at: http://scitation.aip.org/content/asa/journal/jasa/46/2B/10.1121/1.1911703.Google Scholar
Liberman, A., Fischer, J. & Whitney, D. (2014) Serial dependence in the perception of faces. Current Biology 24(21):2569–74. doi:10.1016/j.cub.2014.09.025.Google Scholar
Ling, S. & Carrasco, M. (2006) When sustained attention impairs perception. Nature Neuroscience 9(10):1243–45. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16964254&retmode=ref&cmd=prlinks.Google Scholar
Liu, T., Abrams, J. & Carrasco, M. (2009) Voluntary attention enhances contrast appearance. Psychological Science 20(3):354–62. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=19254239&retmode=ref&cmd=prlinks.Google Scholar
Lupyan, G. (2012) Linguistically modulated perception and cognition: The label-feedback hypothesis. Frontiers in Psychology 3:54. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=22408629&retmode=ref&cmd=prlinks.Google Scholar
Lupyan, G. (2017) The paradox of the universal triangle: Concepts, language, and prototypes. Quarterly Journal of Experimental Psychology 70(3):389–412. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26731302&retmode=ref&cmd=prlinks.Google Scholar
Luu, L. & Stocker, A. A. (2016) Choice-induced biases in perception. bioRxiv 043224. Available at: http://biorxiv.org/content/early/2016/04/01/043224.abstract.Google Scholar
Ma, W. J. (2010) Signal detection theory, uncertainty, and Poisson-like population codes. Vision Research 50(22):2308–19. Available at: http://www.sciencedirect.com/science/article/pii/S004269891000430X.Google Scholar
Ma, W. J., Beck, J. M., Latham, P. E. & Pouget, A. (2006) Bayesian inference with probabilistic population codes. Nature Neuroscience 9(11):1432–38. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17057707.Google Scholar
Macmillan, N. A. & Creelman, C. D. (2005) Detection theory: A user's guide. 2nd edition. Erlbaum.Google Scholar
Maddox, W. T. (1995) Base-rate effects in multidimensional perceptual categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition 21(2):288–301. Available at: http://www.ncbi.nlm.nih.gov/pubmed/7738501.Google Scholar
Maddox, W. T. (2002) Toward a unified theory of decision criterion learning in perceptual categorization. Journal of the Experimental Analysis of Behavior 78(3):567–95. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1284916/.Google Scholar
Maddox, W. T. & Bohil, C. J. (1998a) Base-rate and payoff effects in multidimensional perceptual categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition 24(6):1459–82.Google Scholar
Maddox, W. T. & Bohil, C. J. (1998b) Overestimation of base-rate differences in complex perceptual categories. Perception & Psychophysics 60(4):575–92.Google Scholar
Maddox, W. T. & Bohil, C. J. (2000) Costs and benefits in perceptual categorization. Memory & Cognition 28(4):597–615.Google Scholar
Maddox, W. T. & Bohil, C. J. (2001) Feedback effects on cost-benefit learning in perceptual categorization. Memory & Cognition 29(4):598–615.Google Scholar
Maddox, W. T. & Bohil, C. J. (2003) A theoretical framework for understanding the effects of simultaneous base-rate and payoff manipulations on decision criterion learning in perceptual categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition 29(2):307–20.Google Scholar
Maddox, W. T. & Bohil, C. J. (2004) Probability matching, accuracy maximization, and a test of the optimal classifier's independence assumption in perceptual categorization. Perception & Psychophysics 66(1):104–18.Google Scholar
Maddox, W. T. & Bohil, C. J. (2005) Optimal classifier feedback improves cost-benefit but not base-rate decision criterion learning in perceptual categorization. Memory & Cognition 33(2):303–19.Google Scholar
Maddox, W. T., Bohil, C. J. & Dodd, J. L. (2003) Linear transformations of the payoff matrix and decision criterion learning in perceptual categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition 29(6):1174–93.Google Scholar
Maddox, W. T. & Dodd, J. L. (2001) On the relation between base-rate and cost-benefit learning in simulated medical diagnosis. Journal of Experimental Psychology: Learning, Memory, and Cognition 27(6):1367–84. Available at: http://www.ncbi.nlm.nih.gov/pubmed/11713873.Google Scholar
Maiworm, M. & Röder, B. (2011) Suboptimal auditory dominance in audiovisual integration of temporal cues. Tsinghua Science & Technology 16(2):121–32.Google Scholar
Maloney, L. T. & Landy, M. S. (1989) A statistical framework for robust fusion of depth information. In: Proceedings of Society of Photo-Optical Instrumentation Engineers (SPIE) 1119, Visual Communications and Image Processing IV, ed. Pearlman, W. A., pp. 1154–63. SPIE. Available at: http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1262206.Google Scholar
Maloney, L. T. & Mamassian, P. (2009) Bayesian decision theory as a model of human visual perception: Testing Bayesian transfer. Visual Neuroscience 26(1):147–55. Available at: https://doi.org/10.1017/S0952523808080905.Google Scholar
Maloney, L. T. & Zhang, H. (2010) Decision-theoretic models of visual perception and action. Vision Research 50(23):2362–74. Available at: http://www.ncbi.nlm.nih.gov/pubmed/20932856.Google Scholar
Maniscalco, B. & Lau, H. (2010) Comparing signal detection models of perceptual decision confidence. Journal of Vision 10(7):213. Available at: http://jov.arvojournals.org/article.aspx?articleid=2138292.Google Scholar
Maniscalco, B. & Lau, H. (2012) A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition 21(1):422–30. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22071269.Google Scholar
Maniscalco, B. & Lau, H. (2015) Manipulation of working memory contents selectively impairs metacognitive sensitivity in a concurrent visual discrimination task. Neuroscience of Consciousness 2015(1):niv002. Available at: http://nc.oxfordjournals.org/content/2015/1/niv002.abstract.Google Scholar
Maniscalco, B. & Lau, H. (2016) The signal processing architecture underlying subjective reports of sensory awareness. Neuroscience of Consciousness 2016(1):niw002.Google Scholar
Maniscalco, B., Peters, M. A. K. & Lau, H. (2016) Heuristic use of perceptual evidence leads to dissociation between performance and metacognitive sensitivity. Attention, Perception & Psychophysics 78(3):923–37. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26791233.Google Scholar
Marcus, G. F. & Davis, E. (2013) How robust are probabilistic models of higher-level cognition? Psychological Science 24(12):2351–60. Available at: http://pss.sagepub.com/content/24/12/2351.abstract?ijkey=42fdf6a62d20a7c5e573d149a973e121f7ae2626&keytype2=tf_ipsecsha.Google Scholar
Marcus, G. F. & Davis, E. (2015) Still searching for principles: A response to Goodman et al. (2015). Psychological Science 26(4):542–44. Available at: http://pss.sagepub.com/lookup/doi/10.1177/0956797614568433.Google Scholar
Markman, A. B., Baldwin, G. C. & Maddox, W. T. (2005) The interaction of payoff structure and regulatory focus in classification. Psychological Science 16(11):852–55. Available at: http://www.ncbi.nlm.nih.gov/pubmed/16262768.Google Scholar
Markowitz, J. & Swets, J. A. (1967) Factors affecting the slope of empirical ROC curves: Comparison of binary and rating responses. Perception & Psychophysics 2(3):91–100. Available at: http://www.springerlink.com/index/10.3758/BF03210301.Google Scholar
Massoni, S. (2014) Emotion as a boost to metacognition: How worry enhances the quality of confidence. Consciousness and Cognition 29:189–98. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25286128.Google Scholar
Massoni, S., Gajdos, T. & Vergnaud, J.-C. (2014) Confidence measurement in the light of signal detection theory. Frontiers in Psychology 5:1455. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25566135.Google Scholar
McCurdy, L. Y., Maniscalco, B., Metcalfe, J., Liu, K. Y., de Lange, F. P. & Lau, H. (2013) Anatomical coupling between distinct metacognitive systems for memory and visual perception. Journal of Neuroscience 33(5):1897–906. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23365229.Google Scholar
Metcalfe, J. & Shimamura, A. P. (1994) Metacognition: Knowing about knowing. MIT Press.Google Scholar
Michael, E., de Gardelle, V., Nevado-Holgado, A. & Summerfield, C. (2015) Unreliable evidence: 2 Sources of uncertainty during perceptual choice. Cerebral Cortex 25(4):937–47. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24122138.Google Scholar
Michael, E., de Gardelle, V. & Summerfield, C. (2014) Priming by the variability of visual information. Proceedings of the National Academy of Sciences of the United States of America 111(21):7873–78. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24821803.Google Scholar
Morales, J., Solovey, G., Maniscalco, B., Rahnev, D., de Lange, F. P. & Lau, H. (2015) Low attention impairs optimal incorporation of prior knowledge in perceptual decisions. Attention, Perception & Psychophysics 77(6):2021–36. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25836765.Google Scholar
Mozer, M. C., Pashler, H. & Homaei, H. (2008) Optimal predictions in everyday cognition: The wisdom of individuals or crowds? Cognitive Science 32(7):1133–47.Google Scholar
Mueller, S. T. & Weidemann, C. T. (2008) Decision noise: An explanation for observed violations of signal detection theory. Psychonomic Bulletin & Review 15(3):465–94. Available at: http://www.springerlink.com/index/10.3758/PBR.15.3.465.Google Scholar
Nardini, M., Bedford, R. & Mareschal, D. (2010) Fusion of visual cues is not mandatory in children. Proceedings of the National Academy of Sciences of the United States of America 107(39):17041–46. Available at: https://doi.org/10.1073/pnas.1001699107.Google Scholar
Nardini, M., Jones, P., Bedford, R. & Braddick, O. (2008) Development of cue integration in human navigation. Current Biology 18(9):689–93. Available at: https://doi.org/10.1016/j.cub.2008.04.021.Google Scholar
Navajas, J., Hindocha, C., Foda, H., Keramati, M., Latham, P. E. & Bahrami, B. (2017) The idiosyncratic nature of confidence. Nature Human Behaviour 1(11):810–18. Available at: http://www.nature.com/articles/s41562-017-0215-1.Google Scholar
Navajas, J., Sigman, M. & Kamienkowski, J. E. (2014) Dynamics of visibility, confidence, and choice during eye movements. Journal of Experimental Psychology: Human Perception and Performance 40(3):1213–27. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24730743.Google Scholar
Norton, E. H., Fleming, S. M., Daw, N. D. & Landy, M. S. (2017) Suboptimal criterion learning in static and dynamic environments. PLoS Computational Biology 13(1):e1005304.Google Scholar
Odegaard, B., Wozny, D. R. & Shams, L. (2015) Biases in visual, auditory, and audiovisual perception of space. PLoS Computational Biology 11(12):e1004649. Available at: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004649.Google Scholar
Olzak, L. A. (1985) Interactions between spatially tuned mechanisms: Converging evidence. Journal of the Optical Society of America A, Optics and Image Science 2(9):1551–59.Google Scholar
Oruç, I., Maloney, L. T. & Landy, M. S. (2003) Weighted linear cue combination with possibly correlated error. Vision Research 43(23):2451–68. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=12972395&retmode=ref&cmd=prlinks.Google Scholar
Osgood, C. E. (1953) Method and theory in experimental psychology. Oxford University Press.Google Scholar
Oud, B., Krajbich, I., Miller, K., Cheong, J. H., Botvinick, M. & Fehr, E. (2016) Irrational time allocation in decision-making. Proceedings of the Royal Society B: Biological Sciences 283(1822):20151439. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26763695.Google Scholar
Peters, M. A. K., Ma, W. J. & Shams, L. (2016) The size-weight illusion is not anti-Bayesian after all: A unifying Bayesian account. PeerJ 4:e2124. Available at: http://www.ncbi.nlm.nih.gov/pubmed/27350899.Google Scholar
Petrini, K., Remark, A., Smith, L. & Nardini, M. (2014) When vision is not an option: Children's integration of auditory and haptic information is suboptimal. Developmental Science 17(3):376–87. Available at: http://onlinelibrary.wiley.com/doi/10.1111/desc.12127/full.Google Scholar
Petzschner, F. H. & Glasauer, S. (2011) Iterative Bayesian estimation as an explanation for range and regression effects: A study on human path integration. Journal of Neuroscience 31(47):17220–29. Available at: http://www.jneurosci.org/content/31/47/17220.Google Scholar
Plaisier, M. A., van Dam, L. C. J., Glowania, C. & Ernst, M. O. (2014) Exploration mode affects visuohaptic integration of surface orientation. Journal of Vision 14(13):22. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25413627.Google Scholar
Pleskac, T. J. & Busemeyer, J. R. (2010) Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychological Review 117(3):864–901. Available at: http://www.ncbi.nlm.nih.gov/pubmed/20658856.Google Scholar
Prsa, M., Gale, S. & Blanke, O. (2012) Self-motion leads to mandatory cue fusion across sensory modalities. Journal of Neurophysiology 108(8):2282–91. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=22832567&retmode=ref&cmd=prlinks.Google Scholar
Pynn, C. T. (1972) Intensity perception. III. Resolution in small-range identification. Journal of the Acoustical Society of America 51(2B):559–66. Available at: http://scitation.aip.org/content/asa/journal/jasa/51/2B/10.1121/1.1912878.Google Scholar
Rahnev, D., Bahdo, L., de Lange, F. P. & Lau, H. (2012a) Prestimulus hemodynamic activity in dorsal attention network is negatively associated with decision confidence in visual perception. Journal of Neurophysiology 108(5):1529–36. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22723670.Google Scholar
Rahnev, D., Koizumi, A., McCurdy, L. Y., D'Esposito, M. & Lau, H. (2015) Confidence leak in perceptual decision making. Psychological Science 26(11):1664–80. Available at: http://pss.sagepub.com/lookup/doi/10.1177/0956797615595037.Google Scholar
Rahnev, D., Kok, P., Munneke, M., Bahdo, L., de Lange, F. P. & Lau, H. (2013) Continuous theta burst transcranial magnetic stimulation reduces resting state connectivity between visual areas. Journal of Neurophysiology 110(8):1811–21. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23883858.Google Scholar
Rahnev, D., Lau, H. & de Lange, F. P. (2011a) Prior expectation modulates the interaction between sensory and prefrontal regions in the human brain. Journal of Neuroscience 31(29):10741–48.Google Scholar
Rahnev, D., Maniscalco, B., Graves, T., Huang, E., de Lange, F. P. & Lau, H. (2011b) Attention induces conservative subjective biases in visual perception. Nature Neuroscience 14(12):1513–15. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22019729.Google Scholar
Rahnev, D., Maniscalco, B., Luber, B., Lau, H. & Lisanby, S. H. (2012b) Direct injection of noise to the visual cortex decreases accuracy but increases decision confidence. Journal of Neurophysiology 107(6):1556–63. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22170965.Google Scholar
Rahnev, D., Nee, D. E., Riddle, J., Larson, A. S. & D'Esposito, M. (2016) Causal evidence for frontal cortex organization for perceptual decision making. Proceedings of the National Academy of Sciences of the United States of America 113(20):6059–64. Available at: http://www.pnas.org/content/early/2016/05/04/1522551113.full?tab=metrics.Google Scholar
Ramachandran, V. (1990) Interactions between motion, depth, color and form: The utilitarian theory of perception. In: Vision: Coding and efficiency, ed. Blakemore, C., pp. 346–60. Cambridge University Press.Google Scholar
Ratcliff, R. & Starns, J. J. (2009) Modeling confidence and response time in recognition memory. Psychological Review 116(1):59–83. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2693899&tool=pmcentrez&rendertype=abstract.Google Scholar
Rauber, H. J. & Treue, S. (1998) Reference repulsion when judging the direction of visual motion. Perception 27(4):393–402. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=9797918&retmode=ref&cmd=prlinks.Google Scholar
Raviv, O., Ahissar, M. & Loewenstein, Y. (2012) How recent history affects perception: The normative approach and its heuristic approximation. PLoS Computational Biology 8(10):e1002731. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=23133343&retmode=ref&cmd=prlinks.Google Scholar
Reckless, G. E., Bolstad, I., Nakstad, P. H., Andreassen, O. A. & Jensen, J. (2013) Motivation alters response bias and neural activation patterns in a perceptual decision-making task. Neuroscience 238:135–47. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23428623.Google Scholar
Reckless, G. E., Ousdal, O. T., Server, A., Walter, H., Andreassen, O. A. & Jensen, J. (2014) The left inferior frontal gyrus is involved in adjusting response bias during a perceptual decision-making task. Brain and Behavior 4(3):398–407. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4055190&tool=pmcentrez&rendertype=abstract.Google Scholar
Regenwetter, M., Cavagnaro, D. R., Popova, A., Guo, Y., Zwilling, C., Lim, S. H. & Stevens, J. R. (2017) Heterogeneity and parsimony in intertemporal choice. Decision 5(2):63–94. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/dec0000069.Google Scholar
Regenwetter, M., Dana, J. & Davis-Stober, C. P. (2010) Testing transitivity of preferences on two-alternative forced choice data. Frontiers in Psychology 1:148. Available at: http://journal.frontiersin.org/article/10.3389/fpsyg.2010.00148/abstract.Google Scholar
Regenwetter, M., Dana, J., Davis-Stober, C. P. & Guo, Y. (2011) Parsimonious testing of transitive or intransitive preferences: Reply to Birnbaum (2011). Psychological Review 118(4):684–88. Available at: http://doi.apa.org/getdoi.cfm?doi=10.1037/a0025291.Google Scholar
Renart, A. & Machens, C. K. (2014) Variability in neural activity and behavior. Current Opinion in Neurobiology 25:211–20. Available at: http://dx.doi.org/10.1016/j.conb.2014.02.013.Google Scholar
Roach, N. W., Heron, J. & McGraw, P. V. (2006) Resolving multisensory conflict: A strategy for balancing the costs and benefits of audio-visual integration. Proceedings of the Royal Society B: Biological Sciences 273(1598):2159–68. doi:10.1098/rspb.2006.3578.Google Scholar
Rosas, P., Wagemans, J., Ernst, M. O. & Wichmann, F. A. (2005) Texture and haptic cues in slant discrimination: Reliability-based cue weighting without statistically optimal cue combination. Journal of the Optical Society of America A, Optics and Image Science 22(5):801–809.Google Scholar
Rosas, P. & Wichmann, F. A. (2011) Cue combination: Beyond optimality. In: Sensory cue integration, ed. Trommershäuser, J., Körding, K. P. & Landy, M. S., pp. 144–52. Oxford University Press.Google Scholar
Rosas, P., Wichmann, F. A. & Wagemans, J. (2007) Texture and object motion in slant discrimination: Failure of reliability-based weighting of cues may be evidence for strong fusion. Journal of Vision 7(6):3. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17685786&retmode=ref&cmd=prlinks.Google Scholar
Saarela, T. P. & Landy, M. S. (2015) Integration trumps selection in object recognition. Current Biology 25(7):920–27. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25802154.Google Scholar
Sabra, A. I. (1989) The optics of Ibn Al-Haytham, Books I–III: On direct vision. Warburg Institute.Google Scholar
Samaha, J., Barrett, J. J., Sheldon, A. D., LaRocque, J. J. & Postle, B. R. (2016) Dissociating perceptual confidence from discrimination accuracy reveals no influence of metacognitive awareness on working memory. Frontiers in Psychology 7:851. Available at: http://journal.frontiersin.org/Article/10.3389/fpsyg.2016.00851/abstract.Google Scholar
Sanders, J. I., Hangya, B. & Kepecs, A. (2016) Signatures of a statistical computation in the human sense of confidence. Neuron 90(3):499–506. Available at: http://www.cell.com/article/S0896627316300162/fulltext.Google Scholar
Schulman, A. I. & Mitchell, R. R. (1966) Operating characteristics from yes-no and forced-choice procedures. Journal of the Acoustical Society of America 40(2):473–77. Available at: http://www.ncbi.nlm.nih.gov/pubmed/5911357.Google Scholar
Schurger, A., Kim, M.-S. & Cohen, J. D. (2015) Paradoxical interaction between ocular activity, perception, and decision confidence at the threshold of vision. PLoS ONE 10(5):e0125278. Available at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0125278.Google Scholar
Schwiedrzik, C. M., Ruff, C. C., Lazar, A., Leitner, F. C., Singer, W. & Melloni, L. (2014) Untangling perceptual memory: Hysteresis and adaptation map into separate cortical networks. Cerebral Cortex 24(5):1152–64. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=23236204&retmode=ref&cmd=prlinks.Google Scholar
See, J. E., Warm, J. S., Dember, W. N. & Howe, S. R. (1997) Vigilance and signal detection theory: An empirical evaluation of five measures of response bias. Human Factors 39(1):14–29. Available at: http://hfs.sagepub.com/cgi/doi/10.1518/001872097778940704.Google Scholar
Seriès, P. & Seitz, A. R. (2013) Learning what to expect (in visual perception). Frontiers in Human Neuroscience 7:668. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=24187536&retmode=ref&cmd=prlinks.Google Scholar
Shen, S. & Ma, W. J. (2016) A detailed comparison of optimality and simplicity in perceptual decision making. Psychological Review 123(4):452–80. Available at: http://www.ncbi.nlm.nih.gov/pubmed/27177259.Google Scholar
Sherman, M. T., Seth, A. K., Barrett, A. B. & Kanai, R. (2015) Prior expectations facilitate metacognition for perceptual decision. Consciousness and Cognition 35:53–65. Available at: http://www.sciencedirect.com/science/article/pii/S1053810015000926.Google Scholar
Simen, P., Contreras, D., Buck, C., Hu, P., Holmes, P. & Cohen, J. D. (2009) Reward-rate optimization in two-alternative decision making: Empirical tests of theoretical predictions. Journal of Experimental Psychology: Human Perception and Performance 35:1865–97. Available at: http://dx.doi.org/10.1037/a0016926.Google Scholar
Simon, H. A. (1956) Rational choice and the structure of the environment. Psychological Review 63(2):129–38.Google Scholar
Simon, H. A. (1957) A behavioral model of rational choice. In: Models of man, social and rational: Mathematical essays on rational human behavior in a social setting, pp. 99–118. Wiley.Google Scholar
Snyder, J. S., Schwiedrzik, C. M., Vitela, A. D. & Melloni, L. (2015) How previous experience shapes perception in different sensory modalities. Frontiers in Human Neuroscience 9:594. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26582982&retmode=ref&cmd=prlinks.Google Scholar
Solovey, G., Graney, G. G. & Lau, H. (2015) A decisional account of subjective inflation of visual perception at the periphery. Attention, Perception & Psychophysics 77(1):258–71. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25248620.Google Scholar
Song, A., Koizumi, A. & Lau, H. (2015) A behavioral method to manipulate metacognitive awareness independent of stimulus awareness. In: Behavioral methods in consciousness research, ed. Overgaard, M., pp. 7785. Oxford University Press.Google Scholar
Song, C., Kanai, R., Fleming, S. M., Weil, R. S., Schwarzkopf, D. S. & Rees, G. (2011) Relating inter-individual differences in metacognitive performance on different perceptual tasks. Consciousness and Cognition 20(4):1787–92. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3203218&tool=pmcentrez&rendertype=abstract.Google Scholar
Spence, M. L., Dux, P. E. & Arnold, D. H. (2016) Computations underlying confidence in visual perception. Journal of Experimental Psychology: Human Perception and Performance 42(5):671–82. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26594876.Google Scholar
Starns, J. J. & Ratcliff, R. (2010) The effects of aging on the speed–accuracy compromise: Boundary optimality in the diffusion model. Psychology and Aging 25(2):377–90. Available at: http://dx.doi.org/10.1037/a0018022.Google Scholar
Starns, J. J. & Ratcliff, R. (2012) Age-related differences in diffusion model boundary optimality with both trial-limited and time-limited tasks. Psychonomic Bulletin & Review 19(1):139–45. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22144142.Google Scholar
Stocker, A. A. & Simoncelli, E. P. (2006a) Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience 9(4):578–85. Available at: http://dx.doi.org/10.1038/nn1669.Google Scholar
Stocker, A. A. & Simoncelli, E. P. (2006b) Sensory adaptation within a Bayesian framework for perception. In: Advances in neural information processing systems 18 (proceedings from the conference, Neural Information Processing Systems 2005), ed. Weiss, Y. & Schölkopf, B. & Platt, J. C.. Available at: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-18-2005.Google Scholar
Stocker, A. A. & Simoncelli, E. P. (2008) A Bayesian model of conditioned perception. In: Advances in neural information processing systems 20 (proceedings from the conference, Neural Information Processing Systems 2007), ed. Platt, J. C., Koller, D., Singer, Y. & Roweis, S.. Available at: https://papers.nips.cc/paper/3369-a-bayesian-model-of-conditioned-perception.Google Scholar
Stone, L. S. & Thompson, P. (1992) Human speed perception is contrast dependent. Vision Research 32(8):1535–49. Available at: http://www.ncbi.nlm.nih.gov/pubmed/1455726.Google Scholar
Störmer, V. S., Mcdonald, J. J. & Hillyard, S. A. (2009) Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. Proceedings of the National Academy of Sciences of the United States of America 106(52):22456–61. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=20007778&retmode=ref&cmd=prlinks.Google Scholar
Summerfield, C. & Koechlin, E. (2010) Economic value biases uncertain perceptual choices in the parietal and prefrontal cortices. Frontiers in Human Neuroscience 4:208. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3024559&tool=pmcentrez&rendertype=abstract.Google Scholar
Summerfield, C. & Tsetsos, K. (2015) Do humans make good decisions? Trends in Cognitive Sciences 19(1):27–34.Google Scholar
Sun, J. & Perona, P. (1997) Shading and stereo in early perception of shape and reflectance. Perception 26(4):519–29. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=9404497&retmode=ref&cmd=prlinks.Google Scholar
Swets, J. A. & Green, D. M. (1961) Sequential observations by human observers of signals in noise. In: Information theory: Proceedings of the fourth London symposium, ed. Cherry, C., pp. 177–95. Butterworth.Google Scholar
Swets, J. A., Tanner, W. P. & Birdsall, T. G. (1961) Decision processes in perception. Psychological Review 68(5):301–40. Available at: http://www.ncbi.nlm.nih.gov/pubmed/13774292.Google Scholar
Tanner, T. A., Haller, R. W. & Atkinson, R. C. (1967) Signal recognition as influenced by presentation schedules. Perception & Psychophysics 2(8):349–58. Available at: http://www.springerlink.com/index/10.3758/BF03210070.Google Scholar
Tanner, W. P. (1956) Theory of recognition. Journal of the Acoustical Society of America 28:882–88.Google Scholar
Tanner, W. P. (1961) Physiological implications of psychophysical data. Annals of the New York Academy of Sciences 89:752–65. Available at: http://www.ncbi.nlm.nih.gov/pubmed/13775211.Google Scholar
Tauber, S., Navarro, D. J., Perfors, A. & Steyvers, M. (2017) Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory. Psychological Review 124(4):410–41.Google Scholar
Taylor, S. F., Welsh, R. C., Wagner, T. D., Phan, K. L., Fitzgerald, K. D. & Gehring, W. J. (2004) A functional neuroimaging study of motivation and executive function. NeuroImage 21(3):1045–54. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15006672.Google Scholar
Tenenbaum, J. B. & Griffiths, T. L. (2006) Optimal predictions in everyday cognition. Psychological Science 17(9):767–73.Google Scholar
Tenenbaum, J. B., Kemp, C., Griffiths, T. L. & Goodman, N. D. (2011) How to grow a mind: Statistics, structure, and abstraction. Science 331(6022):1279–85. Available at: http://www.ncbi.nlm.nih.gov/pubmed/21393536.Google Scholar
Thompson, P. (1982) Perceived rate of movement depends on contrast. Vision Research 22(3):377–80. Available at: http://www.ncbi.nlm.nih.gov/pubmed/7090191.Google Scholar
Thompson, P., Brooks, K. & Hammett, S. T. (2006) Speed can go up as well as down at low contrast: Implications for models of motion perception. Vision Research 46(6–7):782–86. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16171842&retmode=ref&cmd=prlinks.Google Scholar
Thura, D., Beauregard-Racine, J., Fradet, C.-W. & Cisek, P. (2012) Decision making by urgency gating: Theory and experimental support. Journal of Neurophysiology 108(11):2912–30. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22993260.Google Scholar
Treisman, M. & Faulkner, A. (1984) The setting and maintenance of criteria representing levels of confidence. Journal of Experimental Psychology: Human Perception and Performance 10(1):119–39. Available at: http://discovery.ucl.ac.uk/20033/.Google Scholar
Trommershäuser, J. (2009) Biases and optimality of sensory-motor and cognitive decisions. Progress in Brain Research 174:267–78. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=19477345&retmode=ref&cmd=prlinks.Google Scholar
Trommershäuser, J., Körding, K. P. & Landy, M. S., eds. (2011) Sensory cue integration. Oxford University Press.Google Scholar
Tse, P. U. (2005) Voluntary attention modulates the brightness of overlapping transparent surfaces. Vision Research 45(9):1095–98. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=15707917&retmode=ref&cmd=prlinks.Google Scholar
Tsetsos, K., Moran, R., Moreland, J., Chater, N., Usher, M. & Summerfield, C. (2016a) Economic irrationality is optimal during noisy decision making. Proceedings of the National Academy of Sciences of the United States of America 113(11):3102–107. Available at: http://www.pnas.org/content/early/2016/02/24/1519157113.long.Google Scholar
Tsetsos, K., Pfeffer, T., Jentgens, P. & Donner, T. H. (2015) Action planning and the timescale of evidence accumulation. PLoS ONE 10(6):e0129473.Google Scholar
Tsotsos, J. K. (1993) The role of computational complexity in perceptual theory. Advances in Psychology 99:261–96.Google Scholar
Turatto, M., Vescovi, M. & Valsecchi, M. (2007) Attention makes moving objects be perceived to move faster. Vision Research 47(2):166–78. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17116314&retmode=ref&cmd=prlinks.Google Scholar
Turnbull, W. H. (1961) The correspondence of Isaac Newton. Vol. 3, 1688–1694. Cambridge University Press.Google Scholar
Ulehla, Z. J. (1966) Optimality of perceptual decision criteria. Journal of Experimental Psychology 71(4):564–69. Available at: http://www.ncbi.nlm.nih.gov/pubmed/5909083.Google Scholar
van Beers, R. J., Sittig, A. C. & Denier van der Gon, J. J. (1996) How humans combine simultaneous proprioceptive and visual position information. Experimental Brain Research 111(2):253–61.
van den Berg, R., Yoo, A. H. & Ma, W. J. (2017) Fechner's law in metacognition: A quantitative model of visual working memory confidence. Psychological Review 124(2):197–214.
Vandormael, H., Castañón, S. H., Balaguer, J., Li, V. & Summerfield, C. (2017) Robust sampling of decision information during perceptual choice. Proceedings of the National Academy of Sciences of the United States of America 114(10):2771–76. Available at: http://www.pnas.org/lookup/doi/10.1073/pnas.1613950114.
van Rooij, I. (2008) The tractable cognition thesis. Cognitive Science 32(6):939–84. Available at: http://doi.wiley.com/10.1080/03640210801897856.
van Wert, M. J., Horowitz, T. S. & Wolfe, J. M. (2009) Even in correctable search, some types of rare targets are frequently missed. Attention, Perception & Psychophysics 71(3):541–53. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2701252&tool=pmcentrez&rendertype=abstract.
Varey, C. A., Mellers, B. A. & Birnbaum, M. H. (1990) Judgments of proportions. Journal of Experimental Psychology: Human Perception and Performance 16(3):613–25. Available at: http://www.ncbi.nlm.nih.gov/pubmed/2144575.
Vaziri-Pashkam, M. & Cavanagh, P. (2008) Apparent speed increases at low luminance. Journal of Vision 8(16):9. Available at: http://www.ncbi.nlm.nih.gov/pubmed/19146275.
Vickers, D. (1979) Decision processes in visual perception. Academic Press.
Vickers, D. & Packer, J. (1982) Effects of alternating set for speed or accuracy on response time, accuracy and confidence in a unidimensional discrimination task. Acta Psychologica 50(2):179–97.
Viemeister, N. F. (1970) Intensity discrimination: Performance in three paradigms. Perception & Psychophysics 8(6):417–19. Available at: http://www.springerlink.com/index/10.3758/BF03207037.
Vincent, B. (2011) Covert visual search: Prior beliefs are optimally combined with sensory evidence. Journal of Vision 11(13):25.
Vintch, B. & Gardner, J. L. (2014) Cortical correlates of human motion perception biases. Journal of Neuroscience 34(7):2592–604. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=24523549&retmode=ref&cmd=prlinks.
Vlassova, A., Donkin, C. & Pearson, J. (2014) Unconscious information changes decision accuracy but not confidence. Proceedings of the National Academy of Sciences of the United States of America 111(45):16214–18. Available at: http://www.pnas.org/content/early/2014/10/24/1403619111.short.
von Winterfeldt, D. & Edwards, W. (1982) Costs and payoffs in perceptual research. Psychological Bulletin 91(3):609–22.
Vul, E., Goodman, N., Griffiths, T. L. & Tenenbaum, J. B. (2014) One and done? Optimal decisions from very few samples. Cognitive Science 38(4):599–637. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24467492.
Wainwright, M. J. (1999) Visual adaptation as optimal information transmission. Vision Research 39(23):3960–74. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=10748928&retmode=ref&cmd=prlinks.
Ward, L. M. & Lockhead, G. R. (1970) Sequential effects and memory in category judgments. Journal of Experimental Psychology 84(1):27–34. Available at: https://scholars.duke.edu/display/pub651252.
Wark, B., Lundstrom, B. N. & Fairhall, A. (2007) Sensory adaptation. Current Opinion in Neurobiology 17(4):423–29. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17714934&retmode=ref&cmd=prlinks.
Warren, D. H. & Cleaves, W. T. (1971) Visual-proprioceptive interaction under large amounts of conflict. Journal of Experimental Psychology 90(2):206–14. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=5134326&retmode=ref&cmd=prlinks.
Watson, C. S., Kellogg, S. C., Kawanishi, D. T. & Lucas, P. A. (1973) The uncertain response in detection-oriented psychophysics. Journal of Experimental Psychology 99(2):180–85.
Webster, M. A. (2015) Visual adaptation. Annual Review of Vision Science 1:547–67. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26858985&retmode=ref&cmd=prlinks.
Webster, M. A., Kaping, D., Mizokami, Y. & Duhamel, P. (2004) Adaptation to natural facial categories. Nature 428(6982):557–61. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=15058304&retmode=ref&cmd=prlinks.
Webster, M. A. & MacLeod, D. I. A. (2011) Visual adaptation and face perception. Philosophical Transactions of the Royal Society B: Biological Sciences 366(1571):1702–25. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=21536555&retmode=ref&cmd=prlinks.
Wei, K. & Körding, K. P. (2011) Causal inference in sensorimotor learning and control. In: Sensory cue integration, ed. Trommershäuser, J., Körding, K. & Landy, M. S., pp. 30–45. Oxford University Press.
Wei, X.-X. & Stocker, A. A. (2013) Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference. In: Advances in neural information processing systems 25 (proceedings from the conference, Neural Information Processing Systems 2012), ed. Pereira, F., Burges, C. J. C., Bottou, L. & Weinberger, K. Q. Available at: https://papers.nips.cc/paper/4489-efficient-coding-provides-a-direct-link-between-prior-and-likelihood-in-perceptual-bayesian-inference.
Wei, X.-X. & Stocker, A. A. (2015) A Bayesian observer model constrained by efficient coding can explain “anti-Bayesian” percepts. Nature Neuroscience 18:1509–17. Available at: http://dx.doi.org/10.1038/nn.4105.
Weil, L. G., Fleming, S. M., Dumontheil, I., Kilford, E. J., Weil, R. S., Rees, G., Dolan, R. J. & Blakemore, S.-J. (2013) The development of metacognitive ability in adolescence. Consciousness and Cognition 22(1):264–71. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3719211&tool=pmcentrez&rendertype=abstract.
Weiskrantz, L. (1996) Blindsight revisited. Current Opinion in Neurobiology 6(2):215–20. Available at: http://www.ncbi.nlm.nih.gov/pubmed/8725963.
Weiss, Y., Simoncelli, E. P. & Adelson, E. H. (2002) Motion illusions as optimal percepts. Nature Neuroscience 5(6):598–604. Available at: http://www.nature.com/neuro/journal/v5/n6/full/nn858.html.
Whiteley, L. & Sahani, M. (2008) Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes. Journal of Vision 8(3):2.1–15. Available at: http://www.journalofvision.org/content/8/3/2.
Whiteley, L. & Sahani, M. (2012) Attention in a Bayesian framework. Frontiers in Human Neuroscience 6:100. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=22712010&retmode=ref&cmd=prlinks.
Wilimzig, C., Tsuchiya, N., Fahle, M., Einhäuser, W. & Koch, C. (2008) Spatial attention increases performance but not subjective confidence in a discrimination task. Journal of Vision 8(5):1–10. Available at: http://www.ncbi.nlm.nih.gov/pubmed/18842078.
Winman, A. & Juslin, P. (1993) Calibration of sensory and cognitive judgments: Two different accounts. Scandinavian Journal of Psychology 34(2):135–48. Available at: http://doi.wiley.com/10.1111/j.1467-9450.1993.tb01109.x.
Witt, J. K. (2011) Action's effect on perception. Current Directions in Psychological Science 20(3):201–206. Available at: http://cdp.sagepub.com/content/20/3/201.short.
Witt, J. K., Proffitt, D. R. & Epstein, W. (2005) Tool use affects perceived distance, but only when you intend to use it. Journal of Experimental Psychology: Human Perception and Performance 31(5):880–88. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=16262485&retmode=ref&cmd=prlinks.
Wohlgemuth, A. (1911) On the after-effect of seen movement. Cambridge University Press. Available at: https://books.google.com/books?id=Z6AhAQAAIAAJ.
Wolfe, J. M., Brunelli, D. N., Rubinstein, J. & Horowitz, T. S. (2013) Prevalence effects in newly trained airport checkpoint screeners: Trained observers miss rare targets, too. Journal of Vision 13(3):33. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3848386&tool=pmcentrez&rendertype=abstract.
Wolfe, J. M., Horowitz, T. S. & Kenner, N. M. (2005) Cognitive psychology: Rare items often missed in visual searches. Nature 435(7041):439–40. Available at: http://dx.doi.org/10.1038/435439a.
Wolfe, J. M., Horowitz, T. S., Van Wert, M. J., Kenner, N. M., Place, S. S. & Kibbi, N. (2007) Low target prevalence is a stubborn source of errors in visual search tasks. Journal of Experimental Psychology: General 136(4):623–38. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2662480&tool=pmcentrez&rendertype=abstract.
Wolfe, J. M. & Van Wert, M. J. (2010) Varying target prevalence reveals two dissociable decision criteria in visual search. Current Biology 20(2):121–24. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2818748&tool=pmcentrez&rendertype=abstract.
Wyart, V. & Koechlin, E. (2016) Choice variability and suboptimality in uncertain environments. Current Opinion in Behavioral Sciences 11:109–15. Available at: http://dx.doi.org/10.1016/j.cobeha.2016.07.003.
Wyart, V., Myers, N. E. & Summerfield, C. (2015) Neural mechanisms of human perceptual choice under focused and divided attention. Journal of Neuroscience 35(8):3485–98. Available at: http://www.jneurosci.org/content/35/8/3485.abstract?etoc.
Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D. & DiCarlo, J. J. (2014) Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences of the United States of America 111(23):8619–24. Available at: http://www.ncbi.nlm.nih.gov/pubmed/24812127.
Yeshurun, Y. & Carrasco, M. (1998) Attention improves or impairs visual performance by enhancing spatial resolution. Nature 396(6706):72–75. Available at: http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=9817201&retmode=ref&cmd=prlinks.
Yeshurun, Y., Carrasco, M. & Maloney, L. T. (2008) Bias and sensitivity in two-interval forced choice procedures: Tests of the difference model. Vision Research 48(17):1837–51. Available at: http://www.sciencedirect.com/science/article/pii/S0042698908002599.
Yeung, N. & Summerfield, C. (2012) Metacognition in human decision-making: Confidence and error monitoring. Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences 367(1594):1310–21. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3318764&tool=pmcentrez&rendertype=abstract.
Yu, A. J. & Cohen, J. D. (2009) Sequential effects: Superstition or rational behavior? In: Advances in neural information processing systems 21 (proceedings from the conference, Neural Information Processing Systems 2008), ed. Koller, D., Schuurmans, D., Bengio, Y. & Bottou, L. Available at: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-21-2008.
Zacksenhouse, M., Bogacz, R. & Holmes, P. (2010) Robust versus optimal strategies for two-alternative forced choice tasks. Journal of Mathematical Psychology 54(2):230–46. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3505075&tool=pmcentrez&rendertype=abstract.
Zak, I., Katkov, M., Gorea, A. & Sagi, D. (2012) Decision criteria in dual discrimination tasks estimated using external-noise methods. Attention, Perception & Psychophysics 74(5):1042–55. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22351481.
Zamboni, E., Ledgeway, T., McGraw, P. V. & Schluppeck, D. (2016) Do perceptual biases emerge early or late in visual processing? Decision-biases in motion perception. Proceedings of the Royal Society B: Biological Sciences 283(1833):20160263. Available at: http://rspb.royalsocietypublishing.org/content/283/1833/20160263.
Zhang, H. & Maloney, L. T. (2012) Ubiquitous log odds: A common representation of probability and frequency distortion in perception, action, and cognition. Frontiers in Neuroscience 6:1. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3261445&tool=pmcentrez&rendertype=abstract.
Zhang, H., Morvan, C. & Maloney, L. T. (2010) Gambling in the visual periphery: A conjoint-measurement analysis of human ability to judge visual uncertainty. PLoS Computational Biology 6(12):e1001023. Available at: http://dx.plos.org/10.1371/journal.pcbi.1001023.
Zylberberg, A., Barttfeld, P. & Sigman, M. (2012) The construction of confidence in a perceptual decision. Frontiers in Integrative Neuroscience 6:79. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3448113&tool=pmcentrez&rendertype=abstract.
Zylberberg, A., Roelfsema, P. R. & Sigman, M. (2014) Variance misperception explains illusions of confidence in simple perceptual decisions. Consciousness and Cognition 27:246–53. Available at: http://www.sciencedirect.com/science/article/pii/S1053810014000865.

Figure 1. Graphical depiction of Bayesian inference. An observer is deciding between two possible stimuli – s1 (e.g., leftward motion) and s2 (e.g., rightward motion) – which produce Gaussian measurement distributions of internal responses. The observer's internal response varies from trial to trial, depicted by the three yellow circles for three example trials. On a given trial, the likelihood function is equal to the height of each of the two measurement densities at the value of the observed internal response (lines drawn from each yellow circle) – that is, the likelihood of each stimulus given an internal response. For illustrative purposes, a different experimenter-provided prior and cost function are assumed on each trial. The action ai corresponds to choosing stimulus si. We obtain the expected cost of each action by multiplying the likelihood, prior, and cost corresponding to each stimulus and then summing the costs associated with the two possible stimuli. The optimal decision rule is to choose the action with the lower cost (the bar with less negative values). In trial 1, the prior and cost function are unbiased, so the optimal decision depends only on the likelihood function. In trial 2, the prior is biased toward s2, making a2 the optimal choice even though s1 is slightly more likely. In trial 3, the cost function favors a1, but the much higher likelihood of s2 makes a2 the optimal choice.
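The expected-cost rule described in this caption can be sketched numerically. The sketch below is illustrative only; the Gaussian means, variances, priors, and cost values are assumed for the example and are not taken from the article (here a wrong response costs 1 and a correct response costs 0, so minimizing cost is equivalent to picking the less negative bar in the figure):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian measurement distribution at internal response x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_action(x, mus, sigmas, priors, costs):
    """Return the index of the action with the lowest expected cost.

    costs[i][j] is the cost of taking action a_i when the true stimulus is s_j.
    """
    likelihoods = [gaussian_pdf(x, mu, s) for mu, s in zip(mus, sigmas)]
    # Unnormalized posterior over stimuli; normalization does not change the argmin.
    posterior = [lk * p for lk, p in zip(likelihoods, priors)]
    expected_costs = [sum(c * q for c, q in zip(row, posterior)) for row in costs]
    return min(range(len(costs)), key=lambda i: expected_costs[i])
```

With a flat prior, an internal response of 0.4 (closer to s2 at +1 than to s1 at −1) favors a2; a prior strongly biased toward s1 can reverse that choice, as in trial 2 of the figure.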


Figure 2. Depiction of the measurement distributions (colored curves) and optimal criteria (equivalent to the decision rules) in two-choice tasks. The upper panel depicts the case when the two stimuli produce the same internal variability (σ1 = σ2, where σ is the standard deviation of the Gaussian measurement distribution). The gray vertical line represents the location of the optimal criterion. The lower panel shows the location of the optimal criterion when the variability of the two measurement distributions differs (σ1 < σ2, in which case the optimal criterion results in a higher proportion of s1 responses).
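With equal priors and symmetric costs, the optimal criterion in this figure is the point between the two means where the measurement densities cross. A minimal numerical sketch (the means and standard deviations are assumed example values) finds that crossing by bisection:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian measurement distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_criterion(mu1, sigma1, mu2, sigma2, tol=1e-9):
    """Bisect for the point between the means where the two likelihoods are equal.

    Respond s1 below this point and s2 above it.
    """
    lo, hi = mu1, mu2
    f = lambda x: gaussian_pdf(x, mu1, sigma1) - gaussian_pdf(x, mu2, sigma2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change is in the lower half
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For σ1 = σ2 the criterion sits midway between the means; for σ1 < σ2 it shifts toward the mean of the wider distribution, yielding a higher proportion of s1 responses, as the caption notes.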


Figure 3. Depiction of a failure to maintain a stable criterion. The optimal criterion is shown in Figure 2, but observers often fail to maintain that criterion over the course of the experiment, resulting in a criterion that effectively varies across trials. Colored curves show measurement distributions.


Figure 4. Depiction of optimal adjustment of choice criteria. In addition to the s1 and s2 measurement distributions (in thin red and blue lines), the figure shows the corresponding posterior probabilities as a function of x assuming uniform prior (in thick red and blue lines). The vertical criteria depict optimal criterion locations on x (thin gray lines) and correspond to the horizontal thresholds (thick yellow lines). Optimal criterion and threshold for equal prior probabilities and payoffs are shown in dashed lines. If unequal prior probability or unequal payoff is provided such that s1 ought to be chosen three times as often as s2, then the threshold would optimally be shifted to 0.75, corresponding to a shift in the criterion such that the horizontal threshold and vertical criterion intersect on the s2 posterior probability function. The y-axis is probability density for the measurement distributions and probability for the posterior probability functions; the y-axis ticks refer to the posterior probability.
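The correspondence between a threshold on the posterior and a criterion on x can be made concrete. For equal-variance Gaussians the log odds are linear in x, so the inversion is closed-form; the sketch below uses assumed example parameters (means ±1, σ = 1), not values from the article:

```python
import math

def posterior_s2(x, mu1=-1.0, mu2=1.0, sigma=1.0):
    """Posterior probability of s2 at internal response x, assuming a uniform prior."""
    log_lr = 0.5 * ((x - mu1) / sigma) ** 2 - 0.5 * ((x - mu2) / sigma) ** 2
    return 1.0 / (1.0 + math.exp(-log_lr))

def criterion_for_threshold(p, mu1=-1.0, mu2=1.0, sigma=1.0):
    """The x at which posterior_s2(x) equals the threshold p (closed-form inversion)."""
    log_odds = math.log(p / (1.0 - p))
    return log_odds * sigma ** 2 / (mu2 - mu1) + 0.5 * (mu1 + mu2)
```

A threshold of 0.5 corresponds to the unbiased criterion midway between the means; shifting the threshold on the s2 posterior to 0.75 moves the criterion to ln(3)/2 ≈ 0.55, so s1 is chosen roughly three times as often.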


Figure 5. (A) Depiction of one possible accuracy/reaction time (RT) curve. Percent correct responses increases monotonically as a function of RT and asymptotes at 90%. (B) The total reward/RT curve for the accuracy/RT curve from panel A with the following additional assumptions: (1) observers complete as many trials as possible within a 30-minute window, (2) completing a trial takes 1.5 seconds on top of the RT (because of stimulus presentation and between-trial breaks), and (3) each correct answer results in 1 point, whereas incorrect answers result in 0 points. The optimal RT – the one that maximizes the total reward – is depicted with dashed lines.
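The reward-rate computation in panel B can be reproduced with a short numerical sketch. The exact shape of the accuracy/RT curve below (an exponential rising to the 90% asymptote) is an assumption for illustration; the session length, per-trial overhead, and payoff follow the caption:

```python
import math

def accuracy(rt):
    """Hypothetical accuracy/RT curve: rises monotonically, asymptotes at 90%."""
    return 0.9 * (1.0 - math.exp(-2.0 * rt))

def total_reward(rt, session_s=30 * 60, overhead_s=1.5):
    """Expected points in the session: trials completed times P(correct).

    Each trial takes rt + overhead_s seconds; each correct answer earns 1 point.
    """
    return (session_s / (rt + overhead_s)) * accuracy(rt)

# Grid search over RTs (0.01 to 4.99 s) for the reward-maximizing response time.
best_rt = max((i / 100 for i in range(1, 500)), key=total_reward)
```

Responding too fast wastes trials on errors, and responding too slowly completes too few trials; the maximum falls in between, as the dashed lines in the figure indicate.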


Figure 6. Depiction of how an observer should give confidence ratings. Similar to Figure 4, both the measurement distributions and posterior probabilities as a function of x assuming uniform prior are depicted. The confidence thresholds (depicted as yellow lines) correspond to criteria defined on x (depicted as gray lines). The horizontal thresholds and vertical criteria intersect on the posterior probability functions. The y-axis is probability density for the measurement distributions and probability for the posterior probability functions; the y-axis ticks refer to the posterior probability.


Figure 7. Depiction of the relationship between one-stimulus and two-stimulus tasks. Each axis corresponds to a one-stimulus task (e.g., Fig. 2). The three sets of concentric circles represent two-dimensional circular Gaussian distributions corresponding to presenting two stimuli in a row (e.g., s2,s1 means that s2 was the first stimulus and s1 was the second stimulus). If the discriminability between s1 and s2 is d′ (one-stimulus task; gray lines in triangle), then the Pythagorean theorem gives us the expected discriminability between s1,s2 and s2,s1 (two-stimulus task; blue line in triangle).
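The Pythagorean relationship in this figure reduces to a one-line computation: the two stimulus orders differ by d′ along each of the two one-stimulus axes, so their separation is d′·√2.

```python
import math

def two_stimulus_dprime(dprime):
    """Expected discriminability between the orders (s1,s2) and (s2,s1).

    The two orders differ by d' along each axis of the two-dimensional
    representation, so by the Pythagorean theorem their distance is d' * sqrt(2).
    """
    return dprime * math.sqrt(2)
```

For example, a one-stimulus d′ of 1 predicts a two-stimulus d′ of about 1.41.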


Figure 8. Optimal cue combination. Two cues that give independent information about the value of a sensory feature (red and blue curves) are combined to form a single estimate of the feature value (yellow curve). For Gaussian cue distributions, the combined cue distribution is narrower than both individual cue distributions, and its mean is closer to the mean of the distribution of the more reliable cue.
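For Gaussian cues, the optimal combination shown here is the standard inverse-variance-weighted average; the sketch below is a minimal illustration of that rule (the particular means and variances in the test are assumed values):

```python
def combine_cues(mu1, var1, mu2, var2):
    """Reliability-weighted average of two independent Gaussian cues.

    Each cue is weighted by its inverse variance, so the combined estimate is
    pulled toward the more reliable cue, and the combined variance is smaller
    than either individual variance.
    """
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    combined_mu = w1 * mu1 + (1.0 - w1) * mu2
    combined_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return combined_mu, combined_var
```

Two equally reliable cues are averaged; an unreliable cue gets proportionally less weight, and combining always reduces variance.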


Figure 9. Examples of illusions and biases. (A) Cardinal repulsion. A nearly vertical (or horizontal) line looks more tilted away from the cardinal axis than it is. (B) Adelson's checkerboard brightness illusion. Square B appears brighter than square A, even though the two squares have the same luminance. (Image ©1995, Edward H. Adelson) (C) Tilt aftereffect. After viewing a tilted adapting grating (left), observers perceive a vertical test grating (right) to be tilted away from the adaptor. (D) Effects of spatial attention on contrast appearance (Carrasco et al. 2004). An attended grating appears to have higher contrast than the same grating when it is unattended. (E) Effects of action affordances on perceptual judgments (Witt 2011). Observers judge an object to be closer (far white circle compared to near white circle) relative to the distance between two landmark objects (red circles) when they are holding a tool that allows them to reach that object than when they have no tool.


Table 1. Summary of hypotheses proposed to account for suboptimal decisions.