
Perceptual suboptimality: Bug or feature?

Published online by Cambridge University Press:  10 January 2019

Christopher Summerfield
Affiliation:
Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom. christopher.summerfield@psy.ox.ac.uk
Vickie Li
Affiliation:
Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom. chui.li@psy.ox.ac.uk

Abstract

Rahnev & Denison (R&D) argue that whether people are “optimal” or “suboptimal” is not a well-posed question. We agree. However, we argue that the critical question is why humans make suboptimal perceptual decisions in the first place. We suggest that perceptual distortions have a normative explanation – that they promote efficient coding and computation in biological information processing systems.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2018 

Rahnev & Denison (R&D) argue that psychologists and neuroscientists are unduly concerned with the question of whether perceptual decisions are “optimal” or “suboptimal.” They suggest that this question is ill posed, and that researchers should instead use observer models to provide an idealised benchmark against which to compare human behaviour.

In large part, we agree. Nevertheless, we suggest that the article rather sidesteps the major conceptual issue that underpins this debate from the standpoint of cognitive science, neuroscience, and machine learning: Why do these suboptimalities occur in the first place? Here, we argue that paradoxically, perceptual distortions observed in the lab often have a sound normative basis. In other words, perceptual “suboptimality” is best seen as a “feature” rather than a “bug” in the neural source code that guides our behaviour.

The authors discuss how suboptimal behaviours arise from distortions in the prior or likelihood functions, or misconceptions about the relevant cost function or decision rule. As they show, the Bayesian framework offers an elegant means to characterise the sources of bias or variance that corrupt decisions. However, it does not offer principled insights into why perceptual distortions might occur. To illustrate why this question is pressing, consider the perspective of a researcher attempting to build an artificial brain. She needs to know whether a given behavioural phenomenon – for example, the sequential decision bias that R&D discuss – is something that the artificial system should embrace or eschew. Only by knowing why biological systems display this phenomenon can this question be addressed.

Over recent years, advances have been made towards addressing the “why” of perceptual distortion. One elegant example pertains to the oblique effect (Appelle 1972), which (as R&D allude to) can be brought under the umbrella of Bayesian inference by considering human priors over the natural statistics of visual scenes, in which cardinal orientations predominate. But here, the Bayesian notion of a “prior” is an oversimplification that does not explain how or why the effect arises. In fact, the oblique effect can be understood by considering the optimisation principle that allows visual representations to be formed in the first place. Various classes of unsupervised learning rule, such as Hebbian learning, encourage neural systems to form representations whose statistics match those of the external world (Simoncelli 2003). This gives rise to an efficiency principle: Neural coding is distributed in a way that ensures that maximal resources are devoted to those features that are most likely to be encountered in natural environments (Girshick et al. 2011; Wei & Stocker 2015; 2017). The “why” of the oblique effect has an answer: It arises because of a neural coding scheme that has evolved to be maximally efficient.
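The efficiency principle above can be made concrete with a toy calculation. Following the efficient-coding analyses of Girshick et al. (2011) and Wei & Stocker (2015), suppose the square root of the Fisher information allocated to each orientation is proportional to its prior probability in natural scenes; discrimination thresholds then scale inversely with the prior. The particular prior shape and its 0.8 weight below are illustrative choices, not fitted values:

```python
import numpy as np

# Hypothetical orientation prior (in degrees), peaked at the cardinal
# axes 0 and 90 to mimic natural scene statistics.
theta = np.linspace(0.0, 179.0, 180)
prior = 1.0 + 0.8 * np.cos(np.deg2rad(4.0 * theta))
prior /= prior.sum()

# Efficient-coding rule: sqrt(Fisher information) proportional to the
# prior density, so the discrimination threshold scales as 1 / prior.
threshold = 1.0 / prior
threshold /= threshold.min()  # 1.0 at the best-resolved (cardinal) orientations

cardinal = threshold[theta == 0.0][0]
oblique = threshold[theta == 45.0][0]
print(f"relative threshold at  0 deg: {cardinal:.1f}")
print(f"relative threshold at 45 deg: {oblique:.1f}")
```

Under these assumptions the oblique effect falls out directly: thresholds at 45° are several times larger than at the cardinals, purely because coding resources track the environmental statistics.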

Another way of understanding why perceptual distortions might arise is via consideration of the sources of uncertainty that corrupt decisions. When judging visual stimuli, noise arising during sensory encoding limits performance – for example, low-contrast stimuli are hard to see. However, for a capacity-limited system (such as a biological agent), noise that arises “late” – that is, during inference itself – places a further constraint on the fidelity of information processing (Drugowitsch et al. 2016). Recently, categorisation tasks that require the integration of information in space and time have revealed perceptual distortions in humans, such as the “robust averaging” of visual features to which R&D refer (de Gardelle & Summerfield 2011). The compressive nonlinearity that produces this effect would reduce performance for an observer with limitless capacity. However, simulations show that distorted transduction can paradoxically maximise reward when decisions are corrupted by “late” noise – that is, noise that arises during inference, rather than at the level of sensory encoding. This is again because of an efficiency principle – when computational resources are limited, the best policy may be to transduce perceptual information nonlinearly, allowing gain to be allocated preferentially to some features over others (Li et al. 2017). In fact, the precise form of the reward-maximising distortion varies according to the overall distribution of stimuli observed, and both behavioural and neural data suggest that humans shift from a compressive to an anticompressive form of distortion in a way that consistently maximises their performance (Spitzer et al. 2017).
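The logic of this argument can be reproduced in a toy simulation (this is a simplified sketch, not the exact model of Li et al. 2017). An observer judges whether the mean of eight noisy feature samples is positive or negative; each sample passes through a bounded transducer tanh(g·x) before averaging, and “late” noise is added to the averaged decision variable. The gains (0.2 vs. 20) and late-noise level (0.3) are illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, N_SAMPLES, MU = 200_000, 8, 0.2

def accuracy(gain, late_sd):
    """Proportion correct for a given transducer gain and late-noise level."""
    labels = rng.choice([-1.0, 1.0], size=N_TRIALS)          # category sign
    x = rng.normal(labels[:, None] * MU, 1.0,                # early (sensory) noise
                   size=(N_TRIALS, N_SAMPLES))
    dv = np.tanh(gain * x).mean(axis=1)                      # bounded transduction
    dv += rng.normal(0.0, late_sd, N_TRIALS)                 # late (inference) noise
    return np.mean(np.sign(dv) == labels)

for late_sd in (0.0, 0.3):
    shallow, steep = accuracy(0.2, late_sd), accuracy(20.0, late_sd)
    print(f"late sd={late_sd}: near-linear={shallow:.3f}, saturating={steep:.3f}")
```

With negligible late noise, the near-linear transducer does best, as classical ideal-observer analysis would predict. With substantial late noise, the saturating transducer wins: capping the influence of extreme samples (“robust averaging”) maximises the mean decision signal relative to the fixed late noise, illustrating why the distortion can be reward-maximising for a capacity-limited system.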

Although the details are only emerging, we think it is likely that the wide range of perceptual “suboptimalities” that R&D highlight – sequential trial history effects, central tendency biases, sluggish belief change, and adaptation and/or normalisation processes – are all hallmarks of a cognitive system that has evolved to perform efficient computation in natural environments that exhibit stereotyped statistics and autocorrelation both in space and time. Indeed, a similar efficiency principle has been shown to hold for decisions in other domains, including the well-described decision biases in economic tasks, such as deciding among prospects with differing value. In one example, policies that lead to violations of axiomatic rationality can be shown to be optimal under late noise, providing an “optimal” explanation for economic irrationality (Tsetsos et al. 2016a).

More generally, biological information processing systems have intrinsic costs and constraints that place a premium on computational efficiency. In other words, brains have evolved to minimise both a behavioural cost function (maximising reward) and a neural cost function (minimising computational load). Rather than being ad hoc failure modes in biological brains, “suboptimalities” in perception expose how computation has adapted efficiently to the structure of the natural environments in which biological organisms exist. A principled research agenda, rather than merely documenting deviations from optimality, should attempt to explain them.

Acknowledgment

This work was supported by an ERC Consolidator Grant to C.S.

References

Appelle, S. (1972) Perception and discrimination as a function of stimulus orientation. Psychological Bulletin 78:266–78.
de Gardelle, V. & Summerfield, C. (2011) Robust averaging during perceptual judgment. Proceedings of the National Academy of Sciences of the United States of America 108(32):13341–46. doi:10.1073/pnas.1104517108.
Drugowitsch, J., Wyart, V., Devauchelle, A.-D. & Koechlin, E. (2016) Computational precision of mental inference as critical source of human choice suboptimality. Neuron 92(6):1398–411. Available at: http://dx.doi.org/10.1016/j.neuron.2016.11.005.
Girshick, A. R., Landy, M. S. & Simoncelli, E. P. (2011) Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics. Nature Neuroscience 14(7):926–32. Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3125404&tool=pmcentrez&rendertype=abstract.
Li, V., Herce Castanon, S., Solomon, J. A., Vandormael, H. & Summerfield, C. (2017) Robust averaging protects decisions from noise in neural computations. PLoS Computational Biology 13(8):e1005723. doi:10.1371/journal.pcbi.1005723.
Simoncelli, E. P. (2003) Vision and the statistics of the visual environment. Current Opinion in Neurobiology 13(2):144–49.
Spitzer, B., Waschke, L. & Summerfield, C. (2017) Selective overweighting of larger magnitudes during numerical comparison. Nature Human Behaviour 1:0145. doi:10.1038/s41562-017-0145.
Tsetsos, K., Moran, R., Moreland, J., Chater, N., Usher, M. & Summerfield, C. (2016a) Economic irrationality is optimal during noisy decision making. Proceedings of the National Academy of Sciences of the United States of America 113(11):3102–107. Available at: http://www.pnas.org/content/early/2016/02/24/1519157113.long.
Wei, X.-X. & Stocker, A. A. (2015) A Bayesian observer model constrained by efficient coding can explain “anti-Bayesian” percepts. Nature Neuroscience 18:1509–17. Available at: http://dx.doi.org/10.1038/nn.4105.
Wei, X.-X. & Stocker, A. A. (2017) Lawful relation between perceptual bias and discriminability. Proceedings of the National Academy of Sciences of the United States of America 114(38):10244–49.