
Supra-optimality may emanate from suboptimality, and hence optimality is no benchmark in multisensory integration

Published online by Cambridge University Press:  10 January 2019

Jean-Paul Noel*
Affiliation:
Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37240. jean-paul.noel@vanderbilt.eduhttp://jeanpaulnoel.com/

Abstract

Within a multisensory context, "optimality" has been used as a benchmark evidencing interdependent sensory channels. However, "optimality" does not truly bifurcate a spectrum from suboptimal to supra-optimal – where optimal and supra-optimal, but not suboptimal, indicate integration – as supra-optimality may result from the suboptimal integration of a present unisensory stimulus and an absent one (audio = audio + absence of vision).

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2018 

Arguably, the study of multisensory integration was born from the recording of spikes in the feline superior colliculus (Stein & Meredith 1993). These early studies presented animals with simple visual (V) flashes and auditory (A) beeps and held the occurrence of supra-additive responses (i.e., audiovisual [AV] responses greater than the sum of auditory and visual responses) as the hallmark of multisensory integration. However, this phenomenon is not common in the neocortical mantle (vs. subcortex; Frens & Van Opstal 1998), nor when multisensory integration is indexed via behavior or by measuring ensembles of neurons (e.g., local field potentials, electroencephalography [EEG], functional magnetic resonance imaging [fMRI]; Beauchamp 2005). Hence, over the last two decades there has been a greater appreciation of sub-additive responses as equally demonstrating an interesting transformation from input (i.e., A + V) to output (i.e., AV), and thus highlighting the synthesis of information across senses. That is, arguably, the classic study of multisensory integration has grown to conceive of sub- and supra-additivity as lying at the extremes of a spectrum where both ends are interesting and informative.

In parallel, the originally described "principles of multisensory integration" (e.g., information that is in close spatial and temporal proximity will be integrated) have been translated into a computational language that is seemingly applicable throughout the cortex and widely observed in behavior. As Rahnev & Denison (R&D) underline in their review, the computational framework dictating much of the current work within the multisensory field is that of Bayesian decision theory. Indeed, among others, audiovisual (Alais & Burr 2004), visuo-tactile (Ernst & Banks 2002), visuo-vestibular (Fetsch et al. 2009), and visuo-proprioceptive (van Beers et al. 1999) pairings have been demonstrated to abide by maximum likelihood estimation (MLE) – the weighting of likelihoods by their relative reliabilities and the concurrent reduction in integrated (vs. unisensory) variance. Given this extensive body of literature, I believe the gut reaction of many multisensory researchers – mine included – to this review, and to the thesis that assessing optimality is not useful, was that we must acknowledge the limitations of considering "optimality" alone, without examining its underlying components (e.g., prior, cost function), but that the construct is nevertheless valuable. If subjects behave optimally (i.e., reduce uncertainty), then at minimum there is evidence for interdependent channels. Namely, the reduction of variance in multisensory (vs. unisensory) cases is evidence that, at some point, unisensory components are fused; the next step is to understand exactly how these channels are fused.
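The MLE scheme described above can be stated concretely: each cue is weighted by its inverse variance, and the fused variance is smaller than either unisensory variance. A minimal numerical sketch (all stimulus values and variances here are hypothetical, not drawn from any cited study):

```python
# Maximum likelihood estimation (MLE) cue combination: each unisensory
# estimate is weighted by its relative reliability (inverse variance).
# All numbers below are illustrative.

def mle_combine(mu_a, var_a, mu_v, var_v):
    """Optimal (MLE) fusion of two independent Gaussian cue estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight
    w_v = 1 - w_a                                  # visual weight
    mu_av = w_a * mu_a + w_v * mu_v                # fused estimate
    var_av = (var_a * var_v) / (var_a + var_v)     # fused variance
    return mu_av, var_av

mu_av, var_av = mle_combine(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
# The fused estimate is drawn toward the more reliable (visual) cue,
# and the fused variance (0.8) undercuts both unisensory variances:
# the empirical signature of optimal integration.
print(mu_av, var_av)
```

The variance reduction computed here is precisely the quantity that experiments compare against observed multisensory performance when declaring behavior "optimal."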
Furthering this argument, it could be conceived that supra- and suboptimality exist on a continuum where evidence for supra-optimality or optimality is evidence for multisensory integration (admittedly without providing much mechanistic insight given the points raised by R&D), while suboptimality does not bear evidence of a synthesis across the senses. In other words, indexing optimality as a benchmark for integration is useful because Bayesian computations are ubiquitous in the brain and behavior, and in that it reduces the state space of integration from “anything apart from linear summation” (i.e., from sub-additive to supra-additive excluding additive) to “anything greater than or equal to optimal” (i.e., from optimal to supra-optimal but not suboptimal).

However, upon further consideration, I believe this reasoning to be erroneous (and therefore I agree with the thesis put forward by R&D). In short, contrary to the case of additivity, optimality does not lie on a spectrum from sub- to supra-optimal, and hence optimality per se is no benchmark.

Traditionally, supra-optimality (an apparent impossibility) within multisensory systems has been hypothesized to emerge from a process of "active sensing" (Schroeder et al. 2010). That is, the presence of a second sensory stimulus (e.g., A) may sharpen the representation of a first unisensory stimulus (e.g., V), so that when these are combined (e.g., AV), unisensory estimates sharper than originally measured are combined, resulting in apparent supra-optimality. Nonetheless, as Shalom and Zaidel (2018) have recently highlighted, somewhat paradoxically, it could additionally be the case that supra-optimality results from suboptimal integration. Namely, researchers typically take unisensory likelihoods at face value. However, within a multisensory (e.g., AV) context, the presentation of auditory stimuli alone is in fact not auditory alone (e.g., A), but rather the presence of auditory information and the absence of visual information (e.g., A + no V). Therefore, in this example, researchers underestimate the reliability of the auditory channel (which is truly an A likelihood plus a flat visual likelihood), and this will ultimately result in claims of supra-optimal multisensory integration. This second observation (by Shalom & Zaidel 2018) is similar to the case of active sensing, in that the sharpness of unisensory likelihoods is underestimated. However, the perspective is quite different: supra-optimality is not the result of cross-modal feedback enhancing unisensory representations solely when presented in a multisensory context; rather, in this latter case, supra-optimality is merely an experimental construct resulting from the erroneous underestimation of unisensory likelihoods. The world is by nature multisensory, and hence unisensory estimates are impoverished estimates wherein a cue has been artificially removed.
That is, supra-optimality can result from the non-optimal integration of a signal (e.g., A) and noise (e.g., a non-present V signal). In turn, there is no true gradient between supra- and suboptimality, and hence positioning optimality as a benchmark bifurcating between multisensory fusion and fission is ill advised. Instead, as highlighted by R&D, we ought to conceive of (multisensory) perception as a dynamic system where likelihoods, priors, cost functions, and decision criteria all interact interdependently in both feedforward and feedback manners.
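The argument above can be made concrete with a toy calculation. If the "auditory-alone" condition inflates the measured auditory variance relative to its true value, then the MLE benchmark computed from the measured cues overestimates the achievable fused variance, and an observer who merely integrates optimally will appear to beat it. All numbers are hypothetical:

```python
# Apparent supra-optimality from mismeasured unisensory reliability.
# Suppose the true auditory variance is 1.0, but the "auditory-alone"
# condition (really A + absence of V) yields a measured variance of 2.0.

def mle_var(var_1, var_2):
    """Variance predicted by optimal (MLE) fusion of two independent cues."""
    return var_1 * var_2 / (var_1 + var_2)

true_var_a, measured_var_a, var_v = 1.0, 2.0, 1.0

predicted = mle_var(measured_var_a, var_v)  # benchmark built from measured cues
actual = mle_var(true_var_a, var_v)         # what the observer can actually achieve

# The observer's fused variance undercuts the "optimal" benchmark
# (actual < predicted), so integration looks supra-optimal even though
# the observer is merely optimal with respect to the true likelihoods.
print(predicted, actual)
```

Nothing supra-optimal happens in this sketch; the apparent violation is entirely an artifact of the inflated unisensory variance, which is the crux of the Shalom and Zaidel (2018) point.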

References

Alais, D. & Burr, D. (2004) The ventriloquist effect results from near-optimal bimodal integration. Current Biology 14(3):257–62. doi:10.1016/j.cub.2004.01.029.
Beauchamp, M. S. (2005) Statistical criteria in fMRI studies of multisensory integration. Neuroinformatics 3(2):93–113. Available at: http://doi.org/10.1385/NI.
Ernst, M. O. & Banks, M. S. (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870):429–33. Available at: http://dx.doi.org/10.1038/415429a.
Fetsch, C. R., Turner, A. H., DeAngelis, G. C. & Angelaki, D. E. (2009) Dynamic reweighting of visual and vestibular cues during self-motion perception. Journal of Neuroscience 29:15601–12.
Frens, M. A. & Van Opstal, A. J. (1998) Visual-auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Research Bulletin 46:211–24.
Schroeder, C. E., Wilson, D. A., Radman, T., Scharfman, H. & Lakatos, P. (2010) Dynamics of active sensing and perceptual selection. Current Opinion in Neurobiology 20:172–76. Available at: https://doi.org/10.1016/j.conb.2010.02.010.
Shalom, S. & Zaidel, A. (2018) Better than optimal. Neuron 97(3):484–87. Available at: https://doi.org/10.1016/j.neuron.2018.01.041.
Stein, B. E. & Meredith, M. A. (1993) The merging of the senses. MIT Press.
van Beers, R. J., Sittig, A. C. & Denier van der Gon, J. J. (1999) Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology 81:1355–64.