
Convergent evidence for top-down effects from the “predictive brain”1

Published online by Cambridge University Press:  05 January 2017

Claire O'Callaghan
Affiliation:
Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge CB2 3EB, United Kingdom. co365@cam.ac.uk
Kestutis Kveraga
Affiliation:
Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129. kestas@nmr.mgh.harvard.edu
James M. Shine
Affiliation:
School of Psychology, Stanford University, Stanford, CA 94305. macshine@stanford.edu
Reginald B. Adams Jr.
Affiliation:
Department of Psychology, The Pennsylvania State University, University Park, PA 16801. radams@psu.edu
Moshe Bar
Affiliation:
Gonda Center for Brain Research, Bar-Ilan University, Ramat Gan 5290002, Israel. moshe.bar@biu.ac.il

Abstract

Modern conceptions of brain function consider the brain as a “predictive organ,” where learned regularities about the world are utilised to facilitate perception of incoming sensory input. Critically, this process hinges on a role for cognitive penetrability. We review a mechanism to explain this process and expand our previous proposals of cognitive penetrability in visual recognition to social vision and visual hallucinations.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2016

A neural mechanism for cognitive penetrability in visual perception

In their target article, Firestone & Scholl (F&S) readily dismiss the extensive presence of descending neural pathways (Angelucci et al. 2002; Bullier 2001), claiming that they have no necessary implications for cognitive penetrability. Yet it is precisely this architecture of feedforward and feedback projections, which begins in the primary visual cortex, ascends through the dorsal and ventral visual pathways dominated respectively by magnocellular and parvocellular cells (Goodale & Milner 1992; Ungerleider & Mishkin 1982), and is matched by reciprocal feedback connections (Felleman & Van Essen 1991; Salin & Bullier 1995), that provides the starting point for evidence in favour of top-down effects on visual perception.

Numerous studies have capitalised on inherent differences in the speed and content of magnocellular (M) versus parvocellular (P) processing to reveal their role in top-down effects (Panichello et al. 2012). Early work using functional magnetic resonance imaging (fMRI) suggested that the formation of top-down expectations, signalled by gradually increasing ventrotemporal activity, facilitated the recognition of previously unseen objects (Bar et al. 2001). In subsequent studies using both intact line drawings and achromatic, low spatial frequency (LSF) stimuli (which preferentially recruit M pathways), early activity was evident in the orbitofrontal cortex (OFC) ~130 ms after stimulus presentation, well before object recognition-related activity peaks in the ventrotemporal cortex (Bar et al. 2006). An fMRI study using dynamic causal modelling later confirmed that M-biased stimuli specifically activated a pathway from the occipital cortex to the OFC, which then initiated top-down feedback to the fusiform gyrus. This connectivity pattern differed from that evoked by P-biased stimuli, for which only feedforward flow increased between the occipital cortex and fusiform gyrus (Kveraga et al. 2007a). OFC activity predicted recognition of M, but not P, stimuli, and M stimuli were recognised ~100 ms faster. Another fMRI study showed that this OFC facilitation of object recognition was triggered exclusively for meaningful LSF images: only meaningful images, but not meaningless images (from which predictions could not be generated), showed increased functional connectivity between the lateral OFC and the ventral visual pathway (Chaumon et al. 2013). We argue not only that these results demonstrate the importance of descending neural pathways, which F&S do not dispute (cf. sect. 2.2), but also that these recurrent connections penetrate bottom-up perception and facilitate it via feedback of activated information from the OFC.

This top-down activity does not merely reflect recognition or “back-end” memory-based processes, which F&S suggest are commonly conflated with top-down effects. Instead, the rapid onset of OFC activation and its subsequent coupling with visual cortical regions indicate that these top-down processes affect perception proper, which we suggest occurs in the form of predictions that constrain the ongoing perceptual process. These predictions categorise ambiguous visual input into a narrow set of most probable alternatives based on all available information. As a richly connected association region receiving sensory, visceral, and limbic inputs, the OFC is ideally situated to integrate crossmodal information and generate expectations, based on previous experience, that can be compared with incoming sensory input. Predictive information from the OFC is then back-propagated to inferior temporal regions and integrated with high spatial frequency information. Thus, by constraining the number of possible interpretations, the OFC provides a signal that guides continued low-level visual processing, resulting in a refined visual percept that is identified faster (Trapp & Bar 2015).

Another aspect of this top-down guidance process that can penetrate bottom-up visual processing involves constraints imposed by the stimulus context. In the model that emerged from these data, “gist” information is extracted from LSFs in the visual input, and predictions are generated about the most probable interpretation of the input, given the current context (Bar 2004). When bottom-up visual input is ambiguous, the same object can be perceived as a hair dryer or a drill, depending on whether it appears in a bathroom or a workshop (e.g., Bar 2004, Box 1). A network sensitive to contextual information, which also includes the parahippocampal, retrosplenial, and medial orbitofrontal cortices (Aminoff et al. 2007; Bar & Aminoff 2003), has been implicated in computing this context signal. Crucially, this process does not simply amount to better guesswork: magnetoencephalography and phase-synchrony analyses show that these top-down contextual influences occur during the formative stages of a visual percept, extending all the way back to early visual cortex (Kveraga et al. 2011).
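
To make the constraint-by-prediction idea of the preceding paragraphs concrete, the toy Bayesian sketch below shows how a contextual prior can disambiguate the same low spatial frequency “gist.” The category labels and probabilities are our own illustrative assumptions, not values drawn from the cited studies, and the sketch is a caricature of the computation rather than a model of the neural circuitry.

```python
# Toy Bayesian sketch of context-constrained object recognition.
# All numbers are invented for illustration only.

def posterior(prior, likelihood):
    """Combine a contextual prior with ambiguous sensory evidence."""
    unnorm = {obj: prior[obj] * likelihood[obj] for obj in prior}
    z = sum(unnorm.values())
    return {obj: p / z for obj, p in unnorm.items()}

# Low spatial frequency "gist": the blurred shape is equally consistent
# with a hair dryer or a drill (ambiguous bottom-up evidence).
likelihood = {"hair dryer": 0.5, "drill": 0.5}

# Hypothetical contextual priors supplied top-down
# (cf. the contextual associations network described in the text).
bathroom_prior = {"hair dryer": 0.9, "drill": 0.1}
workshop_prior = {"hair dryer": 0.1, "drill": 0.9}

print(posterior(bathroom_prior, likelihood))  # hair dryer dominates
print(posterior(workshop_prior, likelihood))  # drill dominates
```

The same ambiguous evidence yields opposite interpretations under the two priors, which is the qualitative point of the hair dryer/drill example above.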

The picture emerging from this work suggests that ongoing visual perception is directly and rapidly influenced by previously learnt information about the world. This is undoubtedly a highly adaptive mechanism, promoting more efficient processing amidst the barrage of complex visual input that our brains receive. In the next section, we extend this model by incorporating ecologically valid examples of how top-down effects on visual perception facilitate complex human interactions, and by considering the ramifications when the delicate balance between prediction and sensory input is lost in clinical disorders.

Cognitive penetrability in broader contexts – visual hallucinations and social vision

Top-down influences on visual perception are also observable in clinical disorders that manifest visual hallucinations, including schizophrenia, psychosis, and Parkinson's disease. Most strikingly, the perceptual content of visual hallucinations can be determined by autobiographical memories; familiar people or animals are a common theme (Barnes & David 2001). The frequency and severity of visual hallucinations are exacerbated by mood and physiological states (e.g., stress, depression, and fatigue), with mood also playing an important role in determining the content of hallucinations (e.g., when the image of a deceased spouse is perceived during a period of bereavement) (Waters et al. 2014). Such phenomenological enquiry into visual hallucinations suggests that their content is influenced in a top-down manner by stored memories and current emotional state. These anecdotal reports are mirrored by experimental evidence that the psychosis spectrum is associated with an overreliance on prior knowledge, or predictive processing, when interpreting ambiguous visual stimuli (Teufel et al. 2015). Together, these observations are consistent with a framework in which top-down influences come to dominate visual processing in hallucinations. Importantly for theories of cognitive penetrability, visual hallucinations typically involve a hallucinated object being perceived as embedded within the actual scenery, such that the hallucination is thoroughly integrated with sensory input (Macpherson 2015). Existing neural frameworks for visual hallucinations posit an imbalance between bottom-up sensory information and top-down signals. These frameworks implicate overactivity in regions that supply top-down information during normal visual perception, including the medial temporal and prefrontal sites in the model outlined above. Abnormal activity in these regions, and in their connectivity with the visual cortex, plays a causative role in creating the hallucinatory percepts that effectively hijack visual perception (Shine et al. 2014). Electrical stimulation studies targeting these regions independently confirm that abnormal activity in temporal lobe and midline areas is capable of generating complex visual hallucinations (Selimbeyoglu & Parvizi 2010).
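
The imbalance posited by these frameworks can be caricatured as a shift in the relative weighting of prediction and sensory evidence. The minimal sketch below uses the standard precision-weighted (Gaussian cue-combination) rule; the specific numbers, the 0-1 “face-likeness” scale, and the mapping onto hallucinations are illustrative assumptions of ours rather than parameters taken from the cited work.

```python
# Minimal precision-weighting sketch of the prior/sensory imbalance
# discussed above. Numbers and scale are illustrative assumptions.

def percept(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Posterior mean of two Gaussian sources, weighted by precision."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean + sensory_precision * sensory_mean) / total

# Encode "how face-like the percept is" on an arbitrary 0-1 scale.
prediction = 1.0      # top-down expectation of a familiar face
sensory_input = 0.1   # the actual scene contains little face-like evidence

# Balanced weighting: the percept stays close to the sensory input.
print(percept(prediction, 1.0, sensory_input, 4.0))   # ~0.28

# Overweighted prior (cf. the overreliance on prior knowledge reported
# for the psychosis spectrum): the percept is pulled toward the
# prediction despite weak sensory evidence.
print(percept(prediction, 16.0, sensory_input, 4.0))  # ~0.82
```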

Visual information conveying social cues is some of the subtlest, yet richest, perceptual input we receive – consider the abundance of information delivered in a sidelong glance or a furrowed brow. Top-down influences that allow us to recognise patterns in our social environment and interpret them rapidly are a cornerstone of adaptive social behaviour (de Gelder & Tamietto 2011). Available evidence suggests that social visual processing leverages precisely the same neural mechanism described above for object and scene recognition. However, because social cues are typically more ambiguous, social vision must rely on top-down expectations to an even greater extent than object recognition. Social cues, including eye gaze, gender, culture, and race, directly influence the perception of, and neural response to, facial emotion (Adams & Kleck 2005; Adams et al. 2003; 2015), and they exert increasing influence as the expression becomes more ambiguous (Graham & LaBar 2012). Critically, these effects are also modulated by individual differences, such as trait anxiety and progesterone levels, both in perceptual tasks (Conway et al. 2007; Fox et al. 2007) and in the amygdala response to threat cues (Ewbank et al. 2010). Dovetailing with the model outlined above, fusiform cortex activation tracks closely with objective gradations between morphed male and female faces, whereas OFC responses track with categorical perceptions of face gender (Freeman et al. 2010). As in object recognition, the OFC may be categorising continuously varying social stimuli into a limited set of alternative interpretations. Social vision therefore provides an important example of cognitive penetrability in visual perception, one that utilises stored memories and innate templates to make sense of perceptual input.
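
The same toy Bayesian formulation illustrates why greater ambiguity in an expression increases the leverage of contextual cues such as gaze: as the sensory likelihood flattens, the prior increasingly determines the categorisation. The gaze prior and all probabilities below are hypothetical and serve only to make this qualitative point; they are not taken from the cited studies.

```python
# Toy sketch of how ambiguity in a facial expression lets a contextual
# cue (here, gaze direction) dominate its interpretation.
# Numbers and the gaze prior are illustrative assumptions only.

def posterior(prior, likelihood):
    """Normalised product of prior and likelihood over expression labels."""
    unnorm = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Hypothetical prior given a direct gaze, assumed here to weakly favour
# reading the expression as anger rather than fear.
gaze_prior = {"anger": 0.7, "fear": 0.3}

# A clear expression: strong sensory evidence for fear, so the prior
# barely moves the outcome.
clear = {"anger": 0.05, "fear": 0.95}
print(posterior(gaze_prior, clear))      # fear still wins (~0.89)

# An ambiguous expression: flat evidence lets the gaze cue decide.
ambiguous = {"anger": 0.5, "fear": 0.5}
print(posterior(gaze_prior, ambiguous))  # anger wins (~0.70)
```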

Conclusion

The convergent experimental and ecological evidence we have outlined suggests a visual processing system profoundly influenced by top-down effects. The model we describe fits with a “predictive brain” that harnesses previous experience to hone sensory perception. In the face of the evidence reviewed here, it seems difficult to argue categorically that cognitive penetrability in visual perception is yet to be convincingly demonstrated.

Footnotes

1.

Claire O'Callaghan and Kestutis Kveraga are co-first authors of this commentary.

References

Adams, R. B., Gordon, H. L., Baird, A. A., Ambady, N. & Kleck, R. E. (2003) Effects of gaze on amygdala sensitivity to anger and fear faces. Science 300(5625):1536.
Adams, R. B., Hess, U. & Kleck, R. E. (2015) The intersection of gender-related facial appearance and facial displays of emotion. Emotion Review 7(1):5–13.
Adams, R. B. & Kleck, R. E. (2005) Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion 5(1):3.
Aminoff, E., Gronau, N. & Bar, M. (2007) The parahippocampal cortex mediates spatial and nonspatial associations. Cerebral Cortex 17(7):1493–503.
Angelucci, A., Levitt, J. B., Walton, E. J., Hupe, J.-M., Bullier, J. & Lund, J. S. (2002) Circuits for local and global signal integration in primary visual cortex. The Journal of Neuroscience 22(19):8633–46.
Bar, M. (2004) Visual objects in context. Nature Reviews Neuroscience 5(8):617–29.
Bar, M. & Aminoff, E. (2003) Cortical analysis of visual context. Neuron 38(2):347–58.
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., Hämäläinen, M., Marinkovic, K., Schacter, D. & Rosen, B. (2006) Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences USA 103(2):449–54.
Bar, M., Tootell, R. B., Schacter, D. L., Greve, D. N., Fischl, B., Mendola, J. D., Rosen, B. R. & Dale, A. M. (2001) Cortical mechanisms specific to explicit visual object recognition. Neuron 29(2):529–35.
Barnes, J. & David, A. S. (2001) Visual hallucinations in Parkinson's disease: A review and phenomenological survey. Journal of Neurology, Neurosurgery, and Psychiatry 70(6):727–33. doi:10.1136/jnnp.70.6.727.
Bullier, J. (2001) Integrated model of visual processing. Brain Research Reviews 36(2):96–107.
Chaumon, M., Kveraga, K., Barrett, L. F. & Bar, M. (2013) Visual predictions in the orbitofrontal cortex rely on associative content. Cerebral Cortex 24(11):2899–907. doi:10.1093/cercor/bht146.
Conway, C., Jones, B., DeBruine, L., Welling, L., Smith, M. L., Perrett, D., Sharp, M. A. & Al-Dujaili, E. A. (2007) Salience of emotional displays of danger and contagion in faces is enhanced when progesterone levels are raised. Hormones and Behavior 51(2):202–206.
de Gelder, B. & Tamietto, M. (2011) Faces, bodies, social vision as agent vision, and social consciousness. In: The science of social vision, ed. Adams, R. B., Ambady, N., Nakayama, K. & Shimojo, S., pp. 51–74. Oxford University Press.
Ewbank, M. P., Fox, E. & Calder, A. J. (2010) The interaction between gaze and facial expression in the amygdala and extended amygdala is modulated by anxiety. Frontiers in Human Neuroscience 4(56):1–11.
Felleman, D. J. & Van Essen, D. C. (1991) Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex 1(1):1–47.
Fox, E., Mathews, A., Calder, A. J. & Yiend, J. (2007) Anxiety and sensitivity to gaze direction in emotionally expressive faces. Emotion 7(3):478.
Freeman, J. B., Rule, N. O., Adams, R. B. & Ambady, N. (2010) The neural basis of categorical face perception: Graded representations of face gender in fusiform and orbitofrontal cortices. Cerebral Cortex 20(6):1314–22.
Goodale, M. A. & Milner, A. D. (1992) Separate visual pathways for perception and action. Trends in Neurosciences 15(1):20–25.
Graham, R. & LaBar, K. S. (2012) Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention. Neuropsychologia 50(5):553–66.
Kveraga, K., Boshyan, J. & Bar, M. (2007a) Magnocellular projections as the trigger of top-down facilitation in recognition. The Journal of Neuroscience 27(48):13232–40. doi:10.1523/jneurosci.3481-07.2007.
Kveraga, K., Ghuman, A. S., Kassam, K. S., Aminoff, E. A., Hämäläinen, M. S., Chaumon, M. & Bar, M. (2011) Early onset of neural synchronization in the contextual associations network. Proceedings of the National Academy of Sciences USA 108(8):3389–94.
Macpherson, F. (2015) Cognitive penetration and nonconceptual content. In: The cognitive penetrability of perception: New philosophical perspectives, ed. Zeimbekis, J. & Raftopoulos, A., pp. 331–59. Oxford University Press.
Panichello, M. F., Cheung, O. S. & Bar, M. (2012) Predictive feedback and conscious visual experience. Frontiers in Psychology 3(620):1–8.
Salin, P.-A. & Bullier, J. (1995) Corticocortical connections in the visual system: Structure and function. Physiological Reviews 75(1):107–54.
Selimbeyoglu, A. & Parvizi, J. (2010) Electrical stimulation of the human brain: Perceptual and behavioral phenomena reported in the old and new literature. Frontiers in Human Neuroscience 4(46):1–11.
Shine, J. M., O'Callaghan, C., Halliday, G. M. & Lewis, S. J. G. (2014) Tricks of the mind: Visual hallucinations as disorders of attention. Progress in Neurobiology 116:58–65. Available at: http://dx.doi.org/10.1016/j.pneurobio.2014.01.004.
Teufel, C., Subramaniam, N., Dobler, V., Perez, J., Finnemann, J., Mehta, P. R., Goodyer, I. M. & Fletcher, P. C. (2015) Shift toward prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. Proceedings of the National Academy of Sciences USA 112(43):13401–406.
Trapp, S. & Bar, M. (2015) Prediction, context and competition in visual recognition. Annals of the New York Academy of Sciences 1339:190–98. doi:10.1111/nyas.12680.
Ungerleider, L. & Mishkin, M. (1982) Two cortical visual systems. In: Analysis of visual behavior, ed. Ingle, D., Goodale, M. & Mansfield, R., pp. 549–86. The MIT Press.
Waters, F., Collerton, D., ffytche, D. H., Jardri, R., Pins, D., Dudley, R., Blom, J. D., Mosimann, U. P., Eperjesi, F. & Ford, S. (2014) Visual hallucinations in the psychosis spectrum and comparative information from neurodegenerative disorders and eye disease. Schizophrenia Bulletin 40(Suppl 4):S233–45.