
Prospection does not imply predictive processing

Published online by Cambridge University Press:  19 June 2020

Piotr Litwin
Affiliation:
Faculty of Psychology, University of Warsaw, 00-189 Warszawa, Poland; Institute of Philosophy and Sociology, Polish Academy of Sciences, 00-330 Warszawa, Poland. piotr.litwin@psych.uw.edu.pl
Marcin Miłkowski
Affiliation:
Institute of Philosophy and Sociology, Polish Academy of Sciences, 00-330 Warszawa, Poland. mmilkows@ifispan.waw.pl http://marcinmilkowski.pl/en/

Abstract

Predictive processing models of psychopathologies are not explanatorily consistent with the present account of abstract thought. These models are based on latent variables that probabilistically map the structure of the world; as such, they cannot be informed by a representational ontology of mental objects and states. What the two accounts share is merely a terminological affinity between subjective and informational uncertainty.

Type: Open Peer Commentary
Copyright: © The Author(s), 2020. Published by Cambridge University Press

Gilead et al. propose an ontology of mental representations and argue that it may have important implications for predictive processing models of psychiatric disorders. Indeed, the proposed account and predictive processing seem to share some similarities: In both, representations form a hierarchical, tree-like structure organized along the continuum of abstractness, and this organization serves the goal of uncertainty mitigation.

Unfortunately, the authors do not specify how exactly their representational account could inform predictive processing models of psychopathology – and these models could certainly use some guidance. Consider models of schizophrenia. Phenomena such as diminished oddball effects or reduced susceptibility to visual illusions have been proposed to arise from a failure to attenuate sensory precision (Friston et al. 2016), that is, from weak low-level (perceptual) priors (Sterzer et al. 2018), which fail to constrain sensory input in accordance with the brain's expectations. At the same time, hallucinations are supposed to emerge because of strong priors exerting a disproportionate influence on perceptual inference, producing percepts out of thin air (Corlett et al. 2019). Low-level priors were found to be both reduced in delusion-prone patients (Schmack et al. 2015) and enhanced in hallucination-prone healthy individuals (Powers et al. 2017). Even more perplexingly, recent studies showed that the derivation of perceptual priors from natural scene statistics (Kaliuzhna et al. 2019) or the susceptibility to a wider range of visual illusions (Grzeczkowski et al. 2018) is actually unaffected in schizophrenic individuals.
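To make the tension concrete, consider how prior strength plays out in the simplest possible case: precision-weighted fusion of a Gaussian prior with a Gaussian sensory sample. The sketch below is our own toy illustration, not a model taken from any of the cited papers; the function and parameter names are ours.

```python
def fuse(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted combination of a Gaussian prior and a Gaussian observation.

    The posterior mean is a precision-weighted average, so whichever
    source is more precise dominates the resulting estimate ("percept").
    """
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, post_precision

# "Weak" low-level prior: the estimate tracks the noisy input almost exactly,
# i.e., expectations fail to constrain sensory evidence.
print(fuse(prior_mean=0.0, prior_precision=0.1, obs=2.0, obs_precision=1.0))

# "Strong" prior: the estimate is pulled towards the expectation even though
# the sensory evidence points elsewhere -- the regime invoked for hallucinations.
print(fuse(prior_mean=0.0, prior_precision=10.0, obs=2.0, obs_precision=1.0))
```

The point of the contrast is only that "weak" and "strong" priors pull the same machinery in opposite directions, which is why attributing both illusion resistance and hallucinations to prior strength calls for additional assumptions.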

These glaring inconsistencies tend to be explained as disturbances in global predictive dynamics. On this view, inefficient lower-level priors are compensated for by higher-order, semantic prior beliefs, which simultaneously drive hallucinations by facilitating sensory activations consistent with delusional beliefs (Corlett et al. 2019; Sterzer et al. 2018; 2019). However, this contradicts the core assumption of predictive processing that all cognitive processes arise from computations performed at various levels of a single, homogeneous representational hierarchy in which only adjacent layers interact directly (Williams 2018). The assumption of adjacency is shared by virtually all contemporary computational psychiatry models, regardless of their exact implementation – be it a Deep Boltzmann Machine (Corlett et al. 2019), a belief propagation algorithm (Denève & Jardri 2016), or hierarchical Bayesian inference as envisioned by vanilla predictive processing. Altered global dynamics could possibly give rise to the phenomena observed in psychiatry, but they do not license unmediated interaction between non-adjacent layers, which cannot occur in such hierarchical architectures. Finally, it is rather unlikely that higher-order priors could enhance signaling in sensory cortices: in predictive processing, top-down connections are taken to be inhibitory rather than excitatory (Denève & Jardri 2016), as their main job is to suppress prediction errors. Thus, predictive processing accounts of schizophrenia themselves seem to be like the robot Herbie (mentioned in the target article, p. 52), as they "cannot abide by some of [their] imperatives without breaking others."
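The adjacency constraint can be made explicit with a toy linear predictive coding hierarchy. The sketch below is our own illustration in the spirit of standard hierarchical predictive coding schemes, not the implementation of any model cited above; note that every update involves only prediction errors exchanged between neighbouring levels, so the topmost, "semantic" level can influence the sensory level only through the intermediate ones.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 6, 4, 2]                 # sensory level -> ... -> most abstract level
# top-down generative weights: level k+1 predicts level k
W = [rng.normal(scale=0.1, size=(sizes[k], sizes[k + 1]))
     for k in range(len(sizes) - 1)]
x = [np.zeros(s) for s in sizes]     # state estimate at each level
x[0] = rng.normal(size=sizes[0])     # sensory level clamped to the input

lr = 0.05
for _ in range(200):
    # prediction errors are defined strictly between ADJACENT levels
    errors = [x[k] - W[k] @ x[k + 1] for k in range(len(sizes) - 1)]
    for k in range(1, len(sizes)):
        # each hidden level reduces the error it leaves unexplained below...
        grad = W[k - 1].T @ errors[k - 1]
        # ...while being constrained by the prediction from the level above
        if k < len(sizes) - 1:
            grad -= errors[k]
        x[k] += lr * grad
```

Whatever "altered global dynamics" amounts to, it has to be realized by sequences of such local exchanges; there is no update rule by which the top level acts on the sensory level directly.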

We actually believe that the representational hierarchy introduced by Gilead et al. could alleviate some of the problems that haunt predictive processing models of psychopathology. The account specifies relations (e.g., relative positions in the hierarchy) between abstracta and the degree of "abstraction saturation" of representations. It thus provides a clear definition of how the continuum of abstractness orders the hierarchy, which remains an unsolved problem for predictive coders (see Williams 2018). However, how could it discharge the inconsistencies discussed above? The proposed hierarchy of representations is expressed in classic ontological categories of mental objects and states (and interactions between these symbolic representations). It is unclear how it could inform the more sophisticated take on representation in predictive processing, where representations are assumed to resemble the causal-probabilistic structure of the environment in the form of latent probabilistic variables occupying various levels of an inferential hierarchy (Gładziejewski 2015). The causal matrix of the external causes of a system's activations is represented probabilistically: posterior distributions at one level determine prior distributions over plausible (posterior) parameter values at the subordinate level. One cannot even state the exact problems of predictive processing without this probabilistic language, let alone provide remedies for them. In particular, some of the proposed abstract representations cannot easily be placed in a single, homogeneous hierarchy (e.g., predicators, which serve as representational entities to be filled by other abstract representations).
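For concreteness, here is a minimal, purely illustrative rendering (our own construction, not Gilead et al.'s proposal or any cited model) of the probabilistic idiom at stake: a posterior over a discrete superordinate cause fixes the prior over a subordinate continuous feature, and all "content" takes the form of distributions over latent variables rather than discrete mental objects.

```python
import numpy as np
from scipy.stats import norm

# Superordinate level: posterior over two hypotheses about the distal cause,
# assumed to have been inferred from earlier evidence.
p_cause = np.array([0.8, 0.2])            # P(cause = A), P(cause = B)

# Each hypothesis induces a different prior over a subordinate feature,
# so the higher-level posterior acts as a mixture prior one level down.
priors = [norm(loc=1.0, scale=0.5), norm(loc=-1.0, scale=0.5)]

# Subordinate level: a noisy sensory sample updates the feature estimate.
sample, noise_sd = 0.2, 0.7
grid = np.linspace(-3.0, 3.0, 601)
dx = grid[1] - grid[0]

prior_pdf = sum(w * p.pdf(grid) for w, p in zip(p_cause, priors))
posterior = prior_pdf * norm(loc=sample, scale=noise_sd).pdf(grid)
posterior /= posterior.sum() * dx         # normalize on the grid

print("posterior mean of the subordinate feature:",
      (grid * posterior).sum() * dx)
```

Translating mental objects and predicators into constraints on such distributions, or on the mappings between levels, is exactly the step the target article leaves unspecified.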

Gilead et al. advance the case of "scruffies," arguing that both kinds of representations (symbolic and structural/probabilistic) may play a significant role in cognition. This move is becoming more common in the debate, as if being a "scruffy" absolved one of the sin of providing disconnected or piecemeal explanations. Yet the presented account remains silent about how symbolic and subsymbolic representations could interact. A few simple tricks will not do: (1) recourse to higher-order processes (conceptualization, inner speech, or communication, traditionally taken to arise from symbolic processes), (2) rephrasing them in predictive processing terms, and (3) providing a Bayesian "just-so" story about how emerging beliefs may lead to psychological transformation. These moves are not enough to show that the account can guide further development of predictive processing models in psychiatry.

Thus, we consider the "important implications" for predictive processing to be merely declarative, founded on terminological similarities between prospection and prediction. The lure of equivocation is strong, but there are major differences that cannot be overlooked. The presented account focuses on particular cognitive phenomena (e.g., future-oriented mental time travel) and proposes how a representational hierarchy organized by abstractness allows humans to mitigate subjective, emotionally laden uncertainty stemming from psychological distance. In contrast, predictive processing claims that brains attempt to mitigate informational uncertainty, and that the computational processes serving this purpose underlie all cognition. Why connect these accounts at all? In our opinion, one does not have to relate to a dominant explanatory paradigm at all costs in order to justify the explanatory value of one's own account. We believe that the presented theory can easily stand on its own.

Acknowledgments

The authors were supported by the National Science Centre (Poland) research grant under the decision DEC-2014/14/E/HS1/00803.

References

Corlett, P. R., Horga, G., Fletcher, P. C., Alderson-Day, B., Schmack, K. & Powers, A. R. (2019) Hallucinations and strong priors. Trends in Cognitive Sciences 23(2):114–27. https://doi.org/10.1016/j.tics.2018.12.001
Denève, S. & Jardri, R. (2016) Circular inference: Mistaken belief, misplaced trust. Current Opinion in Behavioral Sciences 11:40–48. https://doi.org/10.1016/j.cobeha.2016.04.001
Friston, K., Brown, H. R., Siemerkus, J. & Stephan, K. E. (2016) The dysconnection hypothesis (2016). Schizophrenia Research 176(2–3):83–94. https://doi.org/10.1016/j.schres.2016.07.014
Gładziejewski, P. (2015) Predictive coding and representationalism. Synthese 1–24. https://doi.org/10.1007/s11229-015-0762-9
Grzeczkowski, L., Roinishvili, M., Chkonia, E., Brand, A., Mast, F. W., Herzog, M. H. & Shaqiri, A. (2018) Is the perception of illusions abnormal in schizophrenia? Psychiatry Research 270:929–39. https://doi.org/10.1016/j.psychres.2018.10.063
Kaliuzhna, M., Stein, T., Rusch, T., Sekutowicz, M., Sterzer, P. & Seymour, K. J. (2019) No evidence for abnormal priors in early vision in schizophrenia. Schizophrenia Research 210:245–54. https://doi.org/10.1016/j.schres.2018.12.027
Powers, A. R., Mathys, C. & Corlett, P. R. (2017) Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors. Science 357(6351):596–600. https://doi.org/10.1126/science.aan3458
Schmack, K., Schnack, A., Priller, J. & Sterzer, P. (2015) Perceptual instability in schizophrenia: Probing predictive coding accounts of delusions with ambiguous stimuli. Schizophrenia Research: Cognition 2(2):72–77. https://doi.org/10.1016/j.scog.2015.03.005
Sterzer, P., Adams, R. A., Fletcher, P., Frith, C., Lawrie, S. M., Muckli, L. & Corlett, P. R. (2018) The predictive coding account of psychosis. Biological Psychiatry 84(9):634–43. https://doi.org/10.1016/j.biopsych.2018.05.015
Sterzer, P., Voss, M., Schlagenhauf, F. & Heinz, A. (2019) Decision-making in schizophrenia: A predictive-coding perspective. NeuroImage 190:133–43. https://doi.org/10.1016/j.neuroimage.2018.05.074
Williams, D. (2018) Hierarchical Bayesian models of delusion. Consciousness and Cognition 61:129–47. https://doi.org/10.1016/j.concog.2018.03.003