
What is exactly the problem with panpsychism?

Published online by Cambridge University Press:  23 March 2022

Cyriel M. A. Pennartz*
Affiliation:
Department of Cognitive and Systems Neuroscience, Faculty of Science, University of Amsterdam, 1098 XH Amsterdam, The Netherlands. c.m.a.pennartz@uva.nl; sils.uva.nl/content/research-groups/cognitive-and-systems-neuroscience/cognitive-and-systems-neuroscience.html?cb

Abstract

Merker et al.'s critique calls for a deeper analysis of panpsychism. In principle, the concept of integrated information can be applied to photodiodes and subatomic particles, but I suggest the main obstacle is the lack of any evidence confirming the presence of consciousness. MRW's perspectivalist theory also illustrates the difficulties in synthesizing a full-fledged theory of consciousness.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Merker et al.'s critique of Tononi's integrated information theory (IIT), albeit not particularly new (cf. Aaronson, 2014; Bayne, 2018; Pennartz, 2015; Searle, 2013), delivers a vigorous account of how a lack of constraints in IIT implies (partial) panpsychism. Reasoning from IIT, however, it may be argued that the mind-brain scientific community has been thinking too conservatively about the problem and should embrace a more inclusive attitude as to which systems in nature may hold consciousness, however inanimate or primitive. According to IIT, as long as there is some degree of integration and differentiation in a multi-element system, there is some level of consciousness, even in a photodiode (Tononi & Koch, 2015). This counterargument calls for a deeper analysis of (partial) panpsychism.
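To make the quantitative claim at issue concrete, the following is a deliberately crude toy sketch, not Tononi's actual Φ formalism (which searches over all partitions and maximally irreducible cause-effect structures): it simply compares the past-to-present information carried by a small binary system as a whole against the sum carried by its nodes taken in isolation. All names (`phi_toy`, `swap`, `identity`) are my own illustrative constructions.

```python
import math
from collections import Counter
from itertools import product

def mutual_information(pairs):
    """Mutual information (bits) between past and present states,
    with each (past, present) sample in `pairs` weighted uniformly."""
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    present = Counter(q for _, q in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((past[x] / n) * (present[y] / n)))
    return mi

def phi_toy(update, n_nodes=2):
    """Whole-system past->present information minus the same quantity
    summed over each node alone (a crude stand-in for a partition)."""
    states = list(product((0, 1), repeat=n_nodes))
    whole = mutual_information([(s, update(s)) for s in states])
    parts = sum(
        mutual_information([(s[i], update(s)[i]) for s in states])
        for i in range(n_nodes)
    )
    return whole - parts

# Two nodes that copy each other's state: the dynamics are visible
# only at the whole-system level, so the toy measure is high.
swap = lambda s: (s[1], s[0])
# Two nodes that each copy themselves: fully decomposable dynamics.
identity = lambda s: s

print(phi_toy(swap))      # 2.0 bits: the whole carries information its parts do not
print(phi_toy(identity))  # 0.0 bits: the parts account for everything
```

Even this caricature shows why IIT-style measures are so permissive: any coupled dynamics, whether in neurons, photodiodes, or power-grid relays, yields a nonzero value, which is precisely why the attribution of consciousness needs independent empirical grounding.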

The natural sciences aim to provide rational explanations of the phenomena we observe in nature. From daily-life experience with ice we infer the property of “frozenness” based on the solidity and other observable properties water assumes at low temperatures, and we explain this property from the rigid lattice structures that H2O molecules assemble into during cooling. This account is sufficient to explain frozenness, and physics offers no reason to postulate a deeper form of frozenness at micro-scales (a single water molecule is not “frozen”). To this, IIT could object that such reasoning does not apply to consciousness, because the concept of integrated information remains applicable even at the subatomic level. Here, it is instructive to consider electromagnetism. Macroscopically observable properties of electromagnetic phenomena, such as the repulsion between two ferromagnets, are ultimately derived from the fundamental interaction of electromagnetism, alongside the three other fundamental interactions recognized in particle physics. At first glance, this might reinforce the possibility of consciousness as a fundamental quantity at the subatomic level, but in the case of particle physics there is ample empirical evidence confirming electromagnetic interactions at the subatomic level, based on experiments such as Rutherford scattering, deep inelastic scattering, and cloud-chamber observations. There is no evidence for consciousness residing within atoms or photodiodes, nor is there any evidence for some degree of consciousness in (networks of) stones, weather systems (Pennartz, 2009, 2015), or electric power grids (target article). For want of such evidence, consciousness becomes a meaningless term when applied to inanimate systems that lack the macroscopically observable properties we associate with consciousness (e.g., behavioural signs of waking up and falling asleep; Pennartz, Farisco, & Evers, 2019).
Thus, there must be sufficient empirical ground to attribute consciousness to an object, beyond a theory of consciousness simply postulating this to be the case. This requirement makes IIT essentially untestable as it stands; the theory should clarify how its implication of (partial) panpsychism can be tested. Following Kuhn (1962), science is not too lazy to embark on a paradigm shift, but there must be a good reason to do so.

There is much more to say about IIT than is captured by this fundamental argument or by the target paper. First, the “conceptual structure” advanced by Tononi, Boly, Massimini, and Koch (2016) fails to clarify the hard problem: how the redness of a tomato arises from neuronal networks with high Φ. Second, it remains enigmatic how IIT might explain intentionality (“aboutness”): Φ does not refer to anything beyond the system state it describes (Pennartz, Farisco, & Evers, 2019). Third, IIT (Tononi, 2015) lacks a mechanism by which the brain achieves “reality-matching,” at least for healthy, conscious perception (i.e., our percepts meaningfully relate to our interactions with, and understanding of, the environment in relation to our body). Finally, IIT postulates that inactive as well as active neurons can contribute to conscious experience, contradicting the notion that neurons can only be causally efficacious in affecting other neurons through some form of active signalling.

Simultaneously, these objections underscore how difficult it is to forge a well-constrained theory of consciousness. If Merker et al.'s perspectivalist “egocentre” theory is measured against the same yardstick they are keen to apply so rigorously to IIT, several arguments appear unfriendly towards their theory as well. Merker's “egocentre” results from an integration of multiple sensory modalities, but whether non-visual modalities conform to such a construct is highly debatable. Does our bodily sense of touch conform to a cyclopean midpoint between the eyes? Do we not locate taste in our mouths and smell around our nose? When we face a blank, white wall, then walk around and face another, identical wall, our visual perspective remains the same, but our conscious experience changes along with allocentric location. If one were now to admit a multiplicity of “egocentres,” the question returns of how these could be integrated to yield a unified experience. The authors' projective consciousness model is heuristically instructive, but leaves the door to the hard problem wide open: How is its 4D-projective geometry compatible with the qualitative properties of a scene we subjectively perceive? Instead of integrating multimodal information into a supramodal three-dimensional space, I suggest it is more productive to think of conscious experience as a multimodally rich survey of one's own situation (including one's body), in which distinctions between modalities constitute a key element (Pennartz, 2009). Further challenges for a perspectivalist framework arise when trying to implement “the brain's simulation of reality” neurally. Strides have been made in neurobiologically modelling action-dependent and action-independent predictive processing (cf. FitzGerald, Dolan, & Friston, 2015; Rao & Ballard, 1999; Spratling, 2017), but how brain systems synthesize and update modally specific, situational representations remains to be uncovered. While recognizing the value of IIT as a heuristic, quantitative tool for figuring out what a theory of consciousness could look like, I fully agree with Merker and colleagues on the thorny issues surrounding IIT. If anything, their review clarifies how important scientific rigour is in developing a full-fledged theory of consciousness, and why it is promising to exchange panpsychism for a conceptualization of consciousness as an emergent phenomenon by which the brain makes sense, every second, of the myriad sensory inputs it receives.

Financial support

This study was supported by the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

Conflict of interest

The author declares no conflict of interest.

References

Bayne, T. (2018). On the axiomatic foundations of the integrated information theory of consciousness. Neuroscience of Consciousness, 2018(1), niy007. doi:10.1093/nc/niy007
FitzGerald, T. H. B., Dolan, R. J., & Friston, K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136. doi:10.3389/fncom.2015.00136
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Pennartz, C. M. A. (2009). Identification and integration of sensory modalities: Neural basis and relation to consciousness. Consciousness and Cognition, 18(3), 718–739. doi:10.1016/j.concog.2009.03.003
Pennartz, C. M. A. (2015). The brain's representational power – On consciousness and the integration of modalities. MIT Press.
Pennartz, C. M. A., Farisco, M., & Evers, K. (2019). Indicators and criteria of consciousness in animals and intelligent machines: An inside-out approach. Frontiers in Systems Neuroscience, 13, 25. doi:10.3389/fnsys.2019.00025
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. doi:10.1038/4580
Searle, J. R. (2013). Can information theory explain consciousness? New York Review of Books, 54–58.
Spratling, M. W. (2017). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97. doi:10.1016/j.bandc.2015.11.003
Tononi, G. (2015). Integrated information theory. Scholarpedia, 10(1), 4164. http://www.scholarpedia.org/article/Integrated_information_theory
Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450–461. doi:10.1038/nrn.2016.44
Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society of London B: Biological Sciences, 370(1668), 1–18. doi:10.1098/rstb.2014.0167