
To be or to know? Information in the pristine present

Published online by Cambridge University Press:  23 March 2022

Larissa Albantakis*
Affiliation:
Department of Psychiatry, Center for Sleep and Consciousness, University of Wisconsin–Madison, Madison, WI 53719, USA. albantakis@wisc.edu; https://centerforsleepandconsciousness.psychiatry.wisc.edu/people/larissa-albantakis/

Abstract

To be true of every experience, the axioms of Integrated information theory (IIT) are necessarily basic properties and should not be “over-psychologized.” Information, for example, merely asserts that experience is specific, not generic. It does not require “access.” The information a system specifies about itself in its current state is revealed by its unfolded cause–effect structure and quantified by its integrated information.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Integrated information theory (IIT) starts with the aim to account for the essential properties of phenomenal experience in physical terms. Essential means true of every conceivable experience, whether I am watching a movie or contemplating the nature of consciousness, whether I am self-absorbed, dreaming, or on psychedelics. Given the wide range of possible experiences we may undergo, the essential properties identified by the axioms of IIT are necessarily basic properties.

Self-awareness and other feelings accompanying higher-order psychological functions are not deemed essential in this sense; they are contents of particular experiences. A loss of awareness of self, body, and time may be approximated, for example, through transcendental meditation techniques (Tononi et al., 2016a).

What then is “information” in IIT and what is required of a system to specify information about itself?

First, as an axiom, information must refer to an essential property of each individual experience, here and now, not a comparison across experiences. What the information axiom asserts is that every experience is specific as opposed to generic: it is the particular way it is. Although this may seem like a truism, it nevertheless implies the possibility of a myriad of experiences that differ from the current one, all being the particular way they are, and it constrains the ways in which a physical system could account for phenomenal experience in concert with the other axioms (Barbosa, Marshall, Albantakis, & Tononi, 2021).

Second, whether and to what extent a system complies with all postulates of IIT is evaluated and quantified by the integrated information (Φ) of that system. The notion of information assessed by the IIT formalism differs in important ways from that of Shannon's theory of communication (Tononi et al., 2016b): IIT's integrated-information measure is causal, intrinsic, compositional, integrated, and definite. As indicated early on (Balduzzi & Tononi, 2008; Tononi, 2008), it must also be state-dependent. In IIT, an experience corresponds to the specific cause–effect structure (CES) of the physical substrate in a particular state. To compute the CES and its value of integrated information, the investigator requires the system's complete transition probability matrix (TPM) – its causal model. The TPM does not reflect "knowledge" that the system has about itself. It describes what the system is in physical, operational terms (to see this, consider a logical AND gate: it is what it is because it outputs "1" if and only if all of its inputs were "1"). Integrated information, thus, captures what the system specifies about itself by being that system in its particular state.
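As a concrete illustration of what such a causal model amounts to, the following sketch builds the complete TPM of a small deterministic network in plain Python. The particular wiring (an OR, an AND, and a XOR gate) is an illustrative assumption of mine, not a system analyzed in the works cited here; a tool such as PyPhi would take a TPM of this kind as input to unfold the cause–effect structure and compute Φ.

```python
import numpy as np
from itertools import product

# Illustrative 3-node network (assumed wiring, not from the commentary):
# node A = OR(B, C), node B = AND(A, C), node C = XOR(A, B).
# Each gate is memoryless; the TPM below simply records, in operational
# terms, what the assembly does: the probability of each next state
# given each current state.

def update(state):
    a, b, c = state
    return (int(b or c), int(a and c), int(a ^ b))

states = list(product([0, 1], repeat=3))   # all 8 possible current states
tpm = np.zeros((8, 8))                     # state-by-state TPM

for i, s in enumerate(states):
    j = states.index(update(s))            # deterministic successor state
    tpm[i, j] = 1.0                        # P(next state | current state)

for s, row in zip(states, tpm):
    print(s, "->", states[int(row.argmax())])
```

Nothing in this table is "known" by the gates; it simply describes, state by state, what the system does, which is all the formalism requires.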

In this sense, IIT's formal framework evaluates precisely what Merker et al. say should be evaluated: how much information the system specifies from its own intrinsic perspective in a state-dependent manner (Albantakis & Tononi, 2019; Barbosa et al., 2021; Oizumi, Albantakis, & Tononi, 2014). It does not evaluate information capacity (an average measure across all states of a probability distribution), as claimed in the target article. Moreover, in their own interpretation of informativeness, which "pertains to the significance of a given experience to the 'haver' of that experience, and, loosely speaking, to the system itself in that given state or state transition," Merker et al. presuppose "access." They then proceed to claim that (a) Markovian systems with memoryless constituents (as typically assumed by IIT) are not capable of memory, and (b) for this reason cannot "access" information about themselves. These are extraordinary claims from a physical, computational, and information-theoretical perspective. They are also extraneous: the information axiom does not imply, and was never meant to capture, the capacity of the experiencer to "access" information about their past, present, or future experiences.
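The distinction between a capacity and a state-dependent quantity can be made concrete with a deliberately generic toy calculation; this is not IIT's intrinsic measure, only an illustration of the difference between averaging over a state distribution and evaluating a quantity for the one state the system is actually in (the distribution below is assumed purely for illustration).

```python
import numpy as np

# Assumed toy distribution over four system states.
p = np.array([0.5, 0.25, 0.125, 0.125])

# A capacity-style quantity averages over all states (Shannon entropy).
capacity_like = -np.sum(p * np.log2(p))

# A state-dependent quantity is evaluated for the current state only
# (here, its surprisal), regardless of what other states would yield.
current_state = 2
state_dependent = -np.log2(p[current_state])

print(f"average over states : {capacity_like:.3f} bits")
print(f"current state only  : {state_dependent:.3f} bits")
```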

Viewed more broadly, the "access" issue exemplifies Merker et al.'s high-level, psychological approach to consciousness, detached from concerns about implementation in physical, mechanistic terms. Any memory a system has about its past states must be physically instantiated in the present. Recurrent connectivity allows assemblies of memoryless elements to form a Markovian system with memory, here and now (see the sketch below). It is worth pointing out that the IIT formalism evaluates precisely when we can say that an assembly of interacting elements forms one integrated system (Albantakis, 2018). The need for memory and context-dependency, moreover, provides an evolutionary incentive for systems with high information integration, which may explain why consciousness evolved, according to IIT (Albantakis, Hintze, Koch, Adami, & Tononi, 2014). IIT's formalism has also been applied at higher levels of description to systems with macroscopic units (which may be non-Markovian because of incomplete information about the micro level) (Hoel, Albantakis, Marshall, & Tononi, 2016; Marshall, Albantakis, & Tononi, 2018).
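A minimal sketch of this point, using a toy assembly of my own choosing rather than any system from the commentary or the cited papers: two memoryless elements that simply copy each other's output form a recurrent, Markovian system whose present joint state nevertheless carries the trace of a past input.

```python
# Two memoryless COPY gates wired to each other. Each element's next
# state depends only on the other's current state, so the joint system
# is Markovian; yet a bit written once keeps circulating, so the
# "memory" is physically instantiated in the present state.

def step(state):
    a, b = state
    return (b, a)          # each element copies the other

state = (1, 0)             # value written by some past input
for t in range(4):
    print(t, state)
    state = step(state)    # the trace of the past persists in the present
```

The memory never resides in any single element; it exists, here and now, only in the state of the coupled assembly.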

Finally, high integrated information indeed requires high state differentiation (Marshall, Gomez-Ramirez, & Tononi, 2016). However, this follows from the formalism as a whole, not any individual postulate.

What about higher-order contents such as a sense of self, or bodily awareness? The causal powers of an integrated set of interacting elements only become apparent when they are unfolded across all orders of its compositional structure, and they depend on the specific properties of the substrate. Of course, IIT's identity claim implies that the unfolded CES of a physical substrate should ultimately account for the specific properties of particular experiences, including a system's self-knowledge (see Haun & Tononi [2019] for an initial account of spatial experience).

Given the basic nature of IIT's essential properties and the toy models used to evaluate its formal framework, an account of higher-level psychological features may seem quite a stretch. And yet IIT's explanatory power already approaches this domain by evaluating its basic properties on basic substrates (Haun & Tononi, 2019; Tononi et al., 2016b). The ability to test a theory on simple model systems is an invaluable tool for exposing internal inconsistencies. Merker et al. should give it a try.

Financial support

L.A. is supported by a grant from the Templeton World Charity Foundation, Inc. (no. TWCF0526) and the Tiny Blue Dot Foundation (UW 133AAG3451).

Conflict of interest

The author declares no conflict of interest.

References

Albantakis, L. (2018). A tale of two animats: What does it take to have goals? In Wandering towards a goal (The Frontiers Collection, pp. 5–15). Springer.
Albantakis, L., Hintze, A., Koch, C., Adami, C., & Tononi, G. (2014). Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Computational Biology, 10, e1003966. https://doi.org/10.1371/journal.pcbi.1003966
Albantakis, L., & Tononi, G. (2019). Causal composition: Structural differences among dynamically equivalent systems. Entropy, 21(10), 989. https://doi.org/10.3390/e21100989
Balduzzi, D., & Tononi, G. (2008). Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Computational Biology, 4, e1000091. https://doi.org/10.1371/journal.pcbi.1000091
Barbosa, L. S., Marshall, W., Albantakis, L., & Tononi, G. (2021). Mechanism integrated information. Entropy, 23, 362. https://doi.org/10.3390/e23030362
Haun, A., & Tononi, G. (2019). Why does space feel the way it does? Towards a principled account of spatial experience. Entropy, 21, 1160. https://doi.org/10.3390/e21121160
Hoel, E. P., Albantakis, L., Marshall, W., & Tononi, G. (2016). Can the macro beat the micro? Integrated information across spatiotemporal scales. Neuroscience of Consciousness, 2016(1), niw012. https://doi.org/10.1093/nc/niw012
Marshall, W., Albantakis, L., & Tononi, G. (2018). Black-boxing and cause–effect power. PLoS Computational Biology, 14, e1006114. https://doi.org/10.1371/journal.pcbi.1006114
Marshall, W., Gomez-Ramirez, J., & Tononi, G. (2016). Integrated information and state differentiation. Frontiers in Psychology, 7, 926. https://doi.org/10.3389/fpsyg.2016.00926
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Computational Biology, 10, e1003588. https://doi.org/10.1371/journal.pcbi.1003588
Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215, 216–242. https://doi.org/10.2307/25470707
Tononi, G., Boly, M., Gosseries, O., & Laureys, S. (2016a). The neurology of consciousness: An overview. In S. Laureys, O. Gosseries, & G. Tononi (Eds.), The neurology of consciousness: Cognitive neuroscience and neuropathology (2nd ed., pp. 407–461). Elsevier. https://doi.org/10.1016/B978-0-12-800948-2.00025-X
Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016b). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17, 450–461. https://doi.org/10.1038/nrn.2016.44