“Information is one of the most confused notions in contemporary intellectual life,” Searle once remarked (Searle, 2013). Merker et al. (sect. 3, para. 12) rightly point out that integrated information theory (IIT) conflates informativeness in the ordinary sense (a semantic notion, relative to a cognitive agent) with information capacity as defined by Shannon (a formal syntactic notion, defined by the observer). A similar confusion is at play in the literature on “neural codes” (Brette, 2019a).
Consider one of IIT's “immediately evident” claims: “an experience of pure darkness is what it is by differing […] from […] other possible experiences” (Oizumi, Albantakis, & Tononi, 2014, p. 2). Superficially, the claim has some intuitive appeal: We appreciate the particular quality of darkness because we can compare it to different experiences we have had before, for example, lightness. But the formalization of IIT makes it clear that “possible experiences” refer neither to memory nor to anticipation by the system, but to formal possibilities defined by the external observer: alternative realities that the observer envisions but that are never actually realized. Is it immediately evident that our present experience depends on hypothetical experiences that we have never had or even imagined?
To see the difficulty with this proposition, consider the photodiode consisting of a sensor and a detector, which Tononi considers minimally conscious (Tononi, 2008); in a later version of IIT, the photodiode is only conscious when the detector feeds back onto the sensor (Oizumi et al., 2014), but this makes no difference to the present discussion. Observing the state of the detector reduces the uncertainty about the previous state of the system by one bit, which makes the mechanism minimally conscious. This uncertainty reduction occurs only because we assume that the photodiode lives in a universe where dark and light occur with equal probabilities. But what if the photodiode actually lived in a sealed opaque box, so that light is not a “possible experience”? In that context, the uncertainty reduction is exactly zero, because we know the system is always in the dark state. We therefore conclude that the photodiode is not conscious if it is permanently enclosed in a box, but conscious if it is not. We must also conclude that the photodiode is conscious if it is in a box that someone might remove, even if that event never actually occurs. Thus, consciousness as defined by IIT is not an intrinsic property of the system, contrary to previous claims (Tononi, 2008, p. 220).
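To make the arithmetic explicit, here is a minimal sketch; note that the uniform prior over the two states is the observer's modeling assumption, not a property of the photodiode. The uncertainty about the previous state is the Shannon entropy

\[ H = -\sum_{s \in \{\text{dark},\,\text{light}\}} p(s)\,\log_2 p(s). \]

With \(p(\text{dark}) = p(\text{light}) = \tfrac{1}{2}\), \(H = 1\) bit, which observing the detector resolves. In the sealed box, \(p(\text{dark}) = 1\) and \(H = 0\) bits, so observing the detector resolves nothing. The quantity depends entirely on the prior \(p\), which is supplied by the observer, not by the system.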
Or are we to consider that, even in the box, the lighted state is still “theoretically” possible given the design of the mechanism? But if we are allowed to count events that can never happen as possible experiences, then where do we stop? We must conclude that the repertoire of possible events is defined by the imagination of the observer; and if different minds can imagine different repertoires, the definition becomes incoherent. In any case, consciousness defined in this way is not an intrinsic property of the system. Even in the framework of property dualism (matter has conscious properties distinct from its physical properties, but linked to them by additional laws), experiences are not allowed to depend on scenarios living in someone else's imagination, but only on the current state or ongoing processes of the system.
Defining consciousness as a function of hypothetical possibilities leads IIT proponents to conclude that inactive systems can be conscious, but only if they can become active again (Oizumi et al., 2014). Specifically, Tononi imagines that a gray stimulus that would not activate color neurons would be perceived as gray, whereas lesioning the color area would result in a loss of color experience, because the neurons can no longer become active (Tononi, 2015). But what if the experimenter decided instead to cool the color area, a reversible operation? Would the person still perceive color in that case? If the experimenter decided to cool that area permanently, would the person lose color perception from the moment the brain is cooled down? And what if the cooling apparatus broke down and the brain recovered? Does imagining this possibility make the person perceive color all along?
The proposition that an inactive system can be conscious, perhaps in some sort of meditative state (Oizumi et al., 2014, p. 17), is also questionable. The system cannot itself distinguish between the experience of being inactive (or in any constant state) for a duration A and the same inactivity for a duration B, for that discrimination would have to be realized as different states. Therefore, the experience of being inactive for any amount of time must be the same as the experience of being inactive for no time at all: a non-existent experience. This is precisely what happens in the TV series “Bewitched” (Brette, 2019b). Samantha, the housewife, is actually a witch: when she twitches her nose, people freeze; when she twitches it again, they unfreeze and go on as if nothing had happened between the two instants. Their brain state was unchanged and their brain processes were stalled, so they did not experience anything at all. IIT breaks with TV-series wisdom because it associates conscious experiences with a state of the system, rather than with its ongoing processes. But to experience is something that a system does – something lived, not some formal characteristic of the system's state.
Financial support
This study was supported by Agence Nationale de la Recherche (ANR-20-CE30-0025-01), Programme Investissements d'Avenir IHU FOReSIGHT (ANR-18-IAHU-01), and Fondation Pour l'Audition (FPA RD-2017-2).
Conflict of interest
The author declares no conflict of interest.