I concur with Merker and colleagues' incisive critique of Tononi's integrated information theory (IIT). Because IIT formally allows (or even requires) the possibility of multiple consciousnesses in a single brain, it renders inexplicable the unitary, serial nature of phenomenal consciousness (Dennett, 1991; James, 1890). IIT "solves" this major problem by stipulating an "Axiom of Exclusion" that declares the singular unity of consciousness by fiat. Merker and colleagues correctly observe that this fails to explain why such serial unity should exist in a parallel and mostly unconscious brain. I argue that a hypothesis concerning the evolutionary function of consciousness can remedy this shortcoming and go some way toward meeting the "constitutive challenge" of consciousness raised by the target article.
Ever since Crick and Koch (1990) explicitly put to one side the question of "what consciousness is for" (p. 264), most commentators have avoided this biologically central problem (although see Birch, Ginsburg, & Jablonka, 2020; Feinberg & Mallatt, 2016; Humphrey, 2006; Reber, 2018). Why has consciousness evolved in some organisms, such as humans and dogs, whereas others, such as ferns or mushrooms, do fine without it? Furthermore, why should our conscious mind single out a limited subset of neural computations for subjective awareness, while constantly relegating the vast majority of neural activity to unconsciousness?
A coherent functional hypothesis should answer both of these questions and satisfy Merker and colleagues' desideratum that the unity of consciousness be derived from first principles rather than stipulated post hoc. Here, I outline such a hypothesis, formulated as they suggest at Marr's computational level, specifically postulating a necessary function of consciousness in a parallel cognitive system able to learn from its mistakes. In brief, given the multiplicity of hypotheses unconsciously entertained by a complex brain at any given time, the function of consciousness is to keep track of the one(s) actually chosen, and then to globally tag them to allow subsequent local updating of neural circuitry (Fitch, 2005, 2008). Thus, based on the outcomes of decisions and actions, consciousness retrospectively subserves the allocation of credit and blame in the responsible circuits.
To unpack this idea, let's start with some familiar, uncontroversial facts. The vertebrate brain is a massively parallel system, consisting of thousands of neuronal assemblies, each made up of hundreds to millions of collaborating cells, which simultaneously compute a wide variety of functions (Crick & Koch, 1990). Certain members of this population of computational subunits compete with one another, in the sense that their computations concern the same topic or problem (e.g., "what generated this pattern of retinal stimulation?") but make different predictions (e.g., "predator" vs. "leaf pile"). Among these competitors, some assemblies end up dominating globally via a distributed, dynamic "winner-take-all" process, such that their solutions are the ones actually chosen (phenomenally perceived, or motorically executed).
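To make this winner-take-all dynamic concrete, here is a minimal Python sketch. All assembly names, stimulus features, and activation formulas are invented purely for illustration; they are assumptions for the sketch, not claims about real neural coding:

```python
# A toy illustration: competing "assemblies" interpret the same input,
# and a winner-take-all step picks the most strongly activated one.

assemblies = {
    # Each hypothetical assembly maps a stimulus to (interpretation, activation).
    "predator_detector": lambda s: ("predator", 0.4 + 0.5 * s["motion"]),
    "leaf_pile_detector": lambda s: ("leaf pile", 0.7 - 0.3 * s["motion"]),
}

def winner_take_all(stimulus):
    """All assemblies compute in parallel; only the most strongly
    activated one's interpretation is globally 'chosen'."""
    proposals = {name: fn(stimulus) for name, fn in assemblies.items()}
    winner = max(proposals, key=lambda name: proposals[name][1])
    return winner, proposals

stimulus = {"motion": 0.8}  # a rustle with a lot of movement
winner, proposals = winner_take_all(stimulus)
print("chosen:", proposals[winner][0], "| computed but unchosen:",
      [name for name in proposals if name != winner])
```

Note that the losing assembly's interpretation is computed in full; it is simply never selected, which is exactly the information the learning mechanism below needs to track.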
From an evolutionary viewpoint, the most important such processes are the executive and motor computations that compete to generate actions, whether decisions (e.g., fight vs. flight) or the detailed motor programs that compute different ways to achieve the same goal (e.g., execute a successful leap). Crucially, this plurality of competing subunits controls a single body.
Furthermore, individuals should be able to learn from their successes and failures, which requires appropriate local updating of the individual cells and synapses that make up this vast neural population. But how do the subsystems myopically carrying out local computations even “know” whether their outputs were, in fact, chosen or executed?
Appropriate updating requires that the actual actions of the body, and their outcomes, be broadcast back to these myopic assemblies and their cells, distinguishing the computational outcome actually executed from the many alternative actions/outcomes that were computed and considered but not executed. This is a computational necessity if the units in a massively parallel system are to learn from their single body's experience. I thus suggest that a crucial evolved function of consciousness is to globally tag these actual, executed computations, thereby supporting appropriate local allocation of credit (when successful) and blame (following failure).
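Continuing the toy example above, here is a purely illustrative sketch of this global tagging and local credit assignment. The weights, learning rate, and update rule are hypothetical conveniences, not a model of actual synaptic plasticity:

```python
# Each assembly keeps a local "reliability" weight. After the body acts,
# the identity of the executed computation is broadcast globally (the
# "tag"), and only the tagged assembly updates itself.

weights = {"predator_detector": 1.0, "leaf_pile_detector": 1.0}

def broadcast_and_update(executed, outcome_good, lr=0.1):
    """Global-to-local learning signal: every assembly receives the tag
    naming the computation actually executed; only that assembly allocates
    credit (on success) or blame (on failure) to its own local weight."""
    for name in weights:
        if name == executed:  # the global tag
            weights[name] += lr if outcome_good else -lr
        # Unchosen assemblies leave their weights untouched: without the
        # tag, they cannot know whether their (unexecuted) output was apt.

# Suppose fleeing revealed a real predator: the executed computation is credited.
broadcast_and_update(executed="predator_detector", outcome_good=True)
print(weights)  # the winner's weight rises; the loser's is unchanged
```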
Given this hypothesized computational function – picking out the singular actual from the plural potential – the subjectively singular and serial nature of phenomenal consciousness follows logically. Thus, this hypothesis explains the singularity of consciousness as the natural outcome of the global-to-local information transfer necessary for a massively parallel brain to control a singular body and learn from its successes and failures. This model also explains, in terms similar to those of Merker (2007, 2012), why subcortical, one-to-many "broadcast" systems (particularly brainstem and thalamic nuclei) and cortical circuitry both play a crucial role in consciousness: The former provide the globally available learning signal, whereas the latter use this signal to locally update themselves.
Thoroughly countering the obvious riposte that any computational system that represents and chooses among alternatives must therefore be conscious would require more space than is available here. Briefly, generating a subjective "feeling of what it's like" requires specific cellular features of the biological "wetware" we use to compute (see Fitch [2008] for details).
This model joins Dennett's "multiple drafts" model of consciousness (Dennett & Kinsbourne, 1995) in being retrospective: The distributed sequence of neural events leading to perception and action precedes our consciously available "decisions," and our subjective experience is generated near the end of this process (applied "backward" in subjective time). However, in this model the stream of consciousness is not a "user illusion" devoid of functional or causal powers. Rather, the function of this retrospective construction of conscious experience is prospective, subserving appropriate learning for the future.
When did this processing develop in evolutionary time? As soon as brains evolved the "Popperian" complexity needed to entertain multiple hypotheses (Dennett, 1996), they required a mechanism to distinguish what was actually chosen or executed from the innumerable roads not taken.
My goal here is to understand why consciousness evolved, not to explain its neural basis. But seeking neural correlates of consciousness without reference to its function has led to a seemingly irreconcilable impasse in consciousness science. Hypotheses regarding its function offer a promising way out.
Financial support
This study was supported by the Austrian Science Fund (FWF DK Grant no. W1262-B29).
Conflict of interest
The author declares no conflict of interest.