The target article by Morsella et al. promotes an ambitious project meant to explain both what consciousness is and how it subserves voluntary action. This commentary discusses some conceptual issues involved in such functional explanations, pertaining to levels of explanation, languages of description, and requirements of efficient description and explanation.
Functional explanations claim that X is for Y, invoking a means/ends relationship between the two: structure X serves as means for realizing function Y. In living systems, this scheme can be applied at two levels: operation and design. At the level of system operation, it applies to interactions between means and ends – one subserving the other (i.e., X subserves Y). At the level of system design, it applies to interactions in the reverse direction – one shaping the other (i.e., Y shapes X). The first form of explanation applies to short-term operations in ongoing activity, the second to long-term design in development and evolution. Full functional accounts may combine the two timescales. For instance, the claim that legs are for locomotion means that (1) evolution has designed legs to accommodate the function of locomotion so that (2) the ensuing design enables legs to subserve that function in ongoing operation. Likewise, the target article claims (i) that consciousness is designed to accommodate the needs of action control, (ii) so that consciousness subserves action control in ongoing operation. Notwithstanding the interchange of the roles of explanandum and explanans, the two explanations can be legitimately combined because they pertain to different levels and timescales.
When we talk about legs and locomotion, we address both X and Y in the common language of kinematics and dynamics. However, such use of a common language does not apply when discussing relationships between consciousness and action control. Consciousness pertains to phenomenal experience, whereas action control addresses behavioral performance and underlying brain mechanisms. Because the languages of experience and performance are incommensurate, there is no obvious way of bridging the categorical gap and explaining one through the other. A project aiming at “homing in on consciousness in the nervous system” promises a new attack on the mind/brain problem and the two-language problem entailed in it.
A convenient way of addressing this problem is to look for features that can be expressed and understood in both languages (Prinz 1984). Thus, to understand how consciousness can subserve action control in ongoing operation requires being able to discern features of conscious experience that translate into the language of performance. Likewise, to understand how the requirements of action control can shape consciousness in development and evolution requires ascertaining features of performance that translate into the language of experience. Such feature overlap may then lay the ground for foundational explanations (see discussion below).
Efficient functional explanations must meet two basic requirements: independent description and foundational explanation. Independent description requires that X and Y are both characterized in an equal and independent manner. The target article is more elaborate on the signature of action-related performance than on the signature of conscious experience. This asymmetry runs the risk of violating independence. Independent description requires the provision of an independent account of the explanandum in the first place, that is, an account of core features of consciousness that are independent of its alleged role for action control (e.g., phenomenal experience, subjectivity, aboutness, intentionality, etc., as known from classical debates on the nature of consciousness). What is offered, instead, are functional features that already derive from the invoked role of consciousness for action control (pertaining to integration of competing efference bindings). As a result, these features cannot explain much more than themselves, and “consciousness” becomes an empty concept. It stands for no more than “that which is required for information integration in action selection.” Good old consciousness gets lost this way, replaced by an operationally defined entity that lacks any surplus meaning.
Foundational explanation requires that X and Y become related to each other in a way that conforms to our understanding of efficient explanation (Prinz 2003a; 2012, Chs. 1 and 2). Of foundational explanations, we demand that they help us understand how consciousness enables efficient response selection and how the requirements of efficient response selection act to shape consciousness. Why is it that information integration for action selection requires precisely consciousness (and not something else), and why is it that consciousness subserves precisely integration for action selection as its proper function (and not some other function)? As discussed, convincing answers to these questions must specify feature overlap between performance and experience, that is, common features that make sense in both languages. It is not easy to see how Morsella et al.'s passive frame theory can fulfill this requirement. There is no obvious way in which functional features pertaining to information integration in action control could overlap with any constitutive feature of consciousness such that one could imply or require the other. Accordingly, the theory fails to offer a foundational account of consciousness (unlike, for example, self-representational approaches for which such overlap has been claimed; cf. Graziano 2013; Prinz 2003a; 2012).
While foundational explanations require conceptual overlap between X and Y, correlational explanations may switch languages, relating an experiential state such as consciousness to a computational state such as action conflict or to a neural state such as activation in a corresponding brain network. However, correlational explanations do not answer the foundational question of how these relationships work (Graziano 2013, Ch. 1). Knowing that action selection is associated with, or even requires, consciousness (in correlational terms) does not mean understanding (in foundational terms) how the two are interrelated. We merely acknowledge the miracle that they enable and require each other, without understanding how they do so. To understand how the magic trick is done, we need to move from correlations to foundations.