
The brain is not an isolated “black box,” nor is its goal to become one

Published online by Cambridge University Press:  10 May 2013

Tom Froese
Affiliation:
Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, A.P. 20-726, 01000 México D.F., México. t.froese@gmail.com http://froese.wordpress.com; and Ikegami Laboratory, Department of General Systems Studies, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo 153-8902, Japan. http://sacral.c.u-tokyo.ac.jp/
Takashi Ikegami
Affiliation:
Ikegami Laboratory, Department of General Systems Studies, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo 153-8902, Japan. ikeg@sacral.c.u-tokyo.ac.jp http://sacral.c.u-tokyo.ac.jp/

Abstract

In important ways, Clark's “hierarchical prediction machine” (HPM) approach parallels the research agenda we have been pursuing. Nevertheless, we remain unconvinced that the HPM offers the best clue yet to the shape of a unified science of mind and action. The apparent convergence of research interests is offset by a profound divergence of theoretical starting points and ideal goals.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2013 

We share with Clark a commitment to exploring the deep continuities of life, mind, and sociality (Froese & Di Paolo 2011). Similar to the enactive notion of “sense-making,” Clark's “hierarchical prediction machine” (HPM) entails that perceiving cannot be separated from acting and cognizing. Nevertheless, we disagree with Clark's theoretical premises and their ideal consequences.

Clark begins with the assumption that the task of the brain is analogous to establishing a “view from inside the black box.” On this view, the mind is locked inside the head and it follows that, as Clark puts it, “the world itself is thus off-limits” (sect. 1.2, para. 1). This is the premise of internalism, from which another assumption can be derived, namely that knowledge about the world must be indirect. Accordingly, there is a need to create an internal model of the external source of the sensory signals, or, in Clark's terms, of “the world hidden behind the veil of perception” (sect. 1.2, para. 6). This is the premise of representationalism.

It is important to realize that these two premises set up the basic problem space, which the HPM is designed to solve. Without them, the HPM makes little sense as a scientific theory. To be sure, internalism may seem to be biologically plausible. As Clark observes, all the brain “knows” about, in any direct sense, are the ways its own states (e.g., spike trains) flow and alter. However, the enactive approach prefers to interpret this kind of autonomous organization not as a black-box prison of the mind, but rather as a self-organized perspectival reference point that serves to enact a set of meaningful relations with its milieu (Di Paolo 2009). On this view, mind and action are complex phenomena that emerge from the nonlinear interactions of brain, body, and environment (Beer 2000). Such a dynamical perspective supports a relational, direct realist account of perception (Noë 2004; 2009).
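
To make the contrast concrete, here is a minimal sketch, in Python, of a brain-body-environment system treated as a single dynamical system in the spirit of Beer (2000). Everything in it (the light field, the single leaky neuron, the gain values) is an arbitrary illustrative choice of ours rather than any published model; the point is only that coherent behavior can arise from the closed sensorimotor loop without any internal model or error signal standing in for the environment.

```python
import numpy as np

# Illustrative sketch only: a one-neuron "brain", a one-dimensional "body",
# and a light-field "environment" coupled into a single dynamical system.
# All equations and parameters are arbitrary choices for demonstration.

def light(x, source=5.0):
    """Environment: light intensity falling off with distance from a source."""
    return 1.0 / (1.0 + (x - source) ** 2)

def simulate(x0=0.0, steps=5000, dt=0.01):
    x = x0       # body: position of the agent in a 1-D world
    n = 0.0      # "brain": state of a single leaky neuron
    for _ in range(steps):
        # Two sensors on the body sample the field at slightly offset points;
        # what is sensed depends on where the body currently is.
        asymmetry = light(x + 0.1) - light(x - 0.1)
        n += dt * (-n + 100.0 * asymmetry)   # neuron driven by the sensed asymmetry
        x += dt * np.tanh(n)                 # motor output changes what is sensed next
    return x

if __name__ == "__main__":
    print(f"final position: {simulate():.2f} (light source at 5.0)")
```

The agent ends up dwelling at the light source, yet this "goal" is not encoded anywhere inside the agent; it exists only as a property of the coupled system.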

An enactive approach to neuroscience exhibits many of the virtues of the HPM approach. Following the pioneering work of Varela (1999), it is also formalizable (in dynamical systems theory); it has explanatory power (including built-in context-sensitivity); and it can be related to the fundamental structures of lived experience (including multistable perceptions). Indeed, it accounts for much of the same neuroscientific evidence, since global self-organization of brain activity – for example, via neural synchrony – requires extensive use of what Clark refers to as “backward connections” in order to impose top-down constraints (Varela et al. 2001).

Advantageously, the enactive approach avoids the HPM's essential requirement of a clean functional separation between “error units” and “representation units,” and it exhibits a different kind of neural efficiency. Properties of the environment do not need to be encoded and transmitted to higher cortical areas, not because they are already expected by an internal model of the world, but because the world is its own best model. The environment itself, as a constitutive part of the whole brain-body-environment system, replaces the multilevel generative modeling machinery that the HPM essentially requires (cf. Note 16 in the target article).

The enactive approach also avoids absurd consequences of the HPM, which follow from its generalization into an all-encompassing “free-energy principle” (FEP). The FEP states that “all the quantities that can change; i.e. that are part of the system, will change to minimize free-energy” (Friston & Stephan 2007, p. 427). According to Clark, the central idea is that perception, cognition, and action work closely together to minimize sensory prediction errors by selectively sampling, and actively sculpting, the stimulus array. But given that there are no constraints on this process (according to the FEP, everything is enslaved as long as it is part of the system), there are abnormal yet effective ways of reducing prediction error, for example by stereotypic self-stimulation, catatonic withdrawal from the world, and autistic withdrawal from others. The idea that the brain is an isolated black box, therefore, forms not only the fundamental starting point for the HPM, but also its ideal end point. Ironically, raising the HPM to the status of a universal principle has the opposite of the intended effect: it makes the HPM most suitable as an account of patently pathological mental conditions.
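
The worry can be made concrete with a deliberately crude toy model of our own (it is not Friston's or Clark's formulation): an agent that learns to predict its sensory input and whose only "action" is to move to wherever its expected prediction error is lowest. Given a choice between a fluctuating world and a perfectly constant dark room, such an agent reliably withdraws into the room and stays there.

```python
import numpy as np

# Crude toy model of unconstrained prediction-error minimization (our own
# illustration, not a published formulation of the FEP or the HPM).

rng = np.random.default_rng(0)

def sense(place):
    """A fluctuating 'world' versus a perfectly constant 'dark room'."""
    return rng.normal(0.0, 1.0) if place == "world" else 0.0

def run(steps=500, lr=0.1):
    place = "world"
    prediction = {"world": 0.0, "room": 0.0}   # predicted sensory input per place
    exp_error = {"world": 1.0, "room": 1.0}    # learned expectation of prediction error
    history = []
    for _ in range(steps):
        s = sense(place)
        error = (s - prediction[place]) ** 2
        prediction[place] += lr * (s - prediction[place])   # "perceptual inference"
        exp_error[place] += lr * (error - exp_error[place])
        # "Action" reduced to its bare minimum: go wherever prediction error
        # is expected to be lowest.
        place = min(exp_error, key=exp_error.get)
        history.append(place)
    return history

if __name__ == "__main__":
    h = run()
    print("fraction of time withdrawn into the dark room:", h.count("room") / len(h))
```

Nothing in this agent's objective distinguishes the pathological solution from a healthy one; that distinction has to come from somewhere other than prediction-error minimization itself.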

Similar concerns about the overgeneralization of the FEP have been raised by others (Gershman & Daw 2012), and are acknowledged by Clark in his “desert landscape” and “dark room” scenarios. The general worry is that an agent's values need to be partially decoupled from prediction optimization, since reducing surprise for its own sake is not always in the organism's best interest. In this regard the enactive approach may be of help. Like Friston, it rejects the need for specialized value systems, as values are deemed to be inherent in autonomous dynamics (Di Paolo et al. 2010). But it avoids the FEP's problems by grounding values in the viability constraints of the organism. Arguably, it is the organism's precarious existence as a thermodynamically open system in non-equilibrium conditions which constitutes the meaning of its interactions with the environment (Froese & Ziemke 2009).

However, this enactive account forces the HPM approach to make more realistic assumptions about the conditions of the agent. Notably, it is no longer acceptable that the FEP requires a “system that is at equilibrium with its environment” (Friston 2010, p. 127). This assumption may appear plausible at a sufficiently abstract level (Ashby 1940), but only at the cost of obscuring crucial differences between living and non-living systems (Froese & Stewart 2010). Organisms are essentially non-equilibrium systems, and thermodynamic equilibration with the environment is identical with disintegration and death, rather than optimal adaptiveness. Yet, contrary to the motivations for the FEP (Friston 2009, p. 293), this does not mean that organisms ideally aim to get rid of disorder altogether. Living beings are precariously situated between randomness and stasis by means of self-organized criticality, and this inherent chaos has implications for perception (Ikegami 2007). Following Bateson, we propose that it is more important to be open to perceiving differences that make a difference than to eliminate differences that could surprise you.
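
Self-organized criticality itself can be made concrete with a textbook example that has nothing to do with brains: the Bak-Tang-Wiesenfeld sandpile, sketched below in Python. A slowly driven, locally relaxing pile settles by itself into a regime poised between stasis and runaway change, where most perturbations do almost nothing but occasional ones cascade through the whole system; it is this kind of poised regime, rather than minimized surprise, that the appeal to criticality points to.

```python
import numpy as np

# Textbook Bak-Tang-Wiesenfeld sandpile: a standard illustration of
# self-organized criticality, not a model of an organism or a brain.

def sandpile(size=15, grains=5000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    avalanche_sizes = []
    for _ in range(grains):
        grid[rng.integers(size), rng.integers(size)] += 1   # slow external driving
        topples = 0
        while (grid >= 4).any():
            unstable = (grid >= 4).astype(int)
            topples += int(unstable.sum())
            grid -= 4 * unstable                 # local relaxation rule (toppling)
            # Each toppled cell passes one grain to each neighbour;
            # grains pushed over the edge are lost (dissipation at the boundary).
            grid[1:, :] += unstable[:-1, :]
            grid[:-1, :] += unstable[1:, :]
            grid[:, 1:] += unstable[:, :-1]
            grid[:, :-1] += unstable[:, 1:]
        avalanche_sizes.append(topples)
    return np.array(avalanche_sizes)

if __name__ == "__main__":
    sizes = sandpile()
    hits = sizes[sizes > 0]
    print(f"median avalanche size: {int(np.median(hits))}, largest: {hits.max()}")
```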

References

Ashby, W. R. (1940) Adaptiveness and equilibrium. The British Journal of Psychiatry 86:478–83.
Beer, R. D. (2000) Dynamical approaches to cognitive science. Trends in Cognitive Sciences 4(3):91–99.
Di Paolo, E. A. (2009) Extended life. Topoi 28(1):9–21.
Di Paolo, E. A., Rohde, M. & De Jaegher, H. (2010) Horizons for the enactive mind: Values, social interaction, and play. In: Enaction: Toward a new paradigm for cognitive science, ed. Stewart, J., Gapenne, O. & Di Paolo, E. A., pp. 33–87. MIT Press.
Friston, K. (2009) The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences 13(7):293–301.
Friston, K. J. (2010) The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 11(2):127–38.
Friston, K. & Stephan, K. (2007) Free energy and the brain. Synthese 159(3):417–58.
Froese, T. & Di Paolo, E. A. (2011) The enactive approach: Theoretical sketches from cell to society. Pragmatics and Cognition 19(1):1–36.
Froese, T. & Stewart, J. (2010) Life after Ashby: Ultrastability and the autopoietic foundations of biological individuality. Cybernetics and Human Knowing 17(4):83–106.
Froese, T. & Ziemke, T. (2009) Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence 173(3–4):466–500.
Gershman, S. J. & Daw, N. D. (2012) Perception, action and utility: The tangled skein. In: Principles of brain dynamics: Global state interactions, ed. Rabinovich, M. I., Friston, K. J. & Varona, P., pp. 293–312. MIT Press.
Ikegami, T. (2007) Simulating active perception and mental imagery with embodied chaotic itinerancy. Journal of Consciousness Studies 14(7):111–25.
Noë, A. (2004) Action in perception. MIT Press.
Noë, A. (2009) Out of our heads: Why you are not your brain, and other lessons from the biology of consciousness. Farrar, Straus and Giroux/Hill and Wang.
Varela, F. J. (1999) The specious present: A neurophenomenology of time consciousness. In: Naturalizing phenomenology: Issues in contemporary phenomenology and cognitive science, ed. Petitot, J., Varela, F. J., Pachoud, B. & Roy, J.-M., pp. 266–317. Stanford University Press.
Varela, F. J., Lachaux, J.-P., Rodriguez, E. & Martinerie, J. (2001) The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience 2:229–39.