
The primary (dis)function of consciousness: (Non)Integration

Published online by Cambridge University Press:  24 November 2016

Liad Mudrik*
Affiliation:
School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 69978, Israel. mudrikli@post.tau.ac.il; http://www.mudriklab.tau.ac.il

Abstract

Morsella et al. put forward an interesting theory about the functions of consciousness. However, I argue that, contrary to its proclaimed goal, this theory is more successful at showing what consciousness does not do, namely integrate, than at establishing what it does. In addition, the question of phenomenality and its relation to integration is left open.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2016 

The target article by Morsella et al. starts with a fundamental question, sought after by generations of scholars: “What does consciousness contribute to the functioning of the nervous system?” (introductory para. 1). To answer this question, it adopts an interesting and unconventional approach – the passive frame theory – whose main thrust is that consciousness “integrate[s] information … involving incompatible skeletal muscle intentions for adaptive action” (sect. 2.4, para. 5).

Indeed, information integration has been repeatedly suggested as one function, and even as the primary function, of consciousness (e.g., Baars 2005; Dehaene & Changeux 2011; Dehaene & Naccache 2001; Koch & Tononi 2011; Tononi 2013): Unconscious processing is typically claimed to rest on feedforward activations with little information sharing between processing nodes. Conscious perception, on the other hand, is held to involve long-range, recurrent processing and widespread neural activations that enable cortical interactions and synchronization (Baars 2005; Dehaene & Changeux 2011; Engel et al. 1999; Lamme & Roelfsema 2000; Schurger et al. 2015; Tononi 2013; Tononi & Edelman 1998; Treisman 2003). These patterns of neural activation, then, are claimed to enable information integration: the combination of two distinct signals into a new, meaningful one. Under these accounts, consciousness is not only crucial, but also necessary for information integration (for a review of these claims and of empirical findings that either support or challenge them, see Mudrik et al. 2014).

Despite the common use of the word "integration" and its description as the primary function of consciousness, the account suggested by Morsella and co-authors is utterly different from the above-mentioned theories: while for those theories consciousness is almost omnipotent, as integration (and other functions) cannot take place in its absence, the passive frame theory portrays a somewhat impotent consciousness, lagging behind unconscious mechanisms, which actually do most of the work. As I argue below, the proposed account, albeit innovative, comprehensive, and thought-provoking, is more about showing what the function of consciousness is not, and about claiming that it does not integrate, than the reverse, contrary to its proclaimed goal.

According to the theory, the conscious field is clearly unintegrated in the sense that each conscious event is encapsulated from the others: it is discrete, devoid of any ability to interact with or influence other states. Unconscious mechanisms, by contrast, "combine" and "comprehend" the different contents in the conscious field in a way that leads to action plan selection. They are active, as opposed to the passive conscious states. To illustrate their point, the authors present the Internet analogy: Consciousness is the platform that allows two people (standing in, here, for unconscious mechanisms) to debate, but it cannot resolve the conflict. It serves as a means of transmitting the message, but has no bearing on the message itself or its effects.

However, in this Internet analogy, where does the integration truly lie? Would one say that the Internet integrates the discussion, or that the two discussants, admittedly able to talk thanks to the Internet, are the ones doing the integration? By their own analogy, and by the theory it is meant to illustrate, the authors pull the rug from under their own concept of integration, leaving the latter devoid of real content. If unconscious mechanisms are the ones resolving action conflicts, what is the meaning of saying that consciousness's primary role is to integrate "incompatible skeletal muscle intentions for adaptive action"? Why not simply say that consciousness's primary role is to enable unconscious integration of incompatible intentions?

It seems as if the authors are trying to have their cake and eat it too: On the one hand, they ascribe all integrative functions to unconscious mechanisms. On the other hand, they keep saying that consciousness's primary function is information integration, perhaps reflecting their own reluctance to wave goodbye to the long-standing connection between consciousness and integration. In that respect, their article could have started with the (very interesting) question, "What do unconscious processes contribute to the functioning of the nervous system?", because their theory provides a clearer answer to that question than to the one they chose to pose.

Aside from what may seem like a confusion of goals (attempting to clarify the function of consciousness, while in fact arguing for the function of unconscious mechanisms and the non-functionality of consciousness), the theory does not explain why the conscious field is needed for unconscious integration to occur. Without the Internet, our two discussants would not have been able to share their views. If so, consciousness may be needed for distinct unconscious events to share information. Why should it be phenomenal, though? In the Internet analogy, technology enables two intentional agents with phenomenal experiences to interact. Yet here, phenomenal states are supposed to enable the interaction of non-phenomenal states, which are claimed to interpret, resolve conflicts, and yield decisions. Where does that leave intention? Why should information sharing be phenomenal? And why assume that non-phenomenal states are able to integrate, while phenomenal ones cannot interact with one another? These questions are left unanswered under the current account.

Finally, let us imagine that the creature in the cave is a human being, deliberating about whether he should run away from the smoke and leave all his precious belongings behind. What is the status of his conscious deliberations over this possible voluntary action, according to the passive frame theory? Do they represent false or illusory states, given that (being conscious experiences) they cannot affect the chosen action, which is unconsciously determined?

To sum up, the present target article is an impressive attempt to put forward a comprehensive functional theory of consciousness. To that end, it uniquely combines neuroscientific, psychological, and philosophical work, integrating current findings with ideas and suggestions that date back decades, even to the nineteenth century, which is in itself a rare and important endeavor that should be applauded. What was the role of the authors' consciousness in forming such an integrative account? Did it only create perception-like representations of these ideas, so that unconscious mechanisms could then integrate them into a skeletomotor output of typing? My own (dysfunctional, non-integrative?) conscious field wonders if this could indeed be the case.

References

Baars, B. J. (2005) Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Progress in Brain Research 150:45–53.
Dehaene, S. & Changeux, J. P. (2011) Experimental and theoretical approaches to conscious processing. Neuron 70(2):200–27.
Dehaene, S. & Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79(1–2):1–37.
Engel, A. K., Fries, P., König, P., Brecht, M. & Singer, W. (1999) Temporal binding, binocular rivalry, and consciousness. Consciousness and Cognition 8(2):128–51.
Koch, C. & Tononi, G. (2011) A test for consciousness. Scientific American 304(6):44–47.
Lamme, V. A. F. & Roelfsema, P. R. (2000) The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences 23(11):571–79.
Mudrik, L., Faivre, N. & Koch, C. (2014) Information integration without awareness. Trends in Cognitive Sciences 18(9):488–96.
Schurger, A., Sarigiannidis, I., Naccache, L., Sitt, J. D. & Dehaene, S. (2015) Cortical activity is more stable when sensory stimuli are consciously perceived. Proceedings of the National Academy of Sciences USA 112(16):E2083–92.
Tononi, G. (2013) Integrated information theory of consciousness: An updated account. Archives Italiennes de Biologie 150(2–3):290–326.
Tononi, G. & Edelman, G. M. (1998) Consciousness and complexity. Science 282(5395):1846–51.
Treisman, A. M. (2003) Consciousness and perceptual binding. In: The unity of consciousness: Binding, integration, and dissociation, ed. A. Cleeremans, pp. 95–113. Oxford University Press.