
United we stand, divided we fall: Cognition, emotion, and the moral link between them

Published online by Cambridge University Press:  08 June 2015

Andrea Manfrinati*
Affiliation:
Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milano, Italy. andrea.manfrinati@unimib.ithttp://www.unimib.it/

Abstract

Contrary to Greene's dual-process theory of moral judgment (Greene 2013), this commentary suggests that the network view of the brain proposed by Pessoa, in which emotion and cognition may serve as labels for certain behaviors but do not map cleanly onto compartmentalized pieces of the brain, could represent a better explanation of the rationale behind people's moral behavior.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2015 

Since Damasio (1994) revealed the error of Descartes, the neurosciences seem to have taken two different paths in the study of brain organization over the past two decades. On the one hand, some researchers have emphasized the deep interactions between cognition and emotion by postulating an integration of the brain's networks, none of which should be regarded as specifically emotional or cognitive (Feldman Barrett et al. 2007; Ochsner & Gross 2005; Pessoa 2008). On the other hand, there has been an escalation of Manichean points of view, according to which separate systems for emotion and cognition correspond to different modules in the brain (Keren & Schul 2009). In the domain of moral decision making specifically, the dualism between emotion and cognition has led to a dual-process brain framework that has received considerable attention thanks to the neuroimaging work of Joshua Greene (Greene et al. 2001; 2004; for a review, see Greene 2013). The main point of Greene's theory is that, when we make moral decisions (deciding whether an act would be right or wrong), we can be automatic, fast, and emotional, or controlled, slow, and rational. In an attempt to build a bridge between neuroimaging data and moral philosophy, Greene proposes that deontological judgments arise from brain areas more associated with emotional reactions, whereas utilitarian judgments arise from brain areas more associated with cognitive control. In this sense, deontology is a strongly emotion-based theory that may, in some cases, pull us away from clear deliberation about the significant characteristics of a moral situation. In some circumstances, however, we can transcend this visceral reaction by adopting a more rational, utilitarian point of view that overrides it and that may constitute a more reliable moral guide because it is ineluctably cognitive. What emerges is a dual-process (moral) theory in which the deontological-utilitarian conflict (a conflict that mirrors the emotional-cognitive conflict) should be understood in terms of a mutually suppressive relationship (Greene 2008).

In his interesting book The Cognitive-Emotional Brain, Pessoa (2013) proposes a different framework of brain organization in which cognitive control and emotion are not competing mechanisms but integrated processes. The book's main claim is that emotion and cognition are functionally integrated systems that continuously influence each other's operation. Although Pessoa never speaks about moral judgments, I am very sympathetic to this claim, and I think that the framework he proposes may also have important echoes in the context of moral decision making. Specifically, the network perspective on brain organization (i.e., behavioral processes are implemented not by a single brain area but by the interaction of multiple areas), accompanied by the integrative view of cognition and emotion that emerges from Pessoa's book, permits us to reconsider the conflict between utilitarian (cognitive) and deontological (emotional) ethics in moral decision making suggested by Greene (2013), and to “transform” this moral conflict through a process that integrates a set of different moral considerations. Here, I focus on two aspects of Pessoa's proposal.

A “conflict” between reason and emotion does not require the existence of two systems

As noted by Keren and Schul (2009), to prove the existence of two systems we need a strong argument about how the emotion/cognition dichotomy is arranged, such that one system can be characterized by one attribute and the other by its complement. The arrangement chosen by Greene (2013) is that emotions are automatic processes, whereas reason involves the conscious application of decision rules. Furthermore, Greene identifies the ventromedial prefrontal cortex (vmPFC) as the area devoted to emotional processes and the dorsolateral prefrontal cortex (dlPFC) as a clearly cognitive area dedicated to cognitive control. When people are faced with the trolley dilemma, they apply a utilitarian perspective, using the dlPFC, that favors hitting the switch to maximize the number of lives saved. Conversely, when people are faced with the footbridge dilemma, they experience a strong emotional response enabled by the vmPFC, and as a result most people judge that the action is wrong, adopting a deontological perspective. The few people who instead endorse a utilitarian perspective in the footbridge dilemma have to override the negative reaction to pushing innocent people off the footbridge in order to perform an extremely affectively difficult action. Combining these ideas yields a dual-process theory of moral judgment.

But does this “alleged” conflict between cognition and emotion really require two mutually suppressive systems, and is it really a conflict between utilitarianism and deontology?

Pessoa argues that theories positing an antagonistic, push-pull organization of the prefrontal cortex with respect to cognition and emotion, although still influential, are no longer tenable. A large number of studies strongly favor an organization of the prefrontal cortex not as a simple push-pull mechanism, but as a set of interactions that result in processes that are neither purely cognitive nor purely emotional (Ochsner & Gross 2005; Pessoa 2008). Specifically, the dlPFC is seen as a focal point for cognitive-emotional interactions, which have been observed across a wide range of cognitive tasks. This means that brain regions important for executive control are actively engaged by emotion. Emotion can either enhance or impair cognitive performance, and the dual competition framework proposed by Pessoa, in which emotion and cognition interact with, and compete for, the resources required by the task, explains how emotional content affects executive control. Pessoa suggests that these interactions between emotion and cognition do not fit a simple push-pull relationship; a continuous framework therefore seems better than a dichotomous one. The theory of moral judgment proposed by Moll et al. (2007; 2008b), in which moral decision making is implemented by a single set of brain areas, could represent a valid alternative to Greene's dual-process theory. In this single-process theory of moral judgment, emotions act as a guide to the salience of situational information or as an input to the reasoning process. In that sense, the conflict between emotional and cognitive mechanisms is replaced by a process that integrates a set of different considerations. For these reasons, just as it is conceivable (and possible) to develop a continuous model that maps the interaction between emotion and cognition, I think it is also conceivable (and possible) to develop a continuous model that captures the interactions between deontology and utilitarianism as compatible and mutually reinforcing theories, without considering them mutually exclusive (Gray & Schein 2012).

Beyond the conflict between deontology and utilitarianism

In Manfrinati et al. (2013), we developed an experimental paradigm in which participants were explicitly required to choose between two possible resolutions of a moral dilemma, one deontological and the other utilitarian. Furthermore, we asked participants to rate their emotional experience during moral decision making, collecting valence and arousal ratings throughout the process of resolving the dilemma. In this way, we could assess whether, and to what extent, conscious emotion is engaged during the decision process that leads to the choice of one of the two resolutions. The results showed that cognitive and emotional processes participate in both deontological and utilitarian moral judgments. In particular, we found that, if utilitarian judgment involves controlled reasoning processes to construct a set of practical principles for our moral behavior, then the whole process may carry a high emotional cost. In fact, when people choose a utilitarian resolution, they may consider the consequences of an act as relevant in determining its morality, but they most likely feel that this resolution could undermine their moral integrity, thus evoking a harsh emotional feeling.

Furthermore, and contrary to Greene's (2008) prediction of an asymmetry between utilitarian and deontological judgments, with the former driven by controlled cognitive processes and the latter by more automatic processes, we found that controlled reasoning is required to account not only for utilitarian judgments, but also for deontological ones. In fact, our participants were slower in choosing the deontological resolution of a dilemma than the utilitarian one. Given that these results showed an integrative pattern of emotional and cognitive processes in moral judgment, I think this integrative pattern could also be applied to the relation between deontology and utilitarianism. Specifically, if the network perspective proposed by Pessoa suggests that the mind-brain is not decomposable into emotion and cognition because they are functionally integrated systems, then we could hypothesize that moral cognition cannot be decomposed into deontology and utilitarianism. As claimed by Gray and Schein (2012), the normative conflict between deontology and utilitarianism seems to lose significance when we consider the psychological aspects of moral decision making. Indeed, when we investigate these psychological aspects, we realize that the acts of the moral agent are often linked with their consequences, which suggests that people care simultaneously about deontological and utilitarian perspectives. Therefore, we might consider the relationship between deontology and utilitarianism as a continuum that “mirrors” the continuous framework between emotion and cognition highlighted by Pessoa.

To sum up, in this commentary I have tried to point out that the network view of the brain proposed by Pessoa, in which emotion and cognition may be used as labels for certain behaviors but do not map cleanly onto compartmentalized pieces of the brain, may provide a sound rationale for investigations of moral behavior. In accordance with Pessoa's claim that the effects of emotion on cognition, and vice versa, are best viewed not as a simple push-pull mechanism but as interactions that result in processes that are neither purely cognitive nor purely emotional, it no longer makes sense, in the field of moral decision making, to debate whether moral judgment is accomplished exclusively by reason or by emotion. Rather, moral judgment is the product of complex integrations and interactions between emotional and cognitive mechanisms.

References

Damasio, A. (1994) Descartes' error: Emotion, reason and the human brain. Avon.
Feldman Barrett, L., Ochsner, K. N. & Gross, J. J. (2007) On the automaticity of emotion. In: Social psychology and the unconscious: The automaticity of higher mental processes, ed. Bargh, J. A., pp. 173–219. Psychology Press.
Gray, K. & Schein, C. (2012) Two minds vs. two philosophies: Mind perception defines morality and dissolves the debate between deontology and utilitarianism. Review of Philosophy and Psychology 3:405–23.
Greene, J. D. (2008) The secret joke of Kant's soul. In: Moral psychology, vol. 3: The neuroscience of morality: Emotion, brain disorders, and development, ed. Sinnott-Armstrong, W., pp. 35–79. MIT Press.
Greene, J. D. (2013) Moral tribes: Emotion, reason, and the gap between us and them. Penguin Press.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M. & Cohen, J. D. (2004) The neural bases of cognitive conflict and control in moral judgment. Neuron 44:389–400.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. (2001) An fMRI investigation of emotional engagement in moral judgment. Science 293(5537):2105–108. Available at: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11557895.
Keren, G. & Schul, Y. (2009) Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science 4(6):533–38.
Manfrinati, A., Lotto, L., Sarlo, M., Palomba, D. & Rumiati, R. (2013) Moral dilemmas and moral principles: When emotion and cognition unite. Cognition and Emotion 27:1276–91.
Moll, J., de Oliveira-Souza, R., Garrido, G. J., Bramati, I. E., Caparelli-Daquer, E. M. A., Paiva, M. L. M. F., Zahn, R. & Grafman, J. (2007) The self as a moral agent: Linking the neural bases of social agency and moral sensitivity. Social Neuroscience 2:336–52.
Moll, J., de Oliveira-Souza, R., Zahn, R. & Grafman, J. (2008b) The cognitive neuroscience of moral emotions. In: Moral psychology, vol. 3: The neuroscience of morality: Emotion, brain disorders, and development, ed. Sinnott-Armstrong, W., pp. 1–17. MIT Press.
Ochsner, K. N. & Gross, J. J. (2005) The cognitive control of emotion. Trends in Cognitive Sciences 9:242–49.
Pessoa, L. (2008) On the relationship between emotion and cognition. Nature Reviews Neuroscience 9(2):148–58. Available at: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=18209732.
Pessoa, L. (2013) The cognitive-emotional brain: From interactions to integration. MIT Press.