
Moral judgment as reasoning by constraint satisfaction

Published online by Cambridge University Press: 11 September 2019

Keith J. Holyoak
Affiliation:
Department of Psychology, University of California, Los Angeles, CA 90095-1563. holyoak@lifesci.ucla.edu; http://reasoninglab.psych.ucla.edu
Derek Powell
Affiliation:
Department of Psychology, Stanford University, Stanford, CA 94305-2130. derekpowell@stanford.edu; http://www.derekmpowell.com

Abstract

May's careful examination of empirical evidence makes a compelling case against the primacy of emotion in driving moral judgments. At the same time, emotion certainly is involved in moral judgments. We argue that emotion interacts with beliefs, values, and moral principles through a process of coherence-based reasoning (operating at least partially below the level of conscious awareness) in generating moral judgments and decisions.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2019

May (2018) makes a compelling empirical case that reason, not emotion, is the primary causal factor driving human moral judgments. Of course, many philosophers (some predating Kant by a couple of millennia) have similarly considered the essence of moral judgment to be a matter of correct understanding. In the Analects, Confucius issued a critique that might well be applied to modern capitalism when he observed, “The superior man understands what is right; the inferior man understands what will sell.” But the sentimentalism against which May argues has attracted its own strong proponents (not all of them philosophers or moral psychologists). Ernest Hemingway laid out a simple test: “what is moral is what you feel good after and what is immoral is what you feel bad after.” Based on that subjective criterion (from Death in the Afternoon), Hemingway was able to attest to the moral rightness of bullfighting – for after the fight ends in the usual way, “I feel very sad but also very fine.” According to the great novelist's moral emotions, the artistry and allegory more than compensate for the dead bull and the dying horses.

The sentimentalist's account of moral judgment has an attractive simplicity – “Theirs not to reason why, theirs but to laugh or cry.” Moreover, few doubt that emotions play some role in moral decision making – even Kant (1785/2002) recognized that “sympathies” and “sentiments” are integral to proper moral functioning. The difficult question, which May tackles head on, is to determine how emotion and reason relate to one another in the context of moral judgment, and to assess their relative importance. We have argued previously (Holyoak & Powell 2016) that theories of moral psychology have often been premised on outmoded conceptions of both emotion and cognition. Rather than treating emotion and cognition as strictly separable processes, modern work in both psychology and neuroscience has emphasized their intricate interactions. In social psychology, appraisal theories postulate that emotions are caused by processes in which stimuli are evaluated on such cognitive dimensions as goal relevance, coping potential, and agency (Moors et al. 2013). At the neural level, compelling evidence indicates that emotion and cognition interact in the prefrontal cortex (Pessoa & Pereira 2013), where cognitive and emotional signals appear to be combined in complex ways.

In the case of cognition, an outmoded conception is that thinking consists solely of the conscious application of deterministic rules. Current cognitive theories differ widely, but the dominant overarching view is that both inductive and deductive reasoning are largely based on forms of probabilistic inference (e.g., Cheng 1997; Griffiths et al. 2008; Oaksford & Chater 2013). Probabilistic inference supports structured and systematic reasoning even when grappling with highly uncertain beliefs, premises, or observations. Probabilistic cognitive models also suggest how even simple intuitive judgments that do not draw on explicit or conscious deliberation (e.g., will a block tower fall or be stable?) might be supported by complex and highly structured knowledge, such as an intuitive theory of physics encompassing Newtonian mechanics (Battaglia et al. 2013; see Kubricht et al. 2017). Moreover, high-level human abilities such as creative problem solving (Holyoak 2019; Kounios & Beeman 2015) depend on complex interactions between processes that draw on conscious attention and working memory and others that rely on unconscious activation of neural networks distributed throughout the cortex (Knowlton et al. 2012).
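To make the idea of reasoning as probabilistic inference concrete, here is a minimal sketch (in Python) of a single Bayesian belief update. The hypotheses, priors, and likelihoods are invented for illustration; they are not parameters from any of the models cited above.

```python
# A toy Bayesian update: belief about whether a block tower will fall,
# revised after observing that the tower visibly leans. All numbers are
# illustrative assumptions, not values from the cited models.

def bayes_update(priors, likelihoods):
    """Apply Bayes' rule: posterior(h) is proportional to prior(h) * P(evidence | h)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"stable": 0.7, "falls": 0.3}       # belief before the evidence
likelihoods = {"stable": 0.2, "falls": 0.9}  # P(tower leans | hypothesis)

posterior = bayes_update(priors, likelihoods)
print(posterior)  # {'stable': ~0.34, 'falls': ~0.66}: belief shifts toward "falls"
```

The sketch shows only that graded belief revision remains systematic under uncertainty; the intuitive-physics models cited above chain many such updates over far richer, structured representations.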

As May recognizes, a dual-process conception of moral reasoning that posits a strict separation between an unconscious emotional system (identified rather oddly with the philosophical position of deontology) and a conscious system for rational reasoning (supposedly dedicated to the computation of utilitarian outcomes) ignores the evidence for emotion/cognition interactions, as well as for unconscious aspects of reasoning. May suggests that dual-process theorists might be better off drawing a distinction between fast and intuitive versus slower and more deliberative cognitive processes (neither necessarily dependent on emotion). We would press the point further, and suggest that the popular notion of dual-process models is itself simplistic. Even dual-process theorists are uncertain about the nature of the two processes, or indeed their number (Evans 2009). The fast/intuitive versus slow/deliberative distinction provides a useful shorthand to mark the extremes of a continuum, but most complex cognitive abilities – including moral judgment – are likely to be based on multiple, integrated mechanisms that quickly blur any binary division.

May briefly considers the possible role of consistency or coherence in resolving moral issues that involve conflict or ambiguity. Coherence-based reasoning is a domain-general mechanism that applies to moral reasoning as a special case. Its operation has been observed in a variety of complex decisions in which moral issues arise, such as legal cases (Holyoak & Simon 1999; Simon 2012), attitudes to war (Spellman et al. 1993), and attributions of blame and responsibility (Clark et al. 2015). A key property of coherence-based reasoning is that values, beliefs, and emotions may change to increase their coherence with the emerging decision (contrary to the usual assumption that these core elements are typically fixed over the course of a reasoning episode). The outcome of decision making is not simply the choice of an option, but rather a restructuring of the entire package of values, attitudes, beliefs, and emotions that relate to the selected option (for reviews see Simon & Holyoak 2002; Simon et al. 2015).

Considerable evidence indicates that moral judgments are often based on a process of constraint satisfaction that is directed at achieving local (and perhaps transient) coherence (Holyoak & Powell 2016). Coherence-based reasoning could be applied to adjudicate among competing moral principles. To take two examples from those suggested by May, a person might value the Consequentialist Principle, “All else being equal, an action is morally worse if it leads to more harm than other available alternatives” (p. 57); but also the Principle of Agential Involvement, “All else being equal, it is morally worse for an agent to be more involved in bringing about a harmful outcome” (p. 69). The latter has a decidedly deontological flavor, and both will typically be based in part on a causal analysis of the situation (Lagnado & Gerstenberg 2017; Waldmann & Dieterich 2007). The “all else being equal” clause implies that neither principle is absolute (and indeed, they are simply two among many). Depending on the relative strengths of these and potentially many other competing principles, coherence-based reasoning may lead to different judgments about what is the right course of action (see also Zamir & Medina 2010).
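As a rough sketch of how constraint satisfaction might adjudicate between such principles, consider the following toy network, written in the spirit of connectionist coherence models. The node names, connection weights, decay parameter, and settling rule are our illustrative assumptions, not an implementation taken from any of the works cited.

```python
# Toy coherence network for a case where the Principle of Agential
# Involvement and an emotional reaction support judging an action wrong,
# while the Consequentialist Principle opposes that judgment (say, because
# the action minimizes total harm). All weights are illustrative.

nodes = ["action_is_wrong", "consequentialist_principle",
         "agential_involvement_principle", "emotional_response"]

# Symmetric constraints: positive weights mark coherence, negative conflict.
weights = {
    ("action_is_wrong", "consequentialist_principle"): -0.4,
    ("action_is_wrong", "agential_involvement_principle"): 0.6,
    ("action_is_wrong", "emotional_response"): 0.5,
}

activation = {n: 0.01 for n in nodes}  # weak initial commitment to everything
DECAY = 0.05

for _ in range(200):  # settle iteratively toward a mutually coherent state
    new = {}
    for j in nodes:
        # Net input to j: weighted sum of activations over all its neighbors.
        net = sum(w * activation[other]
                  for (n1, n2), w in weights.items()
                  for other, me in ((n1, n2), (n2, n1)) if me == j)
        a = activation[j] * (1 - DECAY)
        # Push toward +1 on positive net input, toward -1 on negative.
        a += net * (1 - a) if net > 0 else net * (a + 1)
        new[j] = max(-1.0, min(1.0, a))
    activation = new

print(activation)
# The judgment and its supporters settle high, while activation of the
# conflicting Consequentialist Principle is driven down.
```

Note that influence in such a network is bidirectional: as the judgment node gains activation, the conflicting principle is actively suppressed – a within-episode analogue of the weakening of principles discussed next.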

At the same time, whenever a set of factors leads someone to render a judgment in conflict with a given principle, coherence-based reasoning implies that the strength of that principle may be reduced (Horne et al. 2015). Fluidly shifting one's moral principles in this way might seem decidedly unprincipled, but the drive to achieve coherence in one's moral beliefs is of a piece with Rawls’ (1971) notion of “reflective equilibrium.” Through coherence-based reasoning, such equilibria are sought dynamically and potentially unconsciously during the course of moral decision making.
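A crude way to picture such weakening across judgment episodes, offered purely as an illustration and not as the model tested by Horne et al. (2015), is to shrink a principle's strength in proportion to how strongly each rendered judgment conflicted with it:

```python
# Illustrative only: decay a principle's strength after each judgment
# episode in proportion to the conflict between judgment and principle.
# The function, rate, and numbers are hypothetical.

def revise_strength(strength, conflict, rate=0.2):
    """Return the principle's new strength after one judgment episode.

    conflict ranges from 0.0 (judgment consistent with the principle)
    to 1.0 (judgment flatly contradicts it).
    """
    return strength * (1.0 - rate * conflict)

strength = 0.8
for episode_conflict in (1.0, 1.0, 0.0):
    strength = revise_strength(strength, episode_conflict)
print(round(strength, 3))  # 0.512: two conflicting episodes weakened it
```

On this picture, reflective equilibrium is not a one-shot computation but the long-run product of many such local adjustments.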

Coherence-based reasoning is consistent with the thrust of May's empirical debunking of the various lines of argument that were themselves intended to debunk the role of reason in moral judgment. Many philosophers have sought to debunk commonsense moral beliefs (i.e., to argue that those beliefs are unjustified) by arguing that the grounds or processes by which those beliefs are formed are unsound. May argues that those seeking to undermine the reliability of human moral judgments on psychological grounds invariably find themselves on one of the horns of the “debunker's dilemma”: either the purportedly corruptive process backing those moral beliefs actually proves reliably informative in some circumstances, or the impact of the corruptive process on moral judgments and beliefs turns out to be too weak to be consequential. For example, May argues that far from leading us astray, emotions are often highly informative in moral situations. On the other hand, where emotions are incidental, their influence is generally exceedingly small (also see Horne & Powell 2016). Coherence-based reasoning may explain why would-be debunkers are left facing May's dilemma. Generating judgments by constraint satisfaction allows reasoners to incorporate a diverse set of factors into their decision-making process while constraining the influence of any one of those factors. For instance, coherence-based reasoning can enable emotions to influence moral judgments without entirely overriding other relevant factors; this reasoning mechanism can also alter emotional responses to a situation based on the emerging judgment (Simon et al. 2015).

The picture of moral reasoning that emerges from May's arguments, and in particular from his critique of sentimentalism, is quite the opposite of the kind of encapsulated, special-purpose “module” some evolutionary psychologists have envisioned (for a discussion, see Bolhuis et al. 2011). Rather, all the mechanisms that impact judgment and decision making in non-moral domains – including those characterized as heuristics and biases – guide moral judgments as well (e.g., Rai & Holyoak 2010). Human reason, with or without inputs from emotion, is certainly fallible. But May aptly quotes Kahneman (2011, p. 4), who observed that “the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health.”

May's renewed focus on the centrality of reason in moral judgment suggests that morality should be included, along with language and high-level thinking, on a short list of domains that lie at the core of what it means to be human (see Penn et al. 2008, for a discussion of thinking, and Wynne & Bolhuis 2008, for a discussion of morality). The claim that a sense of morality is distinctively human is of course controversial. Contemporary comparative psychologists (often relying on anthropomorphism) routinely report finding evidence of moral motives in non-human animals, such as chimpanzees’ apparent concern for the equitable distribution of rewards (e.g., Brosnan et al. 2005). But when put to critical tests, many of these behaviors have been found to admit simpler explanations (e.g., Engelmann et al. 2017).

If morality is indeed a type of specifically human cognition, aligned with language and abstract thought, the common thread linking them may well be the requirement to explicitly represent and think about higher-order relations. May describes empathy as involving a kind of “relational desire” – for example, the wish to ease a pain one feels by easing that of another. More generally, morality begins when one understands the values of others to whom one is related in some specific way – as relatives, fellow citizens, humans, or perhaps sentient beings – and makes concern for the values of these others a part of one's own values. This is the crucial step that renders the life and well-being of another one's own concern. As Aristotle observed in Nicomachean Ethics, it is also a crucial step toward friendship: “The best friend is the man who in wishing me well wishes it for my sake.” Perhaps reason is the bedrock of the most distinctively human emotions.

References

Battaglia, P. W., Hamrick, J. B. & Tenenbaum, J. B. (2013) Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, USA 110(45):18327–32. Available at: https://doi.org/10.1073/pnas.1306572110.
Bolhuis, J. J., Brown, G. R., Richardson, R. C. & Laland, K. N. (2011) Darwin in mind: New opportunities for evolutionary psychology. PLoS Biology 9(7):e1001109. Available at: http://doi.org/10.1371/journal.pbio.1001109.
Brosnan, S. F., Schiff, H. C. & de Waal, F. B. M. (2005) Tolerance for inequity may increase with social closeness in chimpanzees. Proceedings of the Royal Society B: Biological Sciences 272:253–58. Available at: http://doi.org/10.1098/rspb.2004.2947.
Cheng, P. W. (1997) From covariation to causation: A causal power theory. Psychological Review 104:367–405.
Clark, C. J., Chen, E. & Ditto, P. H. (2015) Moral coherence processes: Constructing culpability and consequences. Current Opinion in Psychology 6:123–28.
Engelmann, J. M., Clift, J. B., Herrmann, E. & Tomasello, M. (2017) Social disappointment explains chimpanzees' behaviour in the inequity aversion task. Proceedings of the Royal Society B: Biological Sciences 284(1861):20171502. Available at: http://doi.org/10.1098/rspb.2017.1502.
Evans, J. St. B. T. (2009) How many dual-process theories do we need? One, two, or many? In: In two minds: Dual processes and beyond, ed. Evans, J. St. B. T. & Frankish, K., pp. 31–54. Oxford University Press.
Griffiths, T. L., Kemp, C. & Tenenbaum, J. B. (2008) Bayesian models of cognition. In: Cambridge handbook of computational cognitive modeling, ed. Sun, R., pp. 59–100. Cambridge University Press.
Holyoak, K. J. (2019) The spider's thread: Metaphor in mind, brain, and poetry. MIT Press.
Holyoak, K. J. & Powell, D. (2016) Deontological coherence: A framework for commonsense moral reasoning. Psychological Bulletin 142(11):1179–1203.
Holyoak, K. J. & Simon, D. (1999) Bidirectional reasoning in decision making by constraint satisfaction. Journal of Experimental Psychology: General 128:3–31.
Horne, Z. & Powell, D. (2016) How large is the role of emotion in judgments of moral dilemmas? PLoS ONE 11(7):e0154780. Available at: http://doi.org/10.1371/journal.pone.0154780.
Horne, Z., Powell, D. & Hummel, J. (2015) A single counterexample leads to moral belief revision. Cognitive Science 39:1950–64.
Kahneman, D. (2011) Thinking, fast and slow. Farrar, Straus and Giroux.
Kant, I. (1785/2002) Groundwork for the metaphysics of morals. Yale University Press. (Original work published in 1785.)
Knowlton, B. J., Morrison, R. G., Hummel, J. E. & Holyoak, K. J. (2012) A neurocomputational system for relational reasoning. Trends in Cognitive Sciences 16:373–81.
Kounios, J. & Beeman, M. (2015) The eureka factor: Aha moments, creative insight, and the brain. Random House.
Kubricht, J. R., Lu, H. & Holyoak, K. J. (2017) Intuitive physics: Current research and controversies. Trends in Cognitive Sciences 21:749–59.
Lagnado, D. A. & Gerstenberg, T. (2017) Causation in legal and moral reasoning. In: Oxford handbook of causal reasoning, ed. Waldmann, M. R., pp. 562–602. Oxford University Press.
May, J. (2018) Regard for reason in the moral mind. Oxford University Press.
Moors, A., Ellsworth, P. C., Scherer, K. R. & Frijda, N. H. (2013) Appraisal theories of emotion: State of the art and future development. Emotion Review 5:119–24.
Oaksford, M. & Chater, N. (2013) Dynamic inference and everyday conditional reasoning in the new paradigm. Thinking and Reasoning 19:346–79.
Penn, D. C., Holyoak, K. J. & Povinelli, D. J. (2008) Darwin's mistake: Explaining the discontinuity between human and nonhuman minds. Behavioral and Brain Sciences 31:109–30.
Pessoa, L. & Pereira, M. G. (2013) Cognition–emotion interactions: A review of the functional magnetic resonance imaging literature. In: Handbook of cognition and emotion, ed. Robinson, M. D., Watkins, E. & Harmon-Jones, E., pp. 55–68. Guilford Press.
Rai, T. S. & Holyoak, K. J. (2010) Moral principles or consumer preferences? Alternative framings of the trolley problem. Cognitive Science 34:311–21.
Rawls, J. (1971) A theory of justice. Harvard University Press.
Simon, D. (2012) In doubt: The psychology of the criminal justice process. Harvard University Press.
Simon, D. & Holyoak, K. J. (2002) Structural dynamics of cognition: From consistency theories to constraint satisfaction. Personality and Social Psychology Review 6:283–94.
Simon, D., Stenstrom, D. M. & Read, S. J. (2015) The coherence effect: Blending hot and cold cognitions. Journal of Personality and Social Psychology 109:369–94.
Spellman, B. A., Ullman, J. B. & Holyoak, K. J. (1993) A coherence model of cognitive consistency. Journal of Social Issues 49(4):147–65.
Waldmann, M. R. & Dieterich, J. H. (2007) Throwing a bomb on a person versus throwing a person on a bomb: Intervention myopia in moral intuitions. Psychological Science 18:247–53.
Wynne, C. D. I. & Bolhuis, J. J. (2008) Minding the gap: Why there is still no theory in comparative psychology. Behavioral and Brain Sciences 31:152–53.
Zamir, E. & Medina, B. (2010) Law, economics, and morality. Oxford University Press.