
Optimism in unconscious, intuitive morality

Published online by Cambridge University Press: 11 September 2019

Cory J. Clark
Affiliation:
Department of Psychology, Durham University, Durham DH1 3LE, United Kingdom. cory.j.clark@durham.ac.uk; https://www.dur.ac.uk/psychology/staff/?id=17418
Bo M. Winegard
Affiliation:
Department of Psychology, Marietta College, Marietta, OH 45750. bmw002@marietta.edu; https://www.marietta.edu/person/bo-winegard

Abstract

Moral cognition, by its very nature, stems from intuitions about what is good and bad, and these intuitions influence moral assessments outside of conscious awareness. However, because humans evolved a shared set of moral intuitions, and are compelled to justify their moral assessments as good and rational (even erroneously) to others, moral virtue and moral progress are still possible.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2019 

We agree with May (2018) that many recent criticisms of moral cognition have been hyperbolic. Moral progress is an indisputable fact of human history (Pinker 2011b). It is therefore likely that moral reason, coupled with changing norms, institutions, and technologies, can change opinions and guide humans to new (and often better) moral assessments (or at least better for their social and historical context). Ultimately, however, moral reasoning stems from moral intuitions, which, like axioms in geometry, one must simply accept as givens. These unreasoned intuitions evolved in the same way that desires to apprehend beauty, eat delicious food, and appear rationally consistent did.

The sources and nature of these intuitions might be knowable if we scrutinize them enough, but we likely can only know them the way we might know the structure of atoms or the substance of distant stars: by observing and carefully analyzing them. Furthermore, they often motivate our moral behavior and assessments in ways that remain inscrutable to most of us. When we judge that, say, stealing a marble rye from an elderly woman is wrong, we do not know why we have the intuition that it is wrong; we only know that we do have the intuition.

In the following, we will argue that the critical distinction in moral cognition is not between reason and emotion (two concepts that are difficult to define), but between conscious and unconscious processing. We believe that much of moral cognition is impelled by unconscious processes and that even conscious processes flow from moral intuitions whose causes remain obscure to introspection. However, because humans evolved desires for good moral reputations, for cognitive consistency, and to justify opinions and behaviors to other humans, moral virtue and moral progress are still possible.

May contends that reasoning can be unconscious. This raises a difficult definitional problem, which we will not try to resolve here. Still, however one defines “reason,” there are crucial differences between unconscious and conscious cognitive processes. Chief among them, for present purposes, is that we are often ignorant of the causes and contents of unconscious reasoning. For example, if one is strolling down a street and suddenly has the thought “breeding dogs is bad,” one would be unaware of the causes of this cognition. It might be that one had carefully considered the consequences of dog breeding sometime before and that the fruits of such considerations finally burst forth as one was walking. But it may also be that one recently saw an American Society for the Prevention of Cruelty to Animals (ASPCA) commercial and was feeling particularly emotional about the number of homeless animals, or that one was looking for an excuse to judge a snooty neighbor who had just spent $5,000 on an exotic dog breed. The unconscious reasoner has no way of identifying what compelled the sudden conscious conclusion.

Furthermore, because humans are designed and motivated (often unconsciously) to persuade other people, they are often biased about the purported causes of their own cognitions. Persuading others often requires appealing to universal principles, so humans likely believe that their judgments are caused by such principles more often than they actually are (e.g., I dislike him because he is a jerk, not because the person I have a crush on likes him better than he or she likes me). Copious data suggest that humans are indeed often ignorant of the causes of their attitudes and behaviors (Nisbett & Wilson 1977) and are easily misled about the motives underlying their moral inferences (Haidt 2001). This suggests that scientists should be suspicious of the manifestations of unconscious reasoning. Because many of the actual causes of moral inferences are not accessible to consciousness, attempts to explain the reasons for moral judgments are often mere speculation. For example, we might believe that killing babies is wrong but have no access whatsoever to why we hold this moral judgment. Yet humans are loath to admit that they lack introspective access to the causes of their judgments, so they often confidently assert something (e.g., “because it causes pain and suffering”) that cannot be the actual reason for the judgment (e.g., most humans would still consider the killing highly immoral even if it could be made painless).

Despite this general lack of awareness, humans are compelled to explain and justify their judgments and behaviors to others (Mercier & Sperber 2011) so as to maintain their reputations as good and reasonable actors. People care deeply about preserving their moral reputations (Vonasch et al. 2017), and so they will be compelled to produce explanations for their behaviors and assessments that serve this goal. And though the ultimate goal is to convince others that one is morally virtuous, one is more persuasive if one also personally believes one's own moral stories, and so self-deception would be useful. Moreover, people wish to be and to appear cognitively consistent (Festinger 1957) and want their moral judgments to appear rational and justifiable (Clark et al. 2017). These desires compel individuals to alter supposedly objective features of moral cases (such as how much control a moral actor had or whether a particular action caused harm) in order to appear morally coherent (Clark et al. 2015; Schein & Gray 2018). Thus, the reasons people produce for their moral actions and assessments will be designed to signal virtue and justifiability rather than to describe the true underlying cognitive processes.

This inability to access one's reasons for moral judgments, coupled with the passions that moral assessments provoke, does challenge moral rationality, and it makes moral discourse, debate, and reasoning supercharged and often full of deception. In many cases, moral conversation is a façade that disguises underlying processes of which the interlocutors are utterly unaware. This might invite pessimism and cynicism about moral discourse. Even the explanations people offer for their moral judgments that do appear reasonable and rational are often post hoc justifications. However, because there are pressures to justify one's judgments and behaviors both morally and rationally, these judgments and behaviors often will be constrained by what humans can explain as moral and rational. For example, one might refrain from attacking a romantic rival because such a behavior would be difficult to justify, morally or rationally. That is, one might have a strong desire to denigrate or fight a rival, but then think, “Could I justify this to another person?” If not, one might not follow through on the desire. And so even if social norms about what is moral and rational are not the ultimate (or even proximate) causes of moral judgments and behaviors, these norms will influence and constrain them.

Before we elaborate further on why we should not throw the moral baby out with the reason bathwater, we would like to clarify why intuition (or passion) is an inseparable part of moral judgment, as Hume contended (and many others have misunderstood). It is not that emotions should drive our moral assessments (this is not the meaning of Hume's famous “reason is, and ought only to be, the slave of the passions”), but rather that it is not possible to reason one's way to a moral conclusion without an intuition about valence (e.g., “pain is bad,” “it is good to maximize goodness for sentient creatures”). Moral judgments, by their very nature, must be grounded in intuitions about what is good and what is bad. If a cognitively sophisticated sadomasochistic robot shared the seemingly universal human intuition that it is generally good to maximize goodness for sentient creatures but also had an intuition that pain is good, this morally good and rational robot might then conclude that it ought to cause as much pain as possible. Without such pre-rational preferences, humans (and robots) would not have moral judgments because they would not have preferences at all.

Fortunately, humans evolved a shared set of moral intuitions from which moral reasoning can build. Humans generally agree that pain and suffering are negative experiences and that we should minimize negative experiences for ourselves and those we care about, and so we can make a variety of claims about what types of behaviors (those that cause pain and suffering in others) are morally wrong. This may seem a minor point, but it addresses a pervasive mistake in moral psychology. Yes, transient emotions or passions often influence moral judgments, but these are only the proximate causes of certain moral judgments and behaviors. All moral judgments are based on unreasoned intuitions about what is good and bad, in the same way that all aesthetic judgments are ultimately based on unreasoned intuitions about what is beautiful and ugly. Just as we evolved to find bodies of water, clear skin, and bright red strawberries appealing, we evolved to find generosity, honesty, and selflessness appealing.

Though May would presumably disagree with this characterization of moral judgment, an implicit understanding of this reality permeates his writing. For example, he points to motivations such as avoiding punishment, feeling better about ourselves, and being more likable to others as wrong reasons for moral behavior. But how do we know these are “wrong” reasons? May argues that “something seems morally lacking in such actions.” We agree that something seems lacking. But we did not reason our way to the conclusion that these are the “wrong reasons” for moral behavior. Rather, we share an intuition that desiring to benefit the self is not a morally good reason (and we suspect many or most humans share this intuition). One could argue, however, that because promoting one's moral reputation both benefits the self and motivates behaviors that help others, self-interested moral motivations might often be virtuous.

None of this means that unreasoned intuitions are fully formed moral judgments or that we never use reason to make moral assessments. It merely means that we must build our moral judgments and arguments from the raw materials of our moral intuitions.

May argues that humans would not attempt to rationalize or justify their moral judgments and behaviors if they did not have regard for reason. Though we have explained that these rationalizations and justifications are often deceptive post hoc explanations, we agree that people care about appearing reasonable. And though May might find something morally lacking in this type of “reasoning,” we remain optimistic about moral progress. Just as humans evolved desires to appear morally good and shared intuitions that harming others is bad, they evolved desires to appear reasonable and shared intuitions about what is reasonable. These shared motivations and intuitions have shaped and will continue to shape moral judgment and behavior, compelling them to be more consistent with rational and universal rules and less nakedly selfish and parochial. And the fact that these virtues evolved for purposes other than enlightened prosociality does not mean we cannot admire them, just as we admire ambition or beauty. Indeed, we should admire them, because such admiration incentivizes virtuous behavior.

This means that moral discourse and argumentation will have significant, predictable effects on human behavior. If we want people to stop eating factory-farmed meat, we should appeal to their desire to seem reasonable. Point out that they would not torture a chicken to save a dollar on their next order of wings, yet factory farms do just that: they create torturous living conditions to provide lower prices to consumers. Although the effects of such arguments might be small at first, as more people come to agree with them, the social pressure makes it harder for people to justify actions that are incongruous with explicit moral pronouncements. One of the better angels of our nature, it turns out, is persnickety people who demand that we explain our actions.

References

Clark, C. J., Baumeister, R. F. & Ditto, P. H. (2017) Making punishment palatable: Belief in free will alleviates punitive distress. Consciousness and Cognition 51:193–211.
Clark, C. J., Chen, E. & Ditto, P. H. (2015) Moral coherence processes: Constructing culpability and consequences. Current Opinion in Psychology 6:123–28.
Festinger, L. (1957) A theory of cognitive dissonance. Stanford University Press.
Haidt, J. (2001) The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108:814–34.
May, J. (2018) Regard for reason in the moral mind. Oxford University Press.
Mercier, H. & Sperber, D. (2011) Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34:57–74.
Nisbett, R. E. & Wilson, T. D. (1977) Telling more than we can know: Verbal reports on mental processes. Psychological Review 84(3):231–59.
Pinker, S. (2011b) The better angels of our nature: Why violence has declined. Viking.
Schein, C. & Gray, K. (2018) The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review 22:32–70.
Vonasch, A. J., Reynolds, T., Winegard, B. M. & Baumeister, R. F. (2017) Death before dishonor: Incurring costs to protect moral reputation. Social Psychological and Personality Science 9(5):604–13. Available at: https://doi.org/10.1177/1948550617720271.