Regard for Reason in the Moral Mind (May 2018) is an impressive work. Drawing on the latest psychological research, May pushes back against prominent sentimentalist theories in normative ethics and moral psychology that view moral judgments as the products of unreasoned, emotional processes. Ultimately, he defends a “cautiously optimistic” form of rationalism (p. 227): Our moral judgments are the product of reasoning, so “virtue is within reach” (p. xi), because we are capable of acquiring moral knowledge and knowing right from wrong (p. 4).
May is therefore optimistic about the source of our moral judgments (i.e., they are the products of reasoning, which is capable of tracking moral facts), which leads to optimism about the content of our moral judgments (i.e., we can know right from wrong), which leads to optimism about the consequences of our moral judgments (i.e., we can act in accordance with them). I agree with many of the positions that May argues for: In my view, moral judgments are products of reasoning (Royzman et al. 2015b), moral cognition does not fundamentally differ from other kinds of cognition (Landy & Bartels 2018), and emotions are consequences, not causes, of moral judgments (Landy & Goodwin 2015; Royzman et al. 2014a).
However, I think that cautiously optimistic rationalism may not be cautious enough. Even if we accept May's (2018) optimism about the source of our moral judgments, we ultimately care about this because it speaks to our ability to actually attain moral knowledge and act accordingly – that is, because it speaks to whether we should be optimistic about the content and consequences of our moral judgments. In other words, although virtue may be “within reach,” what matters is whether we can reasonably be expected to reach out and actually take hold of it. Two observations lead me to conclude that this may not happen as often as we would hope, and that a tempered pessimism about the content (and, therefore, the consequences) of our moral judgments is warranted.
First, it seems plausible that those of us who are better at reasoning are more likely to successfully reach out and grasp virtue, and, conversely, that those of us who do not reason well are less likely to do so. Indeed, May (2018) seems to accept at least a weak form of this position: “sophisticated” reasoners are “likely to have more well founded moral beliefs than those ignorant of the key details or more prone to cognitive errors” (p. 236). Many sentimentalists will dispute the claim that reasoning has any relationship at all with the content of our moral judgments (see, e.g., Haidt 2001; Schnall et al. 2008). Nonetheless, research has shown that there is substantial variation in people's domain-general ability and propensity to reason thoroughly, and that variation in this kind of domain-general reasoning performance does predict the content of people's moral judgments (see, e.g., Landy 2016; Royzman et al. 2014b; Royzman et al. 2015b; for a recent review and synthesis, see Landy & Royzman 2018). So, I will accept the premise that better reasoners are more likely to arrive at well-founded moral beliefs than are worse reasoners.
The problem for cautiously optimistic rationalism is that most people seem to be unable or unwilling to think through reasoning problems when they are faced with them. For example, the modal number of correct answers on the much-studied Cognitive Reflection Test (CRT; Frederick 2005) – a three-item performance measure of reasoning – is usually found to be zero (e.g., Campitelli & Gerrans 2014; Frederick 2005; Pennycook et al. 2016; Royzman et al. 2014b), even though the three problems require only rudimentary cognitive work to solve correctly. Performance on the CRT is thought to depend on both reasoning ability (similar to an IQ test) and the propensity or motivation to reason through problems (see Pennycook & Ross 2016). So, most people seem not to be very good reasoners, because they lack the necessary ability, motivation, or both.
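To illustrate just how rudimentary that work is, consider the first CRT item, Frederick's (2005) well-known bat-and-ball problem (the worked solution below is my illustration, not part of May's argument): a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball. The intuitive answer is that the ball costs 10 cents, but letting x be the price of the ball in dollars, one line of algebra yields the correct answer of 5 cents:

$$x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05.$$

Many respondents nonetheless give the intuitive answer, which is the sense in which they fail to think the problem through.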
Rationalism implies that “if all goes well, you form the correct [moral] judgment, it's warranted or justified, and you thus know what to do” (May 2018, p. 19), but the problem is that we have little reason to assume that “all goes well” most of the time. The research suggests, instead, that things often go rather poorly. The premise that bad reasoners are unlikely to form well-founded moral beliefs, combined with the empirical evidence that most people lack either the ability or the motivation to engage in good reasoning, leads to the conclusion that, for most people, much of the time, either virtue is “out of reach,” or they lack sufficient motivation to extend their arms and grab it. Either way, people may not find themselves with virtue in hand very often.
May (2018) engages with a version of this problem (Ch. 5, sect. 3.3), where he discusses widespread cognitive biases that interfere with domain-general reasoning. He argues that these biases are not especially troubling, though, because “they don't afflict moral judgment in particular but reasoning generally” (p. 125). May and I seem to agree that moral judgments are products of the same kind of domain-general reasoning mechanism that produces other kinds of judgments, given his argument that “moral judgment is just like other forms of cognition except that it involves moral matters” (p. 228). If this is the case, then widespread defects or biases in reasoning represent a potentially serious threat to the attainment of well-founded moral knowledge in most cases. Most people, as he notes, have “little claim to being a moral guru” (p. 126), but he does not acknowledge this as a serious problem for his optimism regarding the content of our moral judgments. Here, his comparison of moral reasoning with mathematical reasoning strikes me as apt. When it comes to both math and morals, what we presumably care about is arriving at the right answer via the right kind of process. Although the “basic capacity is not fundamentally flawed” (p. 129), anyone who has taught a statistics class can attest that many people never successfully reach out and grasp mathematical competence, and those who do often do so only with considerable effort. Rather than “sweeping pessimism about only the moral domain” (p. 230, emphasis added), a more tempered pessimism seems warranted about our reasoning in general. Of course, this entails pessimism about both our mathematical cognition and, more germane to the present discussion, our moral cognition. Even if we are optimistic that moral judgments result from the kinds of domain-general reasoning processes that also drive mathematical cognition, if those processes frequently go awry, we have little reason for optimism about the content of the moral judgments they produce.
May also notes that it is beyond the scope of his book to address “deep skepticism about the reliability of our general cognitive, learning, and reasoning capacities” (p. 106). Fair enough. I offer this commentary in the spirit of advancing the discussion beyond the considerable ground that the book already covers. Importantly, though, I am not arguing that “all cognition, moral and non-moral, is bunk” (p. 230). My point is that the empirical literature suggests that good reasoning is not impossible, but that it is relatively rare. This is a separate “empirical threat to the acquisition or maintenance of well-founded moral beliefs” (p. 20) from the two addressed in chapter 5, and it is one that May's cautiously optimistic rationalism does not currently speak to. My argument for a moderate degree of pessimism therefore cannot be dismissed as mere radical skepticism.
A defender of cautiously optimistic rationalism might reply that part of May's argument is that inferential, cognitive processes do not need to be conscious and explicit to qualify as reasoning (see, e.g., pp. 8–9, 54–55). They might then argue that the CRT and similar psychological instruments primarily tap conscious, “System 2 reasoning,” so it is possible that most people are rather good at more intuitive “System 1 reasoning,” and therefore that we do reach out and successfully grasp virtue reasonably often. The first premise in this argument can be contested – reasoning is often associated with “System 2,” but not “System 1,” processes (e.g., Kahneman 2011; Kokis et al. 2002) – though I do personally find May's argument that at least some instances of effortless, automatic cognition can qualify as a kind of “reasoning” to be compelling (see also Landy & Royzman 2018, fn. 2).
However, whether or not we accept this first premise, the second premise, and therefore the conclusion, are problematic. The CRT is usually thought to measure success at overriding a response that is prepotent, intuitive, and incorrect (Frederick 2005; though see Pennycook et al. 2016). That is, even if we accept the first premise in this reply, low scores on the CRT can be thought of as reflecting failures of “System 1 reasoning” to produce the correct response, as well as failures of “System 2 reasoning” to recognize this error and override it. This assertion is bolstered by the fact that CRT performance is negatively correlated with susceptibility to intuitive heuristics and biases (Toplak et al. 2011). This is not definitive evidence that people are, by and large, bad intuitive reasoners, but it does at least undermine the argument that we can safely assume that “System 1 reasoning” is generally reliable, and therefore that we should be optimistic about our chances of successfully taking hold of virtue.
Of course, even to be able to say whether we have virtue in hand on any given occasion requires an independent normative criterion specifying which moral judgments are right, and which actions are virtuous and which are wrong. Some defensible metaethical theories posit that no such criterion can reasonably be said to exist (e.g., moral error theory; see Mackie 1977), but even if one believes that moral claims are truth-apt and that moral properties are mind-independent, it remains the case that no theory of normative ethics has attained consensus after some 2,500 years of work in this area. How, then, are we to know when we have reached out and taken hold of virtue, and when we have not? We do not yet have a noncontroversial answer to this question.
In sum, I agree with May that moral knowledge is “possible” (p. 5), but I doubt that it is all that probable, in most cases. Given what we know about the prevalence – or rather, the lack thereof – of good reasoning, a moderately pessimistic form of rationalism seems more appropriate than a cautiously optimistic one. Yes, our moral judgments are largely products of reasoning, but reasoning is not something that most of us are especially good at.