Part of Joshua May's (2018) project in Regard for Reason in the Moral Mind is to address the threat to genuine moral knowledge raised by peer disagreement about morality (pp. 116–28). Moral knowledge skeptics argue that widespread disagreement among epistemic peers about moral claims, including foundational ones, gives us reason to suspect that we typically lack moral knowledge (p. 108). May responds that few foundational moral disputes are among epistemic peers (pp. 123–28) and, more importantly, that the widespread moral disagreement skeptics envision has not been backed up by empirical data (pp. 116–23). In fact, several empirically informed projects, including the moral foundations literature, suggest that there is a lot more agreement about foundational moral claims than one might think (pp. 120–23).
I do not have much disagreement with the way May approaches this argument, or even with where he ends up. I think he is right that the kinds of moral disagreements we see between peers are not sufficient to warrant the widespread, general moral skepticism proposed by the skeptics, but that there is enough disagreement that we should adopt a limited moral skepticism (p. 130). And I agree with May that there is reason for optimism: the empirical threats have not uncovered widespread, fundamental flaws in ordinary moral deliberation and judgment that do not also apply to cognition more generally. That said, I want to press two issues. First, moral knowledge skeptics claim that “there is a lot of peer disagreement about foundational moral claims” (p. 117), and whether this premise is consistent with the available evidence depends upon what is meant by a moral “claim” and what exactly a moral “foundation” is. I doubt that these are the same thing. May treats them as if they are, but we should be more reluctant to do so. Second, I agree with May that what is warranted is limited skepticism. May takes his limited skepticism to recommend optimism about moral knowledge in general, but I think we should be more cautious. If a general, albeit limited, skepticism is justified by the available data, then we know significantly less about morality than we thought we did.
May addresses the question of whether there actually is disagreement among epistemic peers about foundational moral propositions by looking in a very reasonable place: at the differences between conservatives and liberals, and at what, at first glance, appears to be the relevant empirical data, the moral foundations literature in psychology. He takes that literature to demonstrate that within a society there is little disagreement about fundamental moral propositions between epistemic peers, because all five of the moral foundations (care, fairness, loyalty, authority, and sanctity) appear to be used by both conservatives and liberals. But the moral foundations literature could be interpreted as supporting the claim that there is more fundamental disagreement here than May suggests. Jonathan Haidt, for example, claims that the way these foundations are weighted matters. Although May notes this, the different weightings make it reasonable to arrive at a different conclusion than May does.
It is true that Haidt takes pains to demonstrate that liberals do use all five moral foundations, differing from conservatives merely in targeting different issues (Haidt 2012, p. 179). This could support May's interpretation. But Haidt also argues that even though liberals and conservatives both seem to use all five foundations, liberals tend to consciously disown the sanctity, authority, and loyalty foundations (Haidt 2012, pp. 186–87). Liberals acknowledge only care and fairness as legitimate foundations for morality. They may use loyalty, authority, and sanctity when forming actual moral beliefs, but they do not consciously acknowledge this and, according to Haidt, they typically disown using these foundations (Haidt 2012, p. 179; Haidt 2016, p. 208). By contrast, conservatives are comfortable explicitly acknowledging that they use all five foundations.
So, here is the worry. Perhaps conservatives and liberals do have a deeper disagreement about foundations than May's reading suggests. Maybe the liberal tends to see three of these foundations as more akin to cognitive biases than as legitimate foundations for morality, whereas the conservative accepts all five as legitimate. Assuming some liberals and conservatives are epistemic peers, we should then worry that there is fundamental disagreement within a society, between epistemic peers, about moral foundations. Some people think that only care and fairness are legitimate foundations for morality, while others think sanctity, loyalty, and authority are equally compelling moral foundations.
Moving on, May is trying to assess whether or not there is widespread disagreement about foundational moral judgments (i.e., beliefs or propositions) within North America, particularly between liberals and conservatives. But the moral foundations work he focuses on seems to be addressing different kinds of questions. Specifically, the moral foundations research seems mostly focused on explaining how we process information and how we arrive at judgments when it comes to morality. These are questions about moral cognition, not the frequency of various foundational moral beliefs within a population. I am not sure we can infer shared foundational moral propositions from shared moral cognition.
Jonathan Haidt argued that most moral judgments are not the result of careful, rational reflection. On his social intuitionist model, our moral deliberation starts with intuitions (Haidt 2012, p. 5). We have an automatic, intuitive response to an event or scenario; this leads us to judge that the event is right or wrong, good or bad; and then we provide a post hoc justification for the judgment we have (Haidt 2012, pp. 55–60). These intuitive responses, not reason, are the foundation for morality, according to Haidt (pp. 103–108).
Haidt goes on to propose that these intuitions reflect innate and universal moral foundations (p. 130), which he takes to be specialized cognitive modules that evolved to address humanity's shared adaptive challenges: the need to care for children led to a care module, the need to form partnerships with non-kin led to a fairness module, the need to avoid disease produced a sanctity module, and so on (p. 146).
The important thing to note is that moral foundations are not moral propositions, at least not for Haidt. To claim that we use care as a moral foundation is not to claim that most people agree that we should care for our young or for each other. It is simply to say that we are built such that we do care for our young, and that what enables us to do so is that we have certain kinds of emotive responses to the suffering of others, particularly those we feel close to. This need not imply a commitment to any specific statements, positions, or views about morality. It is not that many people value caring or place a premium on loyalty or sanctity. It is rather that we have evolved specific mental modules that are implicated when we form moral judgments and respond to the world.
Suppose this is an accurate model of what's going on in moral deliberation. This does not address the issue of whether or not people typically share the same foundational beliefs about morality. What it tells us is something about moral cognition, about how our brains work when we consider a moral issue. What we would need to be able to demonstrate as a result of the moral foundations project is that there is something about the way this processing occurs that leads us to be optimistic that, in general, there is significant agreement about foundational moral propositions.
Although this is interesting research, it does not seem to be the right kind of data to address whether or not there are commonly shared foundational moral propositions among epistemic peers. What we have is an account of moral cognition, but what we need to know to address the second premise of the skeptical argument is whether or not there is common agreement about foundational moral propositions among epistemic peers.
What we are looking for, or should be looking for, I would think, is whether there is sufficiently widespread agreement about foundational metaethical and normative ethical principles to allow for agreed-upon foundational propositions among epistemic peers. The moral foundations literature may give us reason to be optimistic that there could be, since it suggests that similar mental processes are implicated in moral judgment. But it does not demonstrate agreement about foundational moral propositions. We might be able to use this literature in the way that May proposes, but we would first need an argument demonstrating that this shared brain machinery implies that most of us share similar foundational moral propositions.
What we really need is more clarifying work on what a moral foundation is, how moral foundations operate, and how much convergence there is in the general foundational moral judgments or claims that individuals arrive at within a society.
To close, then: May is engaged in a valuable project, but we need to go further. What we need is data addressing whether or not there is sufficient agreement about foundational moral propositions within a culture, not whether or not most humans are working with the same mental mechanisms when we engage in moral cognition. Until data of this sort are generated, we should adopt moderate skepticism toward moral agreement within a society. But moderate skepticism is grounds for withholding judgment as to whether or not we have widespread agreement about moral foundations. I agree with May that this is not grounds for pessimism, but it is no more grounds for optimism.