Stanford suggests that an explanation for the apparent “externalization” of moral demands can be found in the fact that this externalization ensures agents are simultaneously motivated to (a) engage in (frequently adaptive) cooperative interactions, and (b) exclude from these interactions (or even punish) those who are not motivated to cooperate. In this way, the externalization of moral demands is said to be an efficient way of leading to the twin behaviors – cooperation and exclusion/punishment of non-cooperators – needed to maintain adaptive cooperative arrangements (also taking into account the ancestral state of the cognitive and conative aspects of human psychology).
However, a key element of this explanation of the (apparent) externalization of moral demands is unconvincing. In particular, while it is relatively clear how an externalized moral demand – such as “injured in-group members need to be helped” – can motivate an agent to act as the demand states, it is not clear how an externalized moral demand alone can motivate an agent to exclude (or even punish) those who fail to act as the demand states. After all, moral demands do not (typically) state what should be done when they are violated: they prescribe what agents are to do, not what agents are to do when others have not done what they are to do.
For this reason, Stanford's proposal implicitly presumes that the externalization of moral demands co-evolved with other motivational structures. In particular, it must be assumed that agents do not just externalize moral demands, but are also motivated to exclude/punish those who violate them – rather than, say, to try to educate them or to learn from them (both of which are also a priori coherent responses to this situation). In this way, though, the major attraction of the author's explanation of the externalization of moral demands is lost: we are back at requiring separate motivational states for cooperation and the exclusion/punishment of non-cooperators.
Given this, I suggest that a more plausible hypothesis about the evolution of (the relevant aspects of) human moral cognition is based on seeing moral demands as subjective, but inherently conjunctive. More specifically, I suggest that there are evolutionary reasons to expect agents to be motivated by moral demands of the following sort: “Injured in-group members need to be helped and non-helpers of injured in-group members are not to be helped.” Note that this is not just a re-description of Stanford's proposal. Stanford's suggestion is that moral demands are somewhat akin to objective facts, not personal opinions. By contrast, my suggestion is that moral demands are akin to personal opinions, not objective facts – it is just that they are personal opinions that speak to, and thus motivate, a wider range of actions than other personal opinions. There are two reasons why this purely subjective proposal is more plausible than Stanford's suggestion.
First, the subjectivist-conjunctive proposal still leads to the adaptive connection between cooperation and the exclusion/punishment of non-cooperators that Stanford has rightly emphasized. In turn, this implies that the same selective pressures appealed to in the target article – namely, the fact that the existence and maintenance of adaptive (hyper)cooperative arrangements depend on cooperators being consistently likely to interact with other cooperators (Skyrms Reference Skyrms1996; Reference Skyrms2004; Sober & Wilson Reference Sober and Wilson1998) – also operate here.
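The assortment condition appealed to here – that cooperation is maintained only when cooperators are consistently likely to interact with other cooperators – can be illustrated with a minimal payoff sketch in the spirit of the models Skyrms discusses. The specific parameter values (benefit b, cost c, assortment r) are illustrative assumptions of mine, not taken from the target article or from Skyrms.

```python
# Minimal sketch of why cooperation requires assortative interaction.
# Illustrative model: a cooperator pays a cost c to give its partner a
# benefit b; with assortment r, an agent meets its own type with
# probability r and a randomly drawn partner otherwise.

def cooperator_payoff(r: float, p: float, b: float, c: float) -> float:
    """Expected payoff to a cooperator when cooperators have frequency p:
    it meets a cooperator with probability r + (1 - r) * p, and it always
    pays the cost c of helping."""
    return (r + (1 - r) * p) * b - c

def defector_payoff(r: float, p: float, b: float, c: float) -> float:
    """Expected payoff to a defector: it is helped only when the random
    component of matching pairs it with a cooperator."""
    return (1 - r) * p * b

b, c, p = 3.0, 1.0, 0.5

# Random matching (r = 0): defectors out-earn cooperators, so
# cooperation erodes.
print(cooperator_payoff(0.0, p, b, c), defector_payoff(0.0, p, b, c))

# Assortative matching (r = 0.5): cooperators now out-earn defectors.
# In general, cooperation is favored exactly when r * b > c, a
# Hamilton-style condition.
print(cooperator_payoff(0.5, p, b, c), defector_payoff(0.5, p, b, c))
```

The point of the sketch is only that the selective pressure is indifferent to *how* assortment is achieved: whether non-cooperators are screened out by externalized demands (Stanford) or by conjunctive subjective preferences (my proposal), the same condition is satisfied.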
Second, the present subjectivist-conjunctive proposal is more in line with the ancestral condition of human (moral) psychology. As Stanford notes, there is good reason to think that humans started out with a decision-making machinery that included a battery of stored reflexes as well as representational (content-based) mental states, some of which are imperative (conative) in form, and some of which are indicative (cognitive) in form (see also Millikan Reference Millikan2002; Schulz Reference Schulz2011; Reference Schulz2013; Reference Schulz2018; Sterelny Reference Sterelny2003). In Stanford's proposal, moral cognition evolved by taking a subjective motivational (conative) state and shifting it closer to a cognitive state. However, a far simpler change – with, as just noted, the same dual-motivational outcome – would be to expand the content of some of the agent's motivational states: instead of motivating just one type of behavior, a given state comes to motivate several types of behavior simultaneously. Since smaller changes to an organism's existing traits are more likely to evolve than larger ones, this favors my proposal over Stanford's.
Finally, it is important to emphasize that the subjectivist hypothesis suggested here still provides an explanation of the seeming externalization of moral demands. In the present proposal, the apparent objectivity of moral demands stems from the fact that they are expressions of subjective preferences both for acting in certain ways and for not interacting with (or even punishing) those who do not share these preferences. That is, when people are asked to rank the “objectivity” of a moral demand, they rank it as closer to a fact than to a personal preference (Goodwin & Darley Reference Goodwin and Darley2008) because moral demands motivate more behaviors than (most) personal preferences do – a feature they share with cognitive states, which are also relevant to a wide variety of different actions. In other words, the difference in how “objective” a norm is taken to be rests just on the content of the relevant motivational state, not on the (second-order) attitude the agent takes towards that motivational state. Or, to put it succinctly: in my proposal, the difference between ice cream and Nazis is that, while I simply want to eat ice cream, I want to not be a Nazi and to not interact with those who want to be Nazis.