
The person as moralist account and its alternatives

Published online by Cambridge University Press: 22 October 2010

Joshua Knobe
Affiliation:
Program in Cognitive Science and Department of Philosophy, Yale University, New Haven, CT 06520-8306. joshua.knobe@yale.edu; http://pantheon.yale.edu/~jk762/

Abstract

The commentators offer helpful suggestions at three levels: (1) explanations for the particular effects discussed in the target article; (2) implications of those effects for our understanding of the role of moral judgment in human cognition; and (3) more theoretical questions about the overall relationship between ordinary cognition and systematic science. The present response takes up these three issues in turn.

Type: Author's Response
Copyright © Cambridge University Press 2010

The commentators have offered helpful suggestions and criticisms at all levels, from the nitty-gritty of the individual experiments to the broadest sorts of theoretical and philosophical issues. Clearly, the questions at these different levels are intimately connected, but since one has to begin somewhere, perhaps it is best to start by focusing in on the trees and then move gradually toward thinking about the overall shape of the forest. In other words, we can start with specific questions about the explanations for particular effects and then move to implications for broader theoretical and philosophical issues.

R1. Alternative hypotheses

Recent studies indicate that people's moral judgments can impact their application of a surprising range of different concepts: intentional action, causation, freedom, knowledge, doing and allowing, desire, and many others besides. The primary aim of the target article was to provide an explanation for this pervasive impact of moral judgment.

To explain these phenomena, I offered a specific hypothesis. The suggestion was that people come to an understanding of the actual world by comparing it with certain alternative possibilities (counterfactuals). People's moral judgments impact their selection of alternatives and thereby influence their application of a wide range of different concepts.

A number of commentators responded by developing competing hypotheses. These hypotheses explain the impact of moral considerations in terms of quite different sorts of cognitive processes.

R1.1. A case study

One worry about many of these hypotheses is that they proceed by picking out just one concept whose application is affected by moral judgment and examining this one concept in isolation from all the others. Hence, these hypotheses offer explanations for one of the effects of moral judgment but say nothing about other effects that seem, at least initially, to be closely related.

Of course, the fact that a hypothesis is framed entirely in terms of one of these effects does not mean that this hypothesis has to be incorrect. Future research might show that the hypothesis can be extended in fairly natural ways to handle other related phenomena, or perhaps it will be shown that the phenomena that initially seem so closely related are, in fact, fundamentally different. The problem, then, is not that these hypotheses are necessarily wrong but just that they have not yet been developed to the point where they can be properly evaluated.

Thus, to take one example, Scanlon suggests that we might be able to explain the apparent asymmetries in people's intuitions about intentional action by looking more closely at the meaning of the word "intentionally." Specifically, suppose we assume that an expression like "John brought about the outcome intentionally" actually has two distinct meanings:

(a) John knew that he was bringing about the outcome.

(b) John aimed at bringing about the outcome.

People's moral judgments might then impact their intuitions simply by affecting their sense of which of these two meanings is the relevant one in the context at hand.

This hypothesis does seem to do a nice job of accounting for the asymmetries observed in people's intuitions about intentional action, but the first thing to notice here is that the very same effect can be observed for numerous other concepts. When people determine that a foreseen side-effect is morally bad, they are not only more inclined to say that the agent brought it about intentionally; they are also more willing to say that she was in favor of it, that she decided to bring it about, even that she advocated it. Presumably, it is not merely a coincidence that we find this exact same effect arising in the application of so many different concepts. So what we really need here is an explanation for the pattern as a whole.

One option would be to extend Scanlon's hypothesis by claiming that the ambiguity posited for the word "intentionally" can also be found in numerous other expressions. For example, one might say that an expression of the form "John advocated the outcome" also has two distinct meanings. Roughly:

(a) John called on people to adopt a policy with the aim of bringing about the outcome.

(b) John called on people to adopt a policy that he knew would bring about the outcome.

But we would then be offering a hypothesis of a very different type. We would no longer just be pointing to some idiosyncratic feature of the word "intentionally." Instead, we would be positing a general feature of language that led to a systematic ambiguity within a whole class of expressions. And, of course, the methods used for testing the hypothesis would then have to be correspondingly different. We couldn't proceed just by looking at patterns in people's intuitions about intentional action. We would have to introduce a more general claim about word meanings and then evaluate this claim both by gathering data involving people's use of numerous different expressions and by thinking about the ways in which it fits into larger theories about lexical semantics, polysemy, and so forth.

R1.2. Application to further examples

This very same worry also arises, albeit in somewhat different forms, for a number of the other alternative hypotheses. For example:

Nanay points out that people's judgments about the two intentional action cases differ not only from a moral perspective, but also from a modal perspective. Specifically, he claims that people who are given the harm case make the judgment:

If the chairman had not ignored the environmental considerations, he would not have harmed the environment.

but that people who are given the help case do not make the judgment:

If the chairman had not ignored the environmental considerations, he would not have helped the environment.

Nanay then suggests that this difference in people's modal judgments can lead to a difference in people's intuitions about intentional action. Hence, it might be possible to explain the effect without introducing moral considerations in any way.

Menzies argues that the asymmetries observed for people's causal judgments can be explained if we adopt a theory of causal cognition that emphasizes the role of normality. Suppose we assume that people only regard an event as a cause to the extent that this event "intervenes in the normal course of events and makes a difference in the way these develop" (para. 5). Now suppose we further assume that people's ordinary notion of normality is not simply a matter of statistical frequency but also takes into account social, legal, and moral norms. Starting from these two assumptions, we arrive at an interesting and surprising conclusion: If both the behavior of the administrative assistant (a perfectly normal behavior) and the behavior of the professor (a violation of social and moral norms) are necessary for the problem to arise, people will tend to pick out the behavior of the professor and regard it, in particular, as the cause of the problem.

Hindriks suggests that we can come to a better understanding of the intentional action effect by applying the legal distinction between actus reus (guilty act) and mens rea (guilty mind). He then notes that most research in this domain has focused on the impact of people's judgments of the moral status of the agent's action, with the assumption being that these judgments are somehow influencing people's intuitions about intentional action. By contrast, he suggests that people's intuitions might actually be affected by a judgment of mens rea, that is, a judgment about the status of the agent's mental states. In earlier work, Hindriks has spelled out this claim in the hypothesis that "An agent S ϕs intentionally if S intends to ψ, ϕs by ψing, expects to ϕ by ψing, and ψs in spite of the fact that he believes his expected ϕing constitutes a normative reason against ψing" (Hindriks 2008, p. 635).

Humphrey argues that the intentional action effects can be given a straightforward Bayesian interpretation. All one needs to consider is the conditional probabilities people assign in the relevant cases. Thus, suppose we compare (a) the conditional probability that the agent harmed the environment intentionally, given that he implemented the program, and (b) the conditional probability that the agent helped the environment intentionally, given that he implemented the program. If one assigns priors in such a way that (a) is greater than (b), it will follow straightforwardly that people should be more inclined to say that the agent harmed intentionally than that he helped intentionally.
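To make the structure of this proposal concrete, here is a minimal sketch in Python. The numbers are invented for illustration and are not drawn from Humphrey's commentary; all the proposal requires is that the prior probability of intent be higher in the harm case than in the help case.

```python
def posterior_intentional(prior_intent, p_implement_given_intent,
                          p_implement_given_no_intent):
    """Bayes' rule: P(intentional | agent implemented the program)."""
    p_implement = (p_implement_given_intent * prior_intent
                   + p_implement_given_no_intent * (1 - prior_intent))
    return p_implement_given_intent * prior_intent / p_implement

# Assumed priors: knowingly causing harm is taken to signal intent more
# strongly than incidentally producing a benefit does.
harm_case = posterior_intentional(0.5, 0.9, 0.4)
help_case = posterior_intentional(0.1, 0.9, 0.4)

print(f"P(intentional | implemented), harm case: {harm_case:.2f}")  # ~0.69
print(f"P(intentional | implemented), help case: {help_case:.2f}")  # 0.20
```

On this reading, the asymmetry in people's intuitions simply tracks the asymmetry in the priors they bring to the two cases.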

Brogaard agrees that people's intuitions about intentional action are not purely scientific in nature, but she argues that it would also be a mistake to understand them in terms of the judgments people make about whether actions are morally right or wrong. Instead, she claims, we should understand these intuitions in terms of judgments of desert. People make judgments about whether the agent deserves a side-effect, or the blame for it, and these judgments of desert end up influencing their intuitions about whether or not the behavior was performed intentionally.

Lombrozo & Uttich note that people ascribe different attitudes in cases of norm violation from the attitudes they ascribe in cases of more ordinary behavior. If we see that a person has chosen to implement a program that has some entirely innocuous effect, we might assume that this person did not actually care very much about the program either way – maybe he just decided to adopt it without much thought. But now suppose, instead, that we saw a person choosing to implement a program that he knew would harm the environment. Since harming the environment is a norm violation, we might immediately conclude that he must have had some strong interest in adopting this program, and we would therefore be more inclined to attribute to him the kind of pro-attitude that would lead us to say that he acted intentionally.

Each of these proposals offers interesting suggestions about a particular concept – and many of these proposals will no doubt lead to important new insights – but all of them seem to leave us with a mystery as to why the impact of moral judgment is so pervasive.

For a particularly promising example, consider the hypothesis that Menzies offers about people's causal intuitions. Menzies suggests that causal intuitions can be affected in a complex way by judgments of what might be called "normality." Now, it is an interesting question whether this hypothesis is right or wrong. (As it happens, I think that it is completely correct; see Hitchcock & Knobe 2009.) However, the key point is that this hypothesis does not explain why the effect we find for the concept of causation can also be found for so many other concepts. Indeed, there is an important sense in which it does not really explain the effect for causal intuitions at all. It simply describes a certain pattern in people's application of this concept, without telling us why the concept works like this and not some other way. So this sort of hypothesis gives us a tantalizing glimpse into the phenomenon at work here, but it seems that we will not really have an adequate account until we can offer a more general theory.

If I may be permitted to speculate, it seems to me that contemporary work on these problems is suffering from the legacy of a certain tradition of conceptual analysis. In early work in that tradition, it was thought that we should proceed by developing for each concept a list of necessary and sufficient conditions. The aim was to provide a separate list of conditions for each concept – one list for the concept of intentional action, one for the concept of causation, and so forth. This tradition has now been widely repudiated. None of the commentators on the present target article attempted to provide lists of necessary and sufficient conditions, and I am sure that most of them would agree that such an approach is unlikely to prove fruitful. Yet, though researchers today are anxious to distance themselves from this program of list-making, I suspect that a certain remnant of that earlier tradition still remains. There are still attempts to go through people's various concepts and provide something like an “analysis” for each of them; it's just that these analyses no longer take the form of necessary and sufficient conditions.

In my view, we should make an even more radical break with the tradition. There is simply no use in developing something like an “analysis of the concept of intentional action” and then, separately, an “analysis of the concept of causation.” Instead, we should recognize that people's intuitions about each of these concepts are shaped by a number of distinct psychological processes, and that each of these processes in turn influences intuitions about a number of different concepts. So what we really need is not a separate theory for each of the separate concepts but rather unifying theories of the underlying processes. Such theories might not offer us a comprehensive picture of any one concept, but they will allow us to generate specific testable predictions regarding a whole range of different concepts.

R1.3. Motivation to blame

The contribution from Alicke & Rose pursues precisely this strategy. They suggest that the phenomena might be explained in terms of a single underlying psychological process that can affect people's intuitions across a wide variety of different domains. Specifically, they suggest that people sometimes experience a motivation to justify attributions of blame and that this motivation can affect their views about intention, causation, and numerous other issues.

In the target article, I had argued that this sort of process could not explain the effects under discussion here. Alicke & Rose reply by reviewing some very impressive data from Alicke's earlier work (Alicke 1992), which they take to provide conclusive evidence that people's judgments actually can be distorted by a motivation to blame.

This commentary definitely raises a number of important issues, but I worry that I was not sufficiently clear in articulating the nature of the disagreement in the target article itself. The thing to keep in mind is that no one is actually trying to refute the key claim made in Alicke's earlier work. In that earlier work, Alicke provides excellent evidence for the claim that people's intuitions can be distorted by a motivation to blame, and none of the people writing on these issues more recently have been trying to call that claim into question. Rather, the issue is just about whether the theory developed in Alicke's earlier work provides the best explanation for a specific class of effects that have been uncovered in more recent work. Some researchers have argued that it can (Alicke 2008; Nadelhoffer 2006a); others have argued that it cannot (Nichols & Ulatowski 2007; Wright & Bengson 2009; Young et al. 2006).

At this point, I think that Alicke's basic theoretical claims about the importance of a motivation to blame have been established beyond reasonable doubt, and there is no need to provide any further evidence for them. The thing to focus on now is just the detailed structure of these particular effects and whether a motivational explanation can account for them. In the target article, I reviewed some of the experimental evidence for the view that it cannot.

R1.4. Sources of evidence

Sinnott-Armstrong raises more or less this same issue about my own preferred account. The account suggests that people's moral judgments affect their counterfactual reasoning, which in turn plays a role in their application of numerous different concepts. But, Sinnott-Armstrong asks, how is such an account to be assessed? Given that we can't actually see directly which counterfactuals people regard as relevant, how can we know whether the account is true or false?

This is exactly the right question to be asking, and I am sure that future research will offer us certain new techniques for answering it. At present, though, we have two major methods at our disposal.

First, the account predicts a particular pattern of intuitions across a broad range of different concepts. At the very heart of the approach is the idea that we should, as far as possible, avoid introducing ad hoc hypotheses just to explain the impact of moral judgment on one or another particular concept. Instead, we start out with perfectly general principles about the impact of moral judgment on counterfactual thinking. Then we introduce independently testable claims about the role of counterfactual thinking in the application of certain individual concepts. Together, these two types of claims generate specific testable predictions.

The thing to notice about this strategy is that it allows us to make predictions about the impact of moral considerations on the application of numerous concepts that have not yet been empirically investigated. Thus, to take one example, Jonathan Phillips (personal communication) points out that counterfactual reasoning seems to play a role in people's ordinary notion of choosing. (An agent cannot be said to have “chosen” one specific option unless other options were also available.) Hence, we should immediately predict an impact of moral judgment on people's intuitions about whether or not an agent can truly be said to have “chosen” a particular option. Or, to take a different case, it seems that counterfactual reasoning plays a role in people's intuitions about whether a given trait is innate. Accordingly, one might predict an impact of moral judgments on intuitions about innateness, and Richard Samuels and I are testing that prediction in a series of studies under development now. In essence, then, the first answer to Sinnott-Armstrong's question is that we can test the theory by using it to generate new predictions about the application of various concepts and checking to see whether those predictions are borne out.

But there is also a second way in which the theory can be put to the test. We can use various methods to look more directly at people's judgments about the relevance of counterfactuals. For example, numerous studies have proceeded by presenting participants with questions of the form: "If only ___, this outcome would not have arisen." Participants can fill in the blank with whichever possibility they prefer, and researchers then infer that the possibilities chosen most often are regarded as most relevant. Studies using this methodology consistently show that moral judgments do have an impact on intuitions about counterfactual relevance (McCloy & Byrne 2000; N'gbala & Branscombe 1995).
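As an illustration of how this inference from completions to relevance works, consider a minimal sketch. The responses are invented, loosely echoing the professor-and-pens vignette; actual studies use larger samples and explicit coding schemes.

```python
from collections import Counter

# Hypothetical participant completions of the prompt
# "If only ___, this outcome would not have arisen."
completions = [
    "the professor had not taken a pen",
    "the professor had not taken a pen",
    "the administrative assistant had not taken a pen",
    "the professor had not taken a pen",
]

# The possibilities generated most often are inferred to be the
# counterfactual alternatives participants regard as most relevant.
for alternative, count in Counter(completions).most_common():
    print(f"{count}x  {alternative}")
```

If moral judgments shape counterfactual relevance, the norm-violating behavior (the professor's) should dominate such tallies, and that is the pattern these studies report.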

In conclusion, then, our research can proceed by looking at the relationships among a complex constellation of different kinds of data. We start out with certain principles about the role of moral judgment in counterfactual thinking and certain hypotheses about the role of counterfactual thinking in the application of particular concepts. Then we check the theory against evidence regarding both counterfactual thinking and the application of concepts, testing to see whether all of these data conform to the theoretical predictions (see Footnote 1). Presumably, they will not, and the theory will have to be revised in important respects. However, my hope is that we will at least be looking in roughly the right neighborhood and thereby moving toward a better understanding of these phenomena.

R2. The role of moral judgment

Suppose now that we focus, if only for the sake of argument, on the particular account advanced in the target article. The most important and controversial aspect of this account is the role it assigns to moral judgment. Yet, it can prove surprisingly difficult even to say what that role is and why it should be controversial, much less to determine whether the account is right or wrong.

R2.1. Investigating the judgments themselves

To begin with, there is the question as to what we even mean by the phrase “moral judgment.” When one first hears this phrase, one is naturally drawn to think of a specific sort of conscious event. One thinks, for example, of cases in which we focus in on a particular behavior, bring to bear a variety of different considerations, and then determine that an agent deserves moral blame or praise.

Now, conscious episodes like this certainly do take place, but it sounds a bit implausible to suppose that such episodes could somehow be exerting a pervasive impact on people's whole way of understanding the world. We quite often wonder whether, for example, a person has a particular intention, and it seems absurd to suppose that whenever we want to answer such a question, we have to start out by making a full-blown moral judgment.

There is, however, a way of interpreting the hypothesis on which this sense of absurdity dissolves. To get a feeling for the issue, consider the way we might proceed if someone suggested that people's whole way of understanding the world was shaped by statistical reasoning. Clearly, when one first turns to the topic of statistical reasoning, one imagines a particular sort of conscious episode. (One thinks, perhaps, of a person moving step-by-step through the computations involved in a formal analysis of variance.) But surely the claim is not that this sort of cognition is shaping our whole understanding of the world! Rather, the idea is that people go through a kind of immediate, automatic, non-conscious process and that this process is analogous in certain important respects to what people do when they are consciously conducting statistical analyses.

The claim under discussion here should be understood in more or less this same way. We are certainly not suggesting that people's conscious moral beliefs can somehow shape their whole understanding of the world (see Knobe 2007). Rather, the claim is that people make certain immediate, automatic, non-conscious moral appraisals and that these automatic appraisals then exert a surprising influence on the rest of their cognition.

With this basic framework in mind, we can now turn to a series of interesting suggestions from the commentators.

R2.1.1. Theory-of-mind and counterfactuals

The commentaries from Guglielmo and Girotto, Surian, & Siegal (Girotto et al.) point to two important characteristics of people's moral judgments:

  1. Guglielmo notes that conscious moral judgments are based in part on reasoning about the agent's mental states.

  2. Girotto et al. note that conscious moral judgments are based in part on counterfactual reasoning.

These two points appear to spell trouble for the theory presented in the target article. After all, the claim was that people make a moral judgment which then influences their reasoning about mental states and counterfactuals. But if people have to think about mental states and counterfactuals before they can even make this moral judgment, how could the process ever get off the ground?

My answer is that the initial judgment that influences people's subsequent reasoning is deeply different from the conscious judgment that this reasoning can ultimately inform. People's conscious moral judgments can take into account information about numerous different considerations, including mental states, counterfactuals, and a great deal else besides. But their initial, purely non-conscious judgments do not work like that. These initial judgments are instead the product of an extremely rapid and far less complex process.

To see the basic idea here, imagine what might go through your mind if you were actually in the room as the vignette about the professor and the pens unfolded. There you are, watching as the professor moves toward the desk and starts reaching for one of the pens. Ultimately, you might end up making a conscious moral judgment about this behavior. You might decide that the professor deserves blame for the problem that results, or that his act was morally wrong, or something of the kind. But before you can even begin any of this sophisticated reasoning, you might go through a more automatic, entirely non-conscious process of moral appraisal. As you see the professor reaching for the pens, you recognize that he is supposed to refrain from taking them, and you therefore conceptualize his action by comparing it to the behavior he was supposed to perform, namely, refraining from taking pens. The key claims now are that (a) your tendency to focus on this specific comparison involves a kind of very simple moral cognition and (b) this simple form of moral cognition does not itself depend on your subsequent reasoning about mental states or counterfactuals.

R2.1.2. Origins of moral judgment

A question now arises about how exactly people make these rapid and automatic moral judgments. Here a number of commentators have provided helpful suggestions.

Kang & Glassman propose that moral judgments are shaped by the aim of acquiring cultural capital. People seek to signal their membership in particular communities and end up arriving at moral judgments accordingly. (Just as one might wear skinny jeans to signal one's membership in the community of Brooklyn hipsters, one might condemn abortion to signal one's membership in the community of Southern evangelicals.)

Terroni & Fraguas suggest that people's moral judgments can be impacted by their emotional states. They then hypothesize that people might make substantially different moral judgments when their emotional states were altered by clinical depression. So a person might arrive at different judgments about the very same case depending on whether that person happened to be depressed or not.

Carpendale, Hammond, & Lewis (Carpendale et al.) argue that people's capacity for moral judgment develops in the context of social interaction. Children learn to treat others as human beings (as opposed to mere physical objects), and they thereby acquire an understanding of moral norms.

Each of these hypotheses seems plausible and promising, but it would be especially exciting if we could use these approaches to drive a wedge between people's conscious moral judgments and their more automatic moral appraisals. Thus, suppose that an individual is trying to gain cultural capital by signaling membership in the community of liberal intellectuals. She might thereby end up arriving at the obvious sorts of conscious moral judgments: opposition to sexism and homophobia, support for disadvantaged groups, and so forth. But would her non-conscious appraisals go in this same way? Perhaps not. It might be that her conscious moral judgments would be shaped by the aim of gaining cultural capital, whereas her intuitions about intentional action, causation, and the like would continue to reveal a very different system of values at work (see, e.g., Inbar et al. 2009). Or consider the case of depression. Even when a person is clinically depressed, she may be able to exert enough cognitive control to continue making exactly the same sorts of conscious judgments that she would have otherwise. But perhaps her depression would nonetheless impact her non-conscious appraisals, and we might be able to pick up this impact just by asking questions about intention or causation.

R2.2. Impact of non-moral considerations

The commentaries from Girotto et al. and Guglielmo point out that people's intuitions about intentional action can be influenced not only by moral considerations but also by information about the agent's mental states. Thus, people are reluctant to say that an agent brought about an outcome intentionally when the agent shows regret (Guglielmo & Malle, in press; Phelan & Sarkissian 2008; Sverdlik 2004) or when the agent falsely believed that she would not be bringing the outcome about (Pellizzoni et al. 2010).

These are good points, and any correct theory of intentional action ascription will have to accommodate them. The theory presented in the target article does so by suggesting that moral considerations are used to set a kind of threshold, while information about the agent's mental states is used to determine whether the agent falls above or below that threshold. Hence, the position of the agent relative to the threshold ends up depending on a complex combination of moral considerations and mental state information.
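As a toy illustration of this threshold picture, consider the following sketch. The numeric values are invented; the target article offers no quantitative model, and the point is only to show how moral considerations and mental-state information occupy different roles.

```python
def ascribed_intentionally(attitude, outcome_is_bad):
    """Toy threshold model of intentional action ascription.

    Moral considerations set the threshold on a pro/con attitude scale
    (0 = indifference); mental-state information fixes the agent's
    position on that scale.
    """
    threshold = -0.5 if outcome_is_bad else 0.5  # assumed values
    return attitude > threshold

# The chairman is indifferent (attitude 0) toward the side-effect.
print(ascribed_intentionally(0.0, outcome_is_bad=True))   # True: harmed intentionally
print(ascribed_intentionally(0.0, outcome_is_bad=False))  # False: did not help intentionally

# Mental-state information still matters: an agent who shows regret, or who
# falsely believed the outcome would not occur, would occupy a lower position
# on the scale and could fall back below the threshold even in the bad case.
print(ascribed_intentionally(-0.8, outcome_is_bad=True))  # False
```

This is how the model accommodates the data cited by Girotto et al. and Guglielmo: the threshold is set by moral considerations, but whether an agent clears it depends on the mental-state evidence.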

R2.3. Moral concepts

What we have here, then, is a concept whose application can be influenced both by moral considerations and by mental state information. How should such a concept be understood? Gintis suggests that the best interpretation might be that people are simply using the concept of intentional action as a moral concept. The whole effect would then be rather unsurprising and unimportant. All it would show is that moral considerations can impact the application of moral concepts.

At least initially, this does seem like an appealing strategy. One starts out with a distinction between “moral” concepts and “non-moral” concepts, such that any concept whose application is impacted by moral considerations is supposed to fall in the former category. If one then finds an impact of moral considerations on a concept that had previously been classified as non-moral, one should not conclude that the whole framework is thereby called into question. All one needs to do is just reclassify that one concept.

Still, it does seem that there is a certain point at which this sort of strategy begins to look unworkable. If we find an impact of moral considerations on just one concept, we can always proceed by reclassifying it. But that is not the situation in which we actually find ourselves. These effects are arising not only for the concept of intentional action, but also for the concepts of causation and knowledge, even for the concept of advocating. At some point, I think, one has to conclude that it is becoming unhelpful to divide off a special sphere of “moral concepts” and claim that the impact of moral considerations arises only for them.

R2.4. Morality and normality

Kreps & Monin and Mandelbaum & Ripley take things even further in this direction. They suggest that the representation that is influencing people's intuitions in these cases is not actually specific to morality in particular. Rather, it is a representation of something like “normality” or “expectation.” Such a representation would then unite moral considerations with considerations of a more purely statistical variety.

Continuing with this general approach, Ulatowski & Johnson propose that one can impact the relevant representation simply by creating stimulus materials that present a given outcome as a “default.” Even if this outcome is not described as in any way morally good or right, the claim is that it will nonetheless be seen as having a particular sort of status that will prove relevant in people's subsequent cognition.

I think that the commentators are exactly right on this score, and I look forward to further research expanding on this theme. My only disagreement, if it can be considered a disagreement at all, is on the level of rhetoric. The commentators see themselves as deflating the claims made in the target article, showing that moral considerations are actually less central than I had originally suggested. By contrast, I would describe them as radicalizing the target article's original thesis. What they are showing is that it is not even possible to isolate a particular point in the process where the moral judgments come in. Instead, moral and statistical considerations appear to be welded seamlessly together from the very beginning.

R2.5. Morality and language

However, a number of commentators actually suggested moving in the opposite direction. They proposed theories according to which moral considerations are confined to a single, highly delimited role, while the remainder of the process has nothing to do with morality and proceeds more or less like a scientific investigation.

In particular, Egré and Cova, Dupoux, & Jacob (Cova et al.) suggest that the role of moral considerations might be confined entirely to language. The basic idea here is a simple and powerful one. Suppose that people's actual capacity for theory-of-mind works by classifying attitudes along a continuous scale. Still, it might be that our language cannot describe attitudes in these purely continuous terms. If we want to capture an agent's attitude in language, we need to impose some kind of threshold and then say that a particular term or phrase applies whenever the attitude goes beyond this threshold. So perhaps it is there that morality enters into the picture. In other words, it might be that the underlying scale is entirely non-moral, but that morality plays a role in the process we use to determine the position of the threshold for particular linguistic expressions.

One way of spelling out this sort of account would be to represent the underlying scale using numbers. We could say that the number 0 stands for absolute indifference, the positive numbers stand for pro-attitudes, and the negative numbers for con-attitudes. A particular agent's attitude could be represented using the diagram shown in Figure R1:

Figure R1. Representation of an agent's attitude on an absolute scale.

Yet, although people would have some representation of the agent's attitude along this scale, the actual expressions of natural language would not correspond to points on the scale in any absolute sense. So there would not be any expressions in English that could describe an agent as having an attitude of, say, "+2 or higher." Instead, all of the expressions of natural language would stand in a more complex relationship to the scale. They would characterize the agent's attitude relative to a (partially moral) default. Thus, if it turned out in the case at hand that the default was an attitude of −1, the expressions of our language would describe the agent's attitude only relative to this default position, characterizing it as "3 points past the default."

There is, however, another possible way in which this system could work. It could be that human beings do not make use of any purely absolute representations at any stage of processing. Instead, the attitude would be represented from the very beginning in terms of its position relative to the default. We would start by labeling the 0 point as default and then represent the agent's attitude like this (Fig. R2):

Figure R2. Representation of an agent's attitude relative to a default.

On this latter view, the comparison with the default is already available in people's underlying, nonlinguistic representation of the attitude. The expressions of natural language can then correspond in a straightforward way to these nonlinguistic representations.
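A small sketch may make the contrast concrete. All numbers are invented, following the conventions of the figures: 0 marks absolute indifference, and the default in this case sits at −1.

```python
# Hypothesis 1: a purely non-moral, absolute representation, with the
# (partially moral) default applied only at the linguistic stage.
attitude_absolute = 2.0
default = -1.0
linguistic_description = attitude_absolute - default  # "3 points past the default"

# Hypothesis 2: no absolute representation at any stage; the attitude is
# encoded relative to the default from the very beginning.
attitude_relative = 3.0

# Both schemes yield the same linguistic description, which is why data on
# language use alone cannot adjudicate between them.
assert linguistic_description == attitude_relative
```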

The primary difference between these two hypotheses is that the first posits an entirely non-moral representation, which is then obscured in certain ways by complex linguistic rules, whereas the second does not posit any purely non-moral representation at any level. The key to adjudicating between these hypotheses, then, is to come up with a richer account of what the non-moral representation is supposed to be doing. Given that it is not necessary as an explanation for the way people use expressions in natural language, what exactly is it used for? If we had a better account of what the non-moral representation was supposed to be doing, we would be better able to decide whether it is actually there.

R2.6. Characterizing the effect

The target article claims that moral considerations play a surprisingly important role in people's cognition. In trying to characterize this role, I adopted a number of different formulations. Sometimes I said that moral considerations figure in people's competence, sometimes that moral considerations suffuse the process through and through. The commentators suggest that both of those formulations are misleading and unhelpful.

Alexander, Mallon, & Weinberg (Alexander et al.) point out that no clear criteria are ever given for picking out a “competence” and distinguishing it from the various other factors that impact people's intuitions. They therefore suggest that we dispense with this distinction between competence and other factors and simply focus on exploring the various different processes that impact people's intuitions.

Stich & Wysocki note that there is a perfectly clear sense in which my own account does not have moral considerations influencing the process “through and through.” On the contrary, the account says that moral considerations play a role in one specific part of the process but do not exert any influence on certain other parts.

These are both excellent points, and I agree that the formulations adopted in the target article may indeed be unhelpful in certain respects. So instead of defending what I wrote there, let me simply accept these criticisms and try now to formulate the point more accurately.

My aim in the target article was to argue against a particular vision. This vision distinguishes two aspects of the processes that generate people's intuitions:

  1. A kind of "underlying" or "fundamental" capacity

  2. Various additional factors that in some way "bias" or "distort" people's intuitions

The claim, then, is that the fundamental capacity is entirely non-moral and that the impact of moral considerations only arises because of the presence of these distorting factors.

Now, the distinction between these two aspects might be spelled out in various different ways, and different researchers would presumably adopt quite different accounts of the distinction. What unites all of these various accounts, however, is the claim that we can carve off a distinct capacity that is entirely non-moral and that is capable, all by itself, of generating an answer to the issue in question. Hence, faced with a person's intuition about intentional action, we might say: “This person's fundamental capacity for theory-of-mind would normally have classified this behavior as unintentional. However, her moral judgments got in the way and led her to regard it as intentional.”

My aim was to show that this sort of strategy cannot be made to work. On the view I develop, there simply is no distinct capacity that is entirely non-moral and that is capable, all by itself, of determining whether a behavior is intentional or unintentional. Thus, on the model provided in the target article, there would be no sense in asking a question like: “Suppose we got rid of all the moral considerations and just allowed people's fundamental capacity for theory-of-mind to proceed undisturbed. Which conclusion would they then draw about whether this behavior was intentional?” The trouble here is that the model does not involve any kind of distinct non-moral capacity which could answer the question in the absence of all moral considerations.

Note that this argument does not require me to say anything positive about the distinction between competence and performance. Nor does it require me to claim that there is no stage anywhere in the process that is untouched by moral considerations. All it requires is a kind of negative claim. Specifically: that it not be possible to isolate a distinct capacity that has a particular sort of non-moral character.

R3. Ordinary cognition and science

In thinking about people's ordinary ways of making sense of the world, it sometimes proves helpful to draw analogies with more systematic and explicit systems of thought. So one might say that people's ordinary understanding is similar in certain respects to Aristotelian metaphysics, or to legal theory, or to certain religious doctrines. These analogies can then help to illuminate aspects of this ordinary understanding that might otherwise have remained obscure.

One particularly common analogy here has been between people's ordinary understanding and systematic science. This analogy calls up a specific picture of what the human mind is like. A scientific researcher might have two different kinds of beliefs in a particular domain – a system of scientific beliefs and then, quite separately, a system of moral beliefs. Such a researcher might then find that her collaborators strongly disagree with her moral beliefs but that they are nonetheless in complete agreement with her scientific beliefs.

In the target article, I argued that this analogy was misleading. People's ordinary cognition does not appear to involve a clear distinction between purely “scientific” beliefs and moral beliefs. It might be helpful, therefore, to reject the analogy with science and to look instead at analogies between ordinary cognition and forms of systematic thought in which moral and non-moral considerations are more explicitly mixed.

R3.1. The relevance of moral considerations

Spurrett & Martin argue that there is little to be gained by discussing the respects in which ordinary cognition might or might not resemble science. Instead, they suggest that we simply focus directly on the ways in which people apply specific considerations to address particular questions. Adopting this latter approach, they claim that the effects described in the target article are best characterized as “fallacies of relevance.” That is, these effects are best understood as cases in which people apply moral considerations to questions in which only non-moral considerations would be relevant.

Spurrett & Martin may turn out in the end to be right on this score, but it is important to emphasize that the claim they are making is precisely the claim that is up for debate here. The central thesis of the target article was that people's ordinary cognition is radically different from scientific inquiry and that, in particular, ordinary questions like “Who caused the problem?” are not best understood on the model of scientific questions about causal relations. So, on the view defended in the target article, moral considerations actually are relevant to the ordinary questions people ask about whether one thing caused another, and there is no fallacy involved in applying such considerations to questions like these.

R3.2. Science and development

Kushnir & Chernyak suggest that the analogy to science might apply not so much to the beliefs people have at any given time but rather to the development of these beliefs in the first place. Hence, the beliefs people hold as adults might be radically different in various respects from the beliefs held by trained scientists, but the process people go through as children to acquire those beliefs might turn out to show many of the stages characteristic of scientific inquiry: looking for evidence, checking its fit to existing views, modifying these views when they do not fit the evidence, and so forth.

Kushnir & Chernyak's reference to the developmental literature here is a very helpful one, and future research could examine these developmental issues more directly. But it seems important at the outset to emphasize the very distinctive claim one makes when one says that ordinary human cognition resembles science. Such a claim presumably is not merely saying that ordinary human cognition involves taking in evidence and using it to assess prior views (a claim which is obviously true and needs no further defense). Instead, the claim seems to be an interesting and controversial one, which says something in particular about the precise way in which human beings use evidence to update their beliefs.

To see why, consider the way we might apply a similar approach in another domain. Suppose that someone says, “Human visual cognition uses Fourier transforms.” The claim here is presumably not just that human visual cognition uses some kind of computation. Rather, what is being claimed is that visual cognition makes use of one specific kind of computation – a kind of computation that was first formalized by modern mathematicians and is now known as a Fourier transform. This is an interesting hypothesis, which can be put to the test in further experimental studies.

Now suppose that someone says: “Human cognitive development uses the methods of science.” In just the same way, this claim cannot simply mean that cognitive development involves taking in evidence and using it to adjust our beliefs. (After all, that basic approach long predates the development of systematic science and can be found in an enormous variety of quite different modes of thought.) Rather, the claim has to be understood as saying that cognitive development makes use of the sorts of methods, first made explicit in the “scientific revolution” of the sixteenth and seventeenth centuries, that are now regarded as the distinctive methods of science. This is certainly an interesting hypothesis, which we can set about testing in experimental studies.

The thesis of the target article, however, was that existing experiments do not suggest that this hypothesis is correct. If we look to the distinctive characteristics of science – the characteristics that distinguish science from other systematic modes of thought – we find that people's ordinary non-conscious cognition does not tend to show these characteristics. For that reason, it might be helpful to understand ordinary cognition, not by looking to an analogy with contemporary science, but by looking to an analogy with the earlier modes of thought that the scientific revolution displaced.

R3.3. The function of theory-of-mind

Yet, even if the methods people use in ordinary theory-of-mind turn out to be radically different from the ones we find at work in science, the function of theory-of-mind might be exactly the same as the function of scientific psychology. Thus, it might be that people's ordinary theory-of-mind makes use of moral considerations, but that there is some sense in which its aim is simply to generate accurate predictions and explanations of behavior.

Exploring this possibility, Bartsch & Young suggest that the impact of moral considerations might be understood in terms of information about frequency or probability. Suppose people generally assume that morally bad behaviors are infrequent or improbable. The judgment that a behavior was morally bad would then impact their statistical understanding, which could in turn influence their intuitions about intention, causation, and the like.

A number of other commentators take up related themes. Baldo & Barberousse propose that affective reactions can themselves serve as information and that this information may influence people's intuitions. And Lombrozo & Uttich point out that, even if moral considerations are entering into people's judgments at the algorithmic level, the best description at the computational level might still be in terms of an attempt to predict and explain behavior.

Now, it certainly does seem to be the case that people can sometimes use moral judgments to arrive at statistical truths, and these proposals therefore merit closer analysis. We should distinguish, however, between two possible ways in which the proposals might be interpreted.

One possible claim would be about the actual cognitive process people go through on-line. It might be claimed, for example, that people make a moral judgment, then use this judgment to make an inference about the frequency of the relevant behaviors, which in turn influences their intuitions about causation. If the proposal is understood in this way, I think that it is probably false. The problem is that when researchers independently vary information about frequency and moral status, they continue to find that moral status is playing an independent role (Roxborough & Cumby 2009).

But perhaps there is another, very different way of understanding the proposal. One might say that facts about frequencies are playing a role, not at the level of people's on-line cognition, but rather at the level of an “ultimate” or “evolutionary” explanation. Thus, suppose that theory-of-mind evolved as a mechanism for predicting and explaining behavior. Then, if violations of moral norms were generally infrequent, knowing that a behavior violated a norm might be a good cue for making certain statistical judgments about it, and our capacity for theory-of-mind might therefore have evolved to take moral considerations into account. In other words, the actual sequence of cognitive processes taking place in people's minds might involve all sorts of irreducibly moral appraisals, but the best evolutionary explanation of this process might be that it generally serves to enable accurate prediction. (For an especially clear defense of this approach, see the commentary by Lombrozo & Uttich.)

What we have here is a quite interesting hypothesis, but it is hard to know exactly how one might put it to the test empirically. In essence, we are dealing with a conflict between two very different visions. One vision focuses specifically on the nature of people's capacity for theory-of-mind. It says that this capacity has a particular “purpose” or “function” – for example, to accurately predict and explain behavior – and the patterns of intuition under discussion here can be explained in terms of their tendency to fulfill that function. By contrast, the vision I develop in the target article emphasizes certain general principles governing human cognition as a whole. The claim, then, is that the patterns we find in people's theory-of-mind judgments are not best understood as fulfilling any kind of purpose that is specific to theory-of-mind. Rather, these patterns simply reflect certain perfectly general principles about the impact of moral judgment on human cognition.

Clearly, the debate between these two visions is not the sort of thing that could be settled by a single critical experiment. Nonetheless, it does seem that further studies can offer us some illumination here. The key thing to notice is that the theory advanced in the target article predicts that the effects found in theory-of-mind should also be found in other domains that have nothing to do with theory-of-mind or even with prediction and explanation. So we can test the theory by looking to these other domains and checking to see whether similar effects are found there. An initial step in that direction can be found in the commentaries from Egré and from Cova et al., both of which show an impact of moral judgment on the use of quantifiers like "many." If we continue to find effects of that basic type, we will gradually acquire stronger and stronger reasons to conclude that the effects under discussion here are best explained in terms of very general facts about the structure of human cognition.

R3.4. The cognitive basis of science

Suppose, then, that people's ordinary way of making sense of the world really is deeply different from what one finds in systematic science. A question now arises about how the emergence of systematic science could even have been possible. Given that science is itself a human invention, how could the methods of science have ended up diverging so substantially from the methods characteristic of ordinary human cognition?

Levy offers a fascinating answer to this question. He suggests that the solution lies in the social character of science. In other words, the solution is not that each individual scientist can somehow enter a kind of special psychological state that allows her to transcend the limitations of ordinary human cognition and put all of her moral views to one side. Rather, the key point is that scientific inquiry is pursued by a whole community of different individuals, each of whom holds a slightly different set of moral views, and that this community as a whole is able to engage in a kind of inquiry that no single person could follow through on her own.

This suggestion strikes me as a deeply intriguing and promising one, and it would be wonderful to put it to the test in further experimental studies. Ideally, one would want to bring scientists into the lab and look systematically at the factors that influence their judgments. Assuming that scientists show many of the same effects found in lay people (e.g., Mercier & Sperber, forthcoming), there is good reason to expect that the presence of a broader community would have a substantial impact on their ability to call into question their own initial intuitions.

R4. Conclusion

Replies like this one are governed by some peculiar expectations. The author is supposed to fend off all the commentators' objections and show that his or her original article was actually completely correct all along. But, of course, I don't actually believe anything like that. A number of the hypotheses I defended in the past were subsequently refuted by other researchers, and I am sure that many of the hypotheses I have defended here will meet with a similar fate. Accordingly, it might be best to conclude, not by summarizing the views I hold right now, but rather by saying a few words about where things might move in the future.

When I first started investigating the impact of moral judgments on intuitions about intentional action, I assumed that most of people's cognition was entirely non-moral, and I therefore introduced a series of ad hoc maneuvers to explain the new experimental results. That strategy turned out to be completely misguided. As researchers began uncovering more and more cases in which morality influenced people's intuitions, it became ever more clear that we needed a theory that offered a more abstract characterization of the impact of morality on people's cognition as a whole.

I suspect that we will actually have to move even farther in that direction. As a number of the commentators noted, it might be a mistake to look for some special place where moral considerations enter the picture. Instead, we might need to develop a view on which the mind makes little distinction between moral and non-moral factors, so that the very same theory that explains the impact of moral considerations also explains our ability to make apparently “scientific” use of purely statistical or descriptive information.

Footnotes

1. A quick note about the relevance of these data: The claim under discussion here is that judgments of counterfactual relevance play a role in intuitions about, e.g., causation. Hence, this claim yields the prediction that any factor that impacts judgments of counterfactual relevance should also impact intuitions about causation. In other words, if we uncover five different factors that influence judgments of counterfactual relevance, we should predict that all five of these factors influence causal intuitions, as well.

However, the claim does not also go the other way. We are not claiming that counterfactual thinking is the only thing that ever affects causal intuitions, so we are not claiming that every factor that influences causal intuitions must also influence counterfactual reasoning. On the contrary, as Menzies helpfully notes, a whole series of excellent studies have shown that people's causal intuitions can be influenced by factors that seem not to play a role in counterfactual thinking.

References

Alicke, M. (1992) Culpable causation. Journal of Personality and Social Psychology 63:368–78.
Alicke, M. (2008) Blaming badly. Journal of Cognition and Culture 8:179–86.
Guglielmo, S. & Malle, B. F. (in press) Can unintended side-effects be intentional? Resolving a controversy over intentionality and morality. Personality and Social Psychology Bulletin.
Hindriks, F. (2008) Intentional action and the praise-blame asymmetry. Philosophical Quarterly 58:630–41.
Hitchcock, C. & Knobe, J. (2009) Cause and norm. Journal of Philosophy 106(11):587–612.
Inbar, Y., Pizarro, D. A., Knobe, J. & Bloom, P. (2009) Disgust sensitivity predicts intuitive disapproval of gays. Emotion 9(3):435–39.
Knobe, J. (2007) Reason explanation in folk psychology. Midwest Studies in Philosophy 31:90–107.
McCloy, R. & Byrne, R. (2000) Counterfactual thinking about controllable events. Memory and Cognition 28:1071–78.
Mercier, H. & Sperber, D. (forthcoming) Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences.
Nadelhoffer, T. (2006a) Bad acts, blameworthy agents, and intentional actions: Some problems for jury impartiality. Philosophical Explorations 9:203–20.
N'gbala, A. & Branscombe, N. R. (1995) Mental simulation and causal attribution: When simulating an event does not affect fault assignment. Journal of Experimental Social Psychology 31:139–62.
Nichols, S. & Ulatowski, J. (2007) Intuitions and individual differences: The Knobe effect revisited. Mind and Language 22:346–65.
Pellizzoni, S., Girotto, V. & Surian, L. (2010) Beliefs and moral valence affect intentionality attributions: The case of side effects. Review of Philosophy and Psychology 1:201–209.
Phelan, M. & Sarkissian, H. (2008) The folk strike back; or, why you didn't do it intentionally, though it was bad and you knew it. Philosophical Studies 138(2):291–98.
Roxborough, C. & Cumby, J. (2009) Folk psychological concepts: Causation. Philosophical Psychology 22:205–13.
Sverdlik, S. (2004) Intentionality and moral judgments in commonsense thought about action. Journal of Theoretical and Philosophical Psychology 24:224–36.
Wright, J. C. & Bengson, J. (2009) Asymmetries in judgments of responsibility and intentional action. Mind and Language 24(1):24–50.
Young, L., Cushman, F., Adolphs, R., Tranel, D. & Hauser, M. (2006) Does emotion mediate the effect of an action's moral status on its intentional status? Neuropsychological evidence. Journal of Cognition and Culture 6:291–304.