In this elegant and provocative article, Knobe summarizes a growing body of work suggesting that moral considerations influence a range of “non-moral” judgments, from mental state ascriptions to causal ratings. Knobe offers two interpretations for these data: (1) his preferred view of people as “moralists,” and (2) the traditional position of people as intuitive “scientists,” albeit poor ones subject to moral biases. We unpack these options using Marr's levels of analysis, and suggest at least one viable alternative, which we call the “rational scientist” position.
In Knobe's “person as moralist” position, “moral considerations actually figure in the competencies people use to make sense of human beings and their actions” (sect. 1, para. 7, emphasis added). In contrast, the “person as scientist” position claims that the “fundamental” capacities underlying these judgments are analogous to processes in scientific inquiry (sect. 2.2, para. 2). Both positions, as laid out by Knobe, involve a distinction between the “fundamental” or “primary” aspects of a cognitive system and those that are “secondary.” Knobe suggests that to account for the data, the scientist approach must claim that moral considerations play a secondary role, biasing judgments that are fundamentally scientific.
Examining these positions in terms of Marr's levels of analysis (Marr 1982) reveals two different questions at play: one at the computational level, about the function of the cognitive system in question, and one at the algorithmic level, about the representations and processes that carry out that computation. For an advocate of the moralist position, the computational-level description of a cognitive system appeals to a “moralizing” function (perhaps evaluating people and their actions), and the algorithmic level is merely doing its job. For an advocate of the “biased” scientist position that Knobe considers, the computational-level description appeals to a scientific function (perhaps predicting and explaining people's actions), but the algorithmic level is buggy, with moral considerations biasing judgments.
This leaves two additional options (see Table 1). First is the “biased moralist” position, with a “moralizing” function at the computational level, but a buggy algorithm. Without a fuller computational-level analysis that provides a normative account of the judgments the algorithmic level should generate, this position is hard to distinguish from the “non-biased” moralist.
Table 1. Four possible positions to account for the data Knobe cites demonstrating an influence of moral considerations on non-moral judgments, such as mental state ascriptions and causal ratings. The positions are expressed in terms of Marr's levels of analysis, with one of two computational-level functions, and algorithms that generate the judgments they do either as a result of their computational-level functions (non-buggy) or because they are biased by other (e.g., moral) considerations (buggy).

                         Non-buggy algorithm     Buggy algorithm
Moralizing function      Moralist                Biased moralist
Scientific function      Rational scientist      Biased scientist
Second is the “rational scientist” position, which we advocate for some cognitive systems (Uttich & Lombrozo 2010). According to this position, a given cognitive system has a scientific function at the computational level, and the algorithm is just doing its job. To account for the slew of data Knobe cites, an advocate for this position must explain how moral considerations can influence judgments without threatening claims about the system's function (at the computational level) or the efficacy of the processes that carry out that function (at the algorithmic level).
In a recent paper (Uttich & Lombrozo 2010), we attempt precisely this for ascriptions of intentional action. The cognitive system in question, broadly speaking, is theory of mind: the capacity to ascribe mental states to others. Traditionally, this capacity has been conceptualized as analogous to a scientific theory, with the function of predicting, explaining, and controlling behavior. At the computational level, this puts the traditional picture in the “scientific” camp. But what are the implications for the role of moral considerations in carrying out this function? Knobe seems to assume that moral considerations have no legitimate role in this picture. But we argue the reverse: that accurately inferring mental states can in fact require sensitivity to moral considerations, particularly whether a behavior conforms to or violates moral norms.
Here, in brief, is our argument. Norms – moral or conventional – provide reasons to act in accordance with those norms. For example, a norm to tip cab drivers provides a reason to do so. Observing someone conform to this norm is relatively uninformative: We can typically infer knowledge of the norm, but not necessarily a personal desire to provide additional payment. In contrast, norm-violating behavior can be quite informative, particularly when other mental-state information is lacking. If we believe a person knows the norm, then observing that person fail to tip a driver suggests an underlying preference, desire, or constraint that is strong enough to outweigh the reason to conform. This same logic applies to Knobe's chairman vignettes (sect. 3.1). When the side effect of the chairman's actions helps the environment, he is conforming to a norm, and the action is relatively uninformative about his underlying mental states. When he proceeds with a plan that causes environmental harm, the action is norm violating, and allows us to infer underlying mental states that support an ascription of intentional action.
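The informational asymmetry between norm-conforming and norm-violating behavior can be made concrete with a toy Bayesian sketch (an illustration added here for clarity; the specific probabilities are arbitrary assumptions, chosen only for demonstration, and are not drawn from the target article). An observer infers whether an agent holds a strong counter-normative desire (say, a desire to avoid tipping) from the agent's behavior. Because the norm gives everyone a reason to conform, conforming is likely whether or not the agent has that desire, whereas violating is likely mainly given the desire:

```python
# Toy Bayesian illustration of why norm violations are more informative
# about mental states than norm conformity. All numbers are assumed.

def posterior_desire(prior, p_behavior_given_desire, p_behavior_given_no_desire):
    """Bayes' rule: P(desire | observed behavior)."""
    joint_desire = prior * p_behavior_given_desire
    joint_no_desire = (1 - prior) * p_behavior_given_no_desire
    return joint_desire / (joint_desire + joint_no_desire)

prior = 0.1  # assume few agents harbor the counter-normative desire

# The norm supplies a reason to conform, so conforming is probable either
# way; violating is probable mainly for agents with the strong desire.
p_conform_given_desire = 0.3
p_conform_given_no_desire = 0.95

p_desire_if_conforms = posterior_desire(
    prior, p_conform_given_desire, p_conform_given_no_desire)
p_desire_if_violates = posterior_desire(
    prior, 1 - p_conform_given_desire, 1 - p_conform_given_no_desire)

print(f"P(desire | conforms) = {p_desire_if_conforms:.2f}")  # 0.03, near the prior
print(f"P(desire | violates) = {p_desire_if_violates:.2f}")  # 0.61, far above the prior
```

Under these assumed numbers, observing conformity barely moves the observer's estimate, while observing a violation multiplies it roughly sixfold: the violation licenses a confident mental-state ascription, the conformity does not.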
Our aim here is not to elaborate and marshal evidence for this position; we direct interested readers to Uttich and Lombrozo (2010). Rather, we hope to populate the space of possible positions and call attention to what seem to be distinct computational- and algorithmic-level assumptions lurking in the background of Knobe's target article. Knobe argues against various versions of the “biased scientist” position, but does not consider the “rational scientist” position. Like the two “moralist” positions, the biased and the rational scientist positions can be difficult to distinguish, and require a more fully specified computational-level description with a corresponding normative theory to identify which judgments stem from buggy versus non-buggy algorithms.
Knobe infuses normativity into folk considerations, painting a picture of people as moralists. But distinguishing the four positions we identify (Table 1) may actually require appeals to normativity in the generation and evaluation of empirically testable theoretical claims. In other words, we must appeal to normativity as theorists, regardless of whether or how we do so as folk. We suspect that Knobe avoids this framing as a side effect of other commitments and a preference for process-level theorizing. Whether or not it was intentional, we think it is a mistake to collapse computational and algorithmic questions. We hope future debate can restore normative questions to their proper place in scientific theorizing, whether the folk are ultimately judged scientists or moralists.