We have no dispute with Knobe's description of the ways in which various accounts – including motivational bias and conversational pragmatics – of the experimental results he describes don't succeed, and how the competency approach he favours currently does better.
This said, it remains unclear precisely what Knobe's position is, because his exposition depends on analogies that are both underdeveloped and problematic. Consequently, the answers to two questions are not sufficiently clear. The first question is: What specific commitment regarding human cognition is being rejected? The second question is: What specific claim about human cognition is being defended?
Analogical reasoning transfers information from one object to another, non-identical one. For this to work well, the two objects need to have enough salient features or relationships in common. Disanalogies between paradigmatic features of the two objects impede the transfer. Niels Bohr, for example, explained his rejection of previous models of the atom partly by drawing an analogy with the solar system. Despite important differences between atoms and planetary systems, this was a good way of getting at a few key and, at the time, radical ideas: Atoms are mostly empty; very small parts of them move in approximately orbital paths around other, central ones.
Knobe offers two analogies for the view he is ostensibly rejecting. According to the first, the human mind works “something like a modern university” (sect. 1, para. 2). According to the second, which is an analogy within the first, some mental processes use the “same sorts of methods we find in university science departments” (sect. 1, para. 2). Except for relatively cryptic remarks on the ways disciplines are supposedly separated in universities, and some (also brief) remarks on science, Knobe develops neither analogy in significant detail.
What might it mean for the mind to be like a university? Knobe suggests that the organisation of a university corresponds to a set of distinctions between types of questions, so that the mind has something analogous to theology, art, philosophy, and some scientific departments. He goes on to argue that the mind is not like this. But the administrative organisation of universities into departments exists along with a patchwork of overlapping techniques, theories, problems, and collaborative research programmes cutting across departmental divisions. The analogy also doesn't do the work Knobe requires because some departments, such as those of history and politics, consider both factual and moral questions, just as art departments consider factual and aesthetic ones (not merely “is this painting good?” but also “is it genuine?”). Philosophy departments notoriously consider almost anything – these days they even do experiments.
The fact that the overall organisation of universities is not consistently or strictly modular need not be a big problem, since most of the heavy lifting is done by the second analogy, suggesting a view (the one to be rejected) where some mental processes use the same methods as scientists do. Unfortunately, though, there is no agreed-upon set of criteria that separates science from non-science, partly because there is no clear division between the methods of "science" and those of other enterprises. Philosophers of science have argued for generations without converging on consensus about what, if anything, demarcates science from pseudo-science and non-science. That this is so is reason to recognise that "like science" is not a promising explanatory analogy.
We suggest that neither analogy need be repaired, or even replaced. Instead, the claim at issue can be stated directly. Knobe gives us a clue when he says that "Genuinely scientific inquiry seems to be sensitive to a quite specific range of considerations and seems to take those considerations into account in a highly distinctive manner" (sect. 2.1, para. 5). We think it makes most sense to read this as saying that the "specific range of considerations" are epistemic considerations, which is to say ones strictly relevant to whether or not some claim is true. It is a good normative rule for truth seekers to avoid fallacies of relevance. One example of such a fallacy is an appeal to consequences. Saying that evolution by natural selection should be rejected because believing it (supposedly) leads to selfishness appeals to considerations which have no evidential value. Likewise, that an experimenter is very nice, or nasty, or eccentric, has no epistemic value as far as the empirical test of a hypothesis itself is concerned.
The phenomena to which Knobe draws our attention, and which his own empirical work has done a great deal to document, are all examples of fallacies of relevance, mostly in the attribution of credit for intention and causation. Whether someone caused something, intended it, or is responsible for it, depends on what they did and how that influenced the world. It does not depend on whether what happened is the sort of thing we would regard as morally objectionable. The fact that considerations relating to the moral value of the outcomes appear to affect judgements regarding what was intended, or caused, suggests that some of our mental processes are routinely prone to what, by responsible epistemic lights, are fallacies of relevance.
The general claim about human cognition that Knobe is rejecting, we therefore suggest, is one to the effect that the organisation of (human) cognition respects this normative standard, and that it does so by not allowing strictly irrelevant considerations to interact during processing. We already have ample evidence that the general claim is false, from, among other things, a long history of social psychology and behavioural economic experiments. Thorndike (1920), for example, showed that in assessments of other people, perceptions of some traits were more correlated with perceptions of other traits than should be the case if traits (such as attractiveness and competence) varied independently. What is exciting and surprising about the work Knobe reviews (and has been conducting himself) is that, from this point of view, it shows the persistent influence of moral reactions in judgements about matters where those reactions are irrelevant to truth.
It would be interesting, not to mention extremely important, to see whether the effects are reduced when people deliberate about causation and responsibility in organised groups charged with an epistemic task – for example, juries.