Knobe's home university must be a remarkable place if, as he suggests, scientists there “typically leave [moral] questions to one side” (sect. 6, para. 2). In the wider world, science is nothing if not a moral enterprise. At the very least, scientists make a public commitment to tell the truth, to respect the rules of argument, to make their arguments open to refutation, not to cheat, and so on. Think of a scientist engaged in peer-reviewing a colleague's work: he or she is probably using the “ethical circuits” in his or her brain in much the same way as a judge at a criminal trial. Contrary to the picture Knobe paints, I would say science is an approach to the world that could only have been developed by humans who were already constantly aware of right and wrong.
Persons as scientists ought to be moral. But persons as moralists ought to be scientific, too. Knobe claims that his experimental studies show that when people are morally engaged, they begin to think “unscientifically.” Yet it can be argued, on the evidence of his own experiments, that the opposite is true.
Let's consider the chairman study. Subjects are asked to judge what the chairman's intentions were. But it is important to note that, since subjects have only limited access to the facts, the best they can do is make an informed guess. What Knobe then finds is that they guess, on the one hand, that the chairman intended to harm the environment but, on the other, that he did not intend to help it. So, either way, they guess the chairman's intentions were reprehensible. But isn't this exactly what we might expect if the subjects are rational guessers who have, as it happens, been given prior reason to believe that the chairman is a bad man?
Knobe himself comes close to saying as much in the last paragraph of section 5.2 when he says that “before people even begin considering what actually happened […] they make a judgement about what sort of attitude an agent could be expected to hold.” However, what he does not seem to realise is that this is a thoroughly scientific approach. Philosophers of science widely agree that the best procedure under conditions of uncertainty is to reason as a Bayesian: calculate the probability of a particular outcome on the basis of prior knowledge (see the discussion in Pigliucci 2010).
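To make the point concrete, here is a minimal sketch of the inference that the “rational guesser” reading attributes to subjects. The numbers are entirely illustrative, chosen by me and not drawn from Knobe's data: a character prior inferred from the vignette is combined, by the law of total probability, with conditional probabilities of intent given character.

```python
# A minimal sketch of the "rational guesser" reading of the chairman study.
# All numbers are illustrative assumptions, not data from Knobe's experiments.

p_bad = 0.9              # prior that the chairman is a bad man, given the vignette
p_harm_given_bad = 0.8   # a bad man is likely to intend a harmful side effect
p_harm_given_good = 0.1
p_help_given_bad = 0.1   # and unlikely to intend a helpful one
p_help_given_good = 0.8

# Law of total probability over the two character hypotheses.
p_intend_harm = p_harm_given_bad * p_bad + p_harm_given_good * (1 - p_bad)
p_intend_help = p_help_given_bad * p_bad + p_help_given_good * (1 - p_bad)

print(f"P(intended the harm) = {p_intend_harm:.2f}")  # 0.73
print(f"P(intended the help) = {p_intend_help:.2f}")  # 0.17
```

On these stipulated numbers, a rational guesser affirms the harmful intention and withholds the helpful one, which is just the asymmetry Knobe reports.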
True enough, ordinary people as scientists are not equally attentive to all kinds of prior information. And when it comes to predicting the behaviour of others, there is no question that morally relevant information takes pride of place. In particular, as Cosmides and Tooby have shown, people tend to be on the alert for any evidence that another person has deliberately broken a social contract. Moreover, if and when people suspect this, they begin to think all the more rationally (see, e.g., Cosmides et al. 2010). Now, the evolved “cheater-detection mechanism,” which Cosmides and Tooby have identified, would certainly be activated by news about the chairman who does not pull his weight in protecting the environment. We might, therefore, expect subjects in the experiment to be thinking particularly clearly about intentionality, causation, and so on.
No doubt the cheater-detection module plays a key role too when scientists review each other's scientific work – which is why we all do it so well. (What's that motto at Yale, where Knobe comes from? Lux et Veritas – “Light and Truth.”)