Elqayam & Evans (E&E) outline an argument that, if correct, would fatally undermine the research programme of the rational analysis of reasoning, in which we and colleagues have been closely involved (e.g., Chater & Oaksford Reference Chater and Oaksford1999; Hahn & Oaksford Reference Hahn and Oaksford2007; Oaksford & Chater Reference Oaksford and Chater1994; Reference Oaksford and Chater1998a; Reference Oaksford and Chater1998b; Reference Oaksford and Chater2007; Oaksford et al. Reference Oaksford, Chater and Larkin2000). But it would apply equally to models in behavioural ecology (Krebs & Davies Reference Krebs and Davies1996), adaptive explanation in evolutionary biology (Sober Reference Sober1993), ideal observer models in perception (Blake et al. Reference Blake, Bulthoff, Sheinberg, Knill and Richards1996), Bayesian cognitive science (Chater et al. Reference Chater, Tenenbaum and Yuille2006), and rational choice theory and the whole of microeconomics (Kreps Reference Kreps1992).
What these explanations have in common is that they harness normative theories to explain descriptive facts. The structure of the eye is explained by the fact that it forms clear images of the environment; the foraging patterns of a bee are explained as maximizing food intake; a person's “information foraging” is explained as maximizing the amount of information acquired (Nelson Reference Nelson2005; Oaksford & Chater Reference Oaksford and Chater1994; Pirolli Reference Pirolli2007). But it should be immediately clear that E&E's application of the is-ought fallacy appears itself to be fallacious. Such explanations do not derive ought conclusions (which the is-ought fallacy forbids), but is conclusions: they attempt to explain facts about eyes, bees, or people.
How, then, do norms enter into rational explanations? Anderson (Reference Anderson1990; Reference Anderson1991) provides an elegant account, in the context of psychological processes (see Oaksford & Chater Reference Oaksford and Chater2007); a toy computational sketch follows the steps below:
Step 1. Specify precisely the goals of the cognitive system.
Step 2. Develop a formal model of the environment to which the system is adapted.
Step 3. Make minimal assumptions about computational limitations.
Step 4. Derive the optimal behaviour function, given 1–3 above. (This requires formal analysis using rational norms, such as probability theory and decision theory.)
Step 5. Examine the empirical evidence to see whether the predictions of the behaviour function are confirmed.
Step 6. Repeat, iteratively refining the theory.
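To make the division of labour among these steps concrete, consider a minimal sketch in Python. Everything in it is an illustrative assumption of ours, not part of Anderson's scheme: a toy binary prediction task, a Bernoulli environment, and simulated choice data.

```python
import random

# Toy rational analysis of a binary prediction task (hypothetical example).
# Step 1 (goal): maximize the expected number of correct predictions.
# Step 2 (environment): outcomes are i.i.d. Bernoulli, P(outcome = 1) = p.
# Step 3 (computational limitations): none assumed, for simplicity.

def optimal_behaviour(p):
    """Step 4: derive the optimal behaviour function. Given the goal and
    environment above, always predicting the more probable outcome
    maximizes expected accuracy -- a matter of fact, not an evaluation."""
    return 1 if p >= 0.5 else 0

def agreement_with_data(p, observed_choices):
    """Step 5: compare the optimal behaviour function with the data."""
    prediction = optimal_behaviour(p)
    return sum(c == prediction for c in observed_choices) / len(observed_choices)

# Step 6 would iterate: if agreement is poor, revisit Steps 1-3
# (a different goal or environment model), not the norms used in Step 4.
random.seed(0)
simulated = [1 if random.random() < 0.7 else 0 for _ in range(100)]
print(f"agreement with optimal behaviour: {agreement_with_data(0.7, simulated):.2f}")
```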
Note that norms, such as those from probability theory or decision theory, enter only at Step 4: they help derive optimal behaviour, given the specification of goals, environment, and computational limitations. No ought is hidden here either. Given that Steps 1 to 3 specify a well-defined problem (which is required, or else an optimal solution to the problem will be ill-defined), the optimal solution (if there is one) is a matter of fact, not evaluation: an is, not an ought. Take the familiar example of the travelling salesman problem: if the goal is to visit all towns by the shortest possible route, and the map is such and such, then it is a matter of fact (not an evaluation) that the optimal route is thus and so.
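The point can be made executable. In the sketch below (the four-town map is an assumed toy example), the program simply enumerates tours; that the returned tour is shortest is a fact about the problem specification, not an evaluation of it.

```python
from itertools import permutations
from math import dist

# Assumed toy "map": town names and their (x, y) coordinates.
towns = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4)}

def tour_length(order):
    """Total length of the closed tour visiting the towns in this order."""
    legs = list(order) + [order[0]]  # return to the starting town
    return sum(dist(towns[a], towns[b]) for a, b in zip(legs, legs[1:]))

# Brute-force search over all tours: feasible only for tiny maps,
# but it makes vivid that the optimum is fixed by goal + map alone.
best = min(permutations(towns), key=tour_length)
print(best, tour_length(best))
```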
Now E&E might object that the choice of norms to solve the problem at Step 4 can be challenged: Are there not competing normative theories? We suggest, by contrast, that Step 4 must always be well-defined (for the rational explanation to be viable), but that the assumptions that go into Steps 1 to 3 can be challenged. Thus, in explaining behaviour in Wason's (Reference Wason1968) selection task, accounts differ concerning whether people are optimizing information gain or some measure of “utility” (i.e., there are differences over Step 1) (Oaksford & Chater Reference Oaksford and Chater1994); and theories can differ in their assumptions about environmental structure (Step 2) (Klauer Reference Klauer1999; Oaksford & Chater Reference Oaksford and Chater2003). Theories do not differ about, for example, the axioms of probability theory.
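For illustration, here is a simplified computation in the spirit of the optimal data selection account (Oaksford & Chater Reference Oaksford and Chater1994). The details are our simplifying assumptions: two hypotheses (dependence vs. independence), an exception-free rule under dependence, equal priors, and the particular parameter values. With “rare” antecedents and consequents, the p and q cards carry the greatest expected information gain, matching typical selections.

```python
from math import log2

def entropy(ps):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * log2(p) for p in ps if p > 0)

def joint(model, a, b):
    """Joint distribution over (p, q) truth values under each hypothesis.
    Dependence: the rule 'if p then q' holds without exception.
    Independence: p and q are statistically unrelated.
    a = P(p), b = P(q); the dependence model requires b >= a."""
    if model == "dependence":
        return {(1, 1): a, (1, 0): 0.0, (0, 1): b - a, (0, 0): 1 - b}
    return {(1, 1): a * b, (1, 0): a * (1 - b),
            (0, 1): (1 - a) * b, (0, 0): (1 - a) * (1 - b)}

def expected_info_gain(card, a, b, prior=0.5):
    """Expected reduction in uncertainty about which hypothesis is true
    from turning one card ('p', 'not-p', 'q', or 'not-q')."""
    models = {"dependence": prior, "independence": 1 - prior}
    visible = {"p": ("p", 1), "not-p": ("p", 0),
               "q": ("q", 1), "not-q": ("q", 0)}
    face, value = visible[card]
    prior_h = entropy(models.values())
    gain = 0.0
    for hidden in (1, 0):  # the possible values on the card's other side
        likes = {}
        for m in models:  # P(hidden side | visible side, model), via the joint
            j = joint(m, a, b)
            if face == "p":
                marg = sum(v for (pp, _), v in j.items() if pp == value)
                cell = j[(value, hidden)]
            else:
                marg = sum(v for (_, qq), v in j.items() if qq == value)
                cell = j[(hidden, value)]
            likes[m] = cell / marg if marg > 0 else 0.0
        p_outcome = sum(models[m] * likes[m] for m in models)
        if p_outcome > 0:
            posterior = [models[m] * likes[m] / p_outcome for m in models]
            gain += p_outcome * (prior_h - entropy(posterior))
    return gain

# Assumed 'rarity' parameters: P(p) = 0.1, P(q) = 0.2.
for card in ("p", "not-p", "q", "not-q"):
    print(card, round(expected_info_gain(card, a=0.1, b=0.2), 3))
```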
Perhaps individual rational explanations are free of E&E's charge; but might the rhetoric of rational analysis of reasoning fall into this trap? E&E suggest that it may, proposing that the following argument (which they attribute to us in their section 5.1, para. 1) commits the “is-ought” fallacy:
(1) Premise 1: People behave in a way that approximates Bayesian probability (“is”)
Premise 2: This behavior is successfully adaptive (“is”)
Conclusion: Therefore, Bayesian probability is the appropriate normative system (“ought”)
This would indeed be a fatally flawed argument but, pace E&E, it is one that no proponent of rational analysis, including us, has ever proposed. Our work has been completely explicit that the normative basis of, for example, Bayesian inference is consistency arguments, such as Dutch book theorems (e.g., Chater & Oaksford, in press; Oaksford & Chater Reference Oaksford and Chater2007).
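For concreteness, here is a minimal numeric sketch of such a consistency argument (the degrees of belief and stakes are invented for illustration): an agent whose beliefs violate additivity will accept a pair of bets that together guarantee a loss, come what may.

```python
# Hypothetical incoherent degrees of belief: P(A) + P(not-A) = 1.3 > 1.
p_A, p_not_A = 0.7, 0.6

# The agent regards a bet paying 1 if X occurs as fair at price P(X),
# so it will buy both bets at those prices.
cost = p_A + p_not_A      # total price paid: 1.3
payout = 1.0              # exactly one of A, not-A occurs, so one bet pays 1

print(f"guaranteed net outcome: {payout - cost:+.2f}")  # -0.30, come what may
```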
We suspect that, in considering the above argument, E&E are conflating the conclusion that Bayesian probability is the appropriate normative theory with the conclusion that a rational analysis, using Bayesian methods at Step 4, is the best descriptive theory. Rational analyses, like other scientific theories, are chosen by their fit to the data.
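Schematically (the counts and predicted probabilities below are invented), competing rational analyses can be compared like any other models, by the likelihood they assign to observed selection frequencies:

```python
from math import log

# Hypothetical data: of 100 participants, how many selected each card,
# plus the selection probabilities predicted by two candidate analyses.
observed = {"p": 89, "not-p": 16, "q": 62, "not-q": 25}
model_A  = {"p": 0.85, "not-p": 0.10, "q": 0.60, "not-q": 0.30}
model_B  = {"p": 0.95, "not-p": 0.45, "q": 0.30, "not-q": 0.80}

def log_likelihood(model, counts, n=100):
    """Binomial log-likelihood, treating each card as an independent
    select/ignore decision across n participants."""
    return sum(k * log(model[c]) + (n - k) * log(1 - model[c])
               for c, k in counts.items())

for name, model in (("model_A", model_A), ("model_B", model_B)):
    print(name, round(log_likelihood(model, observed), 1))
```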
Such issues are crucial in the psychology of reasoning: consider building a rational analysis of conditional reasoning with verbal materials (i.e., statements such as if A, then B; not-B; and so on). Specifying the goal of reasoning (Step 1) may, for example, be crucial: Is the aim to pick out conclusions that have a high probability, given the premises? Or is it to pick out only conclusions that are definitely true, given the premises? How do we interpret the materials that constitute the “environment” over which reasoning must occur (Step 2)? Do people interpret if…, then… as a material conditional (as in propositional logic), interpret it as an assertion about conditional probability (Edgington Reference Edgington1995; Ramsey Reference Ramsey1931), or adopt one of many other possible interpretations? Different assumptions will lead to different rational analyses – as with any other type of scientific theory, competing accounts must be adjudicated, primarily by their compatibility with the empirical data. Note, crucially, however, that such assumptions are about facts: what is the goal of a person's reasoning; how do people interpret the conditional. They are not normative evaluations, such as what should be the goal of reasoning or how people should interpret the conditional. Such questions, while interesting, are not part of the scientific project of rational analysis.
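The contrast between interpretations is easy to make concrete. On the toy joint distribution below (the probabilities are assumed for illustration), the material conditional and the conditional-probability readings assign very different probabilities to if A, then B, especially when A is rare:

```python
# Assumed joint distribution over truth values of (A, B).
joint = {(1, 1): 0.09, (1, 0): 0.01, (0, 1): 0.18, (0, 0): 0.72}

# Material conditional: 'if A then B' is false only when A holds and B fails.
p_material = sum(v for (a, b), v in joint.items() if not (a == 1 and b == 0))

# Conditional-probability reading (Ramsey/Edgington): P(B | A).
p_A = sum(v for (a, b), v in joint.items() if a == 1)
p_conditional = joint[(1, 1)] / p_A

print(f"material conditional:    {p_material:.2f}")    # 0.99
print(f"conditional probability: {p_conditional:.2f}")  # 0.90
```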
We conclude, overall, that E&E's injunction that we never infer an ought from an is is entirely correct; and entirely consistent with the program of rational explanation of cognition, and more generally with optimality explanations across the biological and social sciences.