Normative issues have the potential to bedevil our field (the study of thinking) and Elqayam & Evans (E&E) have done us a great service in laying bare many of the problematic consequences of taking normative theories too seriously. Here, we ask whether normativism has been uniformly harmful, whether the end of normativism is really nigh, and whether the antidote proposed by E&E may do more harm than good.
We are not as alarmed about normativism as are E&E, many of whose arguments concern the psychology of deductive reasoning, and conditionals in particular, where the problem of multiple norms seems to be very acute. However, there are other areas in the study of high-level cognition (for summaries, see Feeney & Heit 2007; Murphy 2002) where normativism has the potential to be equally problematic but descriptivism has held sway.
Even in the areas on which E&E focus, normativism has not been uniformly disastrous. We do find it plausible that there are entire literatures which would not exist were it not for normative considerations. For instance, it is unlikely that anything resembling the actual literature on base rate neglect would exist had there not been a preoccupation with Bayesian norms in the 1960s and 1970s (e.g., Kahneman & Tversky 1973; Peterson & Beach 1967). However, the gap between the normative response and what people actually do in base rate neglect experiments has inspired very important findings about the difficulties people encounter in representing statistical information. For example, we now know how much the way a problem is described matters in facilitating people's recognition of the set relations underlying statistical problems (see Barbey & Sloman 2007; Evans et al. 2000; Girotto & Gonzalez 2001). Extremely interesting claims about the importance of causal models in statistical reasoning have also been made on the basis of experiments using the base rates paradigm (Krynski & Tenenbaum 2007). We know that people tend to rely more on base rate statistics acquired via experience than on those given to them by the experimenter (Gigerenzer et al. 1988), and the study of base rate neglect has greatly increased our understanding of the role of inhibitory control in thinking (De Neys & Glumicic 2008). None of this work seems to have been carried out in an evaluative spirit, although each of the researchers coded their participants' responses in the standard, normatively determined way. Despite this, all of these studies can fairly be described as having contributed to our understanding of psychological processes. So even in the very select range of domains considered by E&E, normativism has had varied consequences. These range from literatures almost coming to a standstill – as seems to be the case with the literature on Wason's selection task – to the continued productive use of a paradigm whose invention was rooted in Kahneman and Tversky's goal of showing that a particular normative theory is an inadequate psychological account.
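To make the normative benchmark concrete, consider the kind of diagnosis problem that dominates this literature (the figures below are purely illustrative rather than drawn from any particular study). Given a base rate P(H) = .01, a hit rate P(D | H) = .80, and a false-positive rate P(D | ¬H) = .096, Bayes' theorem prescribes

\[
P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)} \;=\; \frac{.80 \times .01}{(.80 \times .01) + (.096 \times .99)} \;\approx\; .078 .
\]

Participants who neglect the base rate typically give answers close to the hit rate instead, and it is precisely this discrepancy that the work cited above set out to explain rather than merely to condemn.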
By alluding to areas in the study of high-level cognition, such as inductive reasoning and categorisation, where descriptivism rules, we do not mean to suggest that normativism does not have the potential to be perilous. Oaksford and Chater (2007), in their Bayesian analysis of reasoning, have been concerned both with deciding on the most appropriate norm and with the psychological mechanisms that might approximate that norm. Unfortunately, Bayesian analyses in other domains of high-level cognition (for a review, see Jones & Love 2011) have not paid as much attention to mechanism. It is true that some of these analyses are pitched at the descriptive level (see Krynski & Tenenbaum [2007] on causal models and base rate neglect), but many others work at the computational level (e.g., Kemp & Tenenbaum 2009). As Sloman (2007) has pointed out, computational Bayesian models also work as normative models, whether or not they are described in such terms by their creators. This is because implicit in this type of computational model is the claim that there is a single Bayesian account of a particular type of thinking. No doubt inspired by this insight, Fernbach et al. (2011) have recently described a normative model of causal inductive reasoning based on causal Bayes nets and shown that when people reason predictively, from cause to effect, their inferences do not conform to the prescriptions of the model. This is a very important demonstration for those of us who work on inductive reasoning; but it also feels as if history might be beginning to repeat itself, and rather than being at the end of normativism, we may be about to see another battle in a war that seems unlikely to end any time soon.
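For readers outside the inductive reasoning literature, a minimal sketch of the kind of benchmark involved may help; the notation is ours, and it simplifies the noisy-OR parameterisation standardly used with causal Bayes nets rather than reproducing Fernbach et al.'s own model. With a focal cause C of strength w_C and alternative causes of aggregate strength w_A, the normative predictive probability of the effect E is

\[
P(E \mid C) \;=\; 1 - (1 - w_C)(1 - w_A),
\]

so it can never fall below the contribution of the alternative causes. Predictive judgements that track w_C alone, neglecting alternative causes, will therefore systematically undershoot this benchmark, which is roughly the kind of deviation at issue in that demonstration.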
Finally, E&E suggest in a number of places in the target article that we should focus on expert reasoning and on how it is acquired. We see several problems with this as an agenda for our field. First, the cognitive biases seen in experts (defined, of course, with reference to some normative theory) are the same as those seen in naïve reasoners (see Bornstein & Emler 2001), so there may be very little to be gained from the exclusive study of experts. Of course, one could study how expert reasoners become expert; but if experts display the same biases as naïve reasoners, then intervention is clearly required, and that necessitates debate about norms. It seems to us that this debate will happen even if the goal of a meliorist intervention is instrumental rationality, because in a domain where complex statistical thinking is required, experts may have to be taught how to approximate a norm in order to attain their goals. However, perhaps the most serious problem with our field abandoning the study of naïve individuals is that it would drastically reduce our contribution to basic psychological science. Thinking is central to what it means to be human, and if E&E are correct that the old paradigm doesn't work, then we must find ways to usefully study how naïve and expert participants choose, make judgements, and reason.
ACKNOWLEDGMENT
Simon McNair is funded by a Ph.D. award from the Department of Education and Learning, Northern Ireland.