Why so low? A striking feature of the data reviewed by Barbey & Sloman (B&S) is that the percentage of participants achieving the correct answer in the Medical Diagnosis problem rarely exceeds 50% (e.g., Table 3 of the target article). Thus, whether the improvement is driven by presenting information as natural frequencies or by making nested set relations apparent, overall levels of performance remain remarkably low.
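The arithmetic the Medical Diagnosis problem demands can be made concrete with a short sketch. The figures below (1/1000 prevalence, 5% false-positive rate, perfect test sensitivity) are the classic illustrative values for this problem, assumed here for exposition rather than taken from any particular study reviewed by B&S; the point is that the probability format and the natural-frequency format describe the same computation.

```python
# Illustrative Bayes computation for a medical-diagnosis problem.
# Assumed figures (classic textbook values, not from any specific study):
prevalence = 1 / 1000   # P(disease)
sensitivity = 1.0       # P(positive | disease)
false_pos = 0.05        # P(positive | no disease)

# Probability format: apply Bayes' rule directly.
p_pos = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior = sensitivity * prevalence / p_pos

# Natural-frequency format: the same computation over counts of 1000 people.
sick = 1                      # 1 person in 1000 has the disease
healthy_pos = 0.05 * 999      # ~50 healthy people also test positive
posterior_freq = sick / (sick + healthy_pos)

print(round(posterior, 3), round(posterior_freq, 3))  # both ~0.02
```

Both routes give a posterior of roughly 2%, far below the intuitive answer of 95% that base-rate neglect produces; the natural-frequency version simply makes the nested sets (1 true positive inside ~51 positives inside 1000 people) explicit.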
Potential reasons for this low level of overall performance are not discussed adequately in the target article. Although B&S acknowledge in section 2.2 that “wide variability in the size of the effects makes it clear that in no sense do natural frequencies eliminate base-rate neglect” (para. 2), they fail to apply the same standard to their own proposal that “set-relation inducing formats” (be they natural frequencies or otherwise) facilitate a shift to a qualitatively different system of reasoning. The clear message of the article is that by presenting information appropriately, participants can “overcome their natural associative tendencies” (sect. 4, para. 3) and employ a reasoning system that applies rules to solve problems. Why does this system remain inaccessible to half of the participants in the studies reviewed? Is the rule system engaged but the wrong rules applied (e.g., Brase et al. Reference Brase, Fiddick and Harries2006, Experiment 1)? Or do these participants remain oblivious to the nested set relations and persevere with “inferior” associative strategies?
B&S cite evidence from studies of syllogistic reasoning, deductive reasoning, and other types of probability judgment in support of their contention that nested set improvements are domain-general. In these other tasks, however, the improvements are considerably more dramatic than in the base-rate studies (e.g., Newstead [Reference Newstead1989] found a reduction in errors from 90% to 5% for syllogisms with Euler circle representations). The contrast between these large improvements in other domains and the modest ones in the base-rate neglect problems sits uncomfortably in a dual-process framework. Why doesn't the rule-based system overcome associative tendencies in similar ways across different tasks? In essence, the issue is one of specification – for the notion of dual systems to have explanatory value, one needs to be able to specify what aspects of a problem make it amenable to being solved by a particular system. Why not simply appeal to “difficulty” and construct a taxonomy or continuum of tasks differentially affected by various manipulations (diagrams, incentives, numerical format)? Such a framework would not require recourse to dual processes or the vague rules of operation and conditions for transition that duality entails.
Incentives to “shift” systems? B&S report evidence from a study by Brase et al. (Reference Brase, Fiddick and Harries2006) in which monetary incentives improved performance on a base-rate problem. These data might be useful in gaining a clearer understanding of the factors that induce a shift between systems. It is difficult to make clear predictions, but under one interpretation of the dual-process position, one might expect incentives to have a larger effect in problems for which the set relations are apparent. The idea is that when the representation of information prevents (the majority of) people from engaging the rule-based system (e.g., when probabilities are used), no amount of incentive will help – most people simply won't “get it.” A simple test of this hypothesis would be to compare probability and frequency incentive conditions. Brase et al. did not do this, comparing instead natural frequency conditions with and without additional pictorial representations. One would assume that the pictorial representations enhance nested set relations (target article, sect. 2.5) and increase the likelihood of a shift to the rule-based system; hence, incentives should be more effective in the pictorial condition than in the non-pictorial condition. Brase et al.'s findings were telling: there was a main effect of incentives, but this did not interact with the presence or absence of the picture; indeed, there was no main effect of providing a pictorial representation.
Two processes or two kinds of experience? In evaluating the B&S account, we believe it is useful to consider some of the lessons learned from the study of base-rate respect and neglect in category learning. In these studies, people learn to discriminate between the exemplars of categories that differ in their frequency of presentation. The question is whether this base-rate information is used appropriately in subsequent categorization decisions, with features from more common categories being given greater weight. The results have been mixed. People can use base-rate information adaptively (Kruschke Reference Kruschke1996), ignore it (Gluck & Bower Reference Gluck and Bower1988), or show an inverse base-rate effect, giving more weight to features from less frequent categories (Medin & Edelson Reference Medin and Edelson1988). Note that the issue of information format does not arise here, as all learning involves assessments of feature frequency. Critically, one does not need to invoke multiple processes to explain these results. Kruschke (Reference Kruschke1996) has shown that sensitivity and insensitivity to category base rates can be predicted by a unitary learning model that takes account of the order in which different categories are learned, and allows for shifts of attention to critical features. In brief, people neglect category base rates only when their attention is drawn to highly distinctive features in the less frequent category. The moral here is that before we resort to dual-process explanations of base-rate respect and neglect, we should first consider explanations based on the way that general learning mechanisms interact with given data structures.
Conclusion. B&S provide a very useful overview of the base-rate neglect literature and convincing arguments for questioning many of the popular accounts of the basic phenomena. The nested sets hypothesis is a sensible and powerful explanatory framework; however, incorporating the hypothesis into the overly vague dual-process model seems unnecessary.