Barbey & Sloman (B&S) claim that an associative system is responsible for errors in a range of probabilistic judgments. Although this is plausible in the case of the conjunction fallacy (where a similarity-based judgment substitutes for a probability judgment), it is less applicable to the Medical Diagnosis base-rate problem. What are the automatic associative processes that are supposed to drive incorrect responses in this case? Respondents reach incorrect solutions in a variety of ways (Brase et al. 2006; Eddy 1982), many of which involve explicit computations. Indeed, the modal answer is often equal to one minus the false positive rate (e.g., 95% when the false positive rate is 5%). This clearly involves an explicit calculation, not the output of an implicit process. Thus, errors can arise from incorrect application of rules (or application of incorrect rules), rather than just brute association.
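To see the contrast in full, assume for illustration the standard version of the problem (prevalence 1 in 1,000, a 5% false positive rate, and, for simplicity, perfect sensitivity; the exact figures are our assumption). The modal respondent computes

\[
P(\text{disease} \mid \text{positive}) = 1 - 0.05 = 0.95,
\]

an explicitly calculated (if wrong) answer, whereas Bayes' theorem gives

\[
P(\text{disease} \mid \text{positive}) = \frac{0.001 \times 1}{0.001 \times 1 + 0.999 \times 0.05} \approx 0.02.
\]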
The key point here is that base-rate neglect in the Medical Diagnosis problem provides little evidence for the exclusive operation of an implicit associative system. Indeed, it is arguable that adherents of classical statistics are guilty of similar base-rate neglect in their reliance on likelihood ratios (Howson & Urbach 2006). Presumably this is not due to an implicit associative system, but is based on explicit rules and assumptions.
What about the claim that the rule-based system is responsible for correct responses in frequency-based versions of the task? This hinges on the idea that representing the task in a frequency format alerts people to the relevant nested set relations, and thus permits the operation of the rule-based system. In one sense, this is trivially true – those participants who reach the correct answer can always be described as applying appropriate rules. But what are these rules? And what is the evidence for their use, as opposed to associative processes that could also yield a correct response?
B&S explicitly block one obvious answer – that once people are presented with information in a format that reveals the nested set structure, they use a simplified version of Bayes' rule to compute the final solution. The authors' reasons for rejecting this answer, however, are unconvincing. The cited regression analyses (Evans et al. 2002; Griffin & Buehler 1999) were performed on a different task. And they were computed on grouped data, so it is possible that those who answered correctly did weight information equally. Furthermore, it is wrong to assume that a Bayesian position requires equal weighting – in fact, a full Bayesian treatment would allow differential weights according to the judged reliability of the sources.
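For concreteness, the simplified rule in question can be stated in natural frequencies (again assuming the standard parameters above): of 1,000 patients, 1 has the disease and tests positive, while about 50 of the 999 healthy patients also test positive, so

\[
P(\text{disease} \mid \text{positive}) \approx \frac{1}{1 + 50} \approx 2\%.
\]

The open question is whether correct responders actually execute this computation, or arrive at the same answer by other means.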
More pertinently, if people are not using the frequency version of Bayes' rule, what are they doing? How do they pass from nested set relations to a correct Bayesian answer? B&S offer no concrete or testable proposal, and thus no reason to exclude an associative solution. Why can't the transparency of the nested set relations allow other associative processes to kick in? It is question-begging to assume that the associative system is a priori unable to solve the task.
Indeed, there are at least two arguments that support this alternative possibility. First, our sensitivity to nested set relations might itself rest on System 1 (associative) processes. When we look at the Euler circles, we simply “see” that one set is included in the other (perhaps this is why they are so useful: they recruit another System 1 process). Second, it is not hard to conceive of an associative system that gives correct answers to the Medical Diagnosis problem. Such a system just needs to learn that the correct diagnosis covaries with the base rate as well as the test results. This could be acquired by a simple network model trained on numerous cases with varying base rates and test results. And a system (or person) that learned in this way could be described as implementing the correct Bayesian solution.
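Here is a minimal sketch of such a learner (our illustration, not a model B&S discuss; all parameter values are assumptions chosen for convenience): a delta-rule network in the spirit of Gluck & Bower (1988) that, trained case by case, comes to match the Bayesian posterior for the Medical Diagnosis problem.

```python
# A minimal sketch, not anyone's proposed model: a delta-rule associative
# learner that absorbs both the base rate and the test-result covariation
# from raw cases. All parameter values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

base_rate = 0.001    # assumed P(disease)
sensitivity = 1.0    # assumed P(positive | disease)
false_pos = 0.05     # assumed P(positive | healthy)

# One association strength per test outcome (index 0 = negative, 1 = positive),
# updated case by case with the delta rule.
V = np.zeros(2)
lr = 0.005

for _ in range(500_000):
    diseased = rng.random() < base_rate
    positive = rng.random() < (sensitivity if diseased else false_pos)
    x = int(positive)
    V[x] += lr * (float(diseased) - V[x])  # delta-rule update toward outcome

# The learned strength for a positive test hovers near the Bayesian posterior.
bayes = (base_rate * sensitivity
         / (base_rate * sensitivity + (1 - base_rate) * false_pos))
print(f"learned  P(disease | positive) ~ {V[1]:.3f}")
print(f"Bayesian P(disease | positive) = {bayes:.3f}")  # about 0.020
```

With a fixed learning rate the learned value merely hovers around the empirical frequency, but that is precisely the point: trial-by-trial association, with no explicit rule, suffices to take the base rate into account.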
The dual-process framework in general makes a strong distinction between normative and non-normative behaviour. In so doing, it embraces everything and explains nothing. One simply cannot align the normative/non-normative and rule-based/associative distinctions. True, rule-based processes might often behave normatively (in accordance with a norm such as Bayes' theorem) and associative systems non-normatively (as in the example from Gluck & Bower 1988); but, as argued above, it is also possible for rule-based processes to behave irrationally (think of someone explicitly using an incorrect rule), and for associative systems to behave normatively (backpropagation networks are, after all, optimal pattern classifiers).
Moreover, we know that without additional constraints, each type of process can be enormously powerful. Imagine a situation in which patients with symptoms A and B have disease 1, while those with symptoms A and C have disease 2, with the former being more numerous than the latter (i.e., the base rate of disease 1 is greater). Now consider what inference to make for a new patient with only symptom A and another with symptoms B and C. Both cases are ambiguous, but if choice takes account of base-rate information, then disease 1 will be diagnosed in both cases. In fact, people reliably go counter to the base rate for the BC conjunction (hence the “inverse base-rate effect”), choosing disease 2, whereas they choose disease 1 for symptom A (Medin & Edelson 1988; Johansen et al., in press). Thus, in one and the same situation, we see both usage and counter-usage of base-rate information. But strikingly, these simultaneous patterns of behaviour have been explained both in rule-based systems (Juslin et al. 2001) and in associative ones (Kruschke 2001), emphasizing the inappropriateness of linking types of behaviour (normative, non-normative) to different processing “systems” (rule-based or associative).
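To spell out the Bayesian reading (the 3:1 training ratio is our assumption for illustration): symptom A occurs with both diseases, so the likelihoods cancel and the base rate decides,

\[
P(d_1 \mid A) = \frac{P(A \mid d_1)\,P(d_1)}{P(A \mid d_1)\,P(d_1) + P(A \mid d_2)\,P(d_2)} = \frac{1 \times 0.75}{1 \times 0.75 + 1 \times 0.25} = 0.75.
\]

For the BC conjunction, B's evidence for disease 1 and C's evidence for disease 2 are symmetric by construction, so the prior should again tip the verdict toward disease 1; choosing disease 2 is therefore genuine counter-usage of the base rate.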
The crux of B&S's argument, that a dual-process framework explains people's performance on probability problems, is unconvincing both theoretically and empirically. This is not to dismiss their critique of the frequentist program, but to highlight the need for finer-grained analyses. A crude dichotomy between the associative system and the rule-based system does not capture the subtleties of human inference.