
Why the empirical literature fails to support or disconfirm modular or dual-process models

Published online by Cambridge University Press: 29 October 2007

David Trafimow
Affiliation:
Department of Psychology, MSC 3452, New Mexico State University, Las Cruces, NM 88003-8001. dtrafimo@nmsu.edu http://www.psych.nmsu.edu/faculty/trafimow.html

Abstract

Barbey & Sloman (B&S) present five models that account for performance in Bayesian inference tasks, and argue that the data disconfirm four of them but support one model. Contrary to B&S, I argue that the cited data fail to provide strong confirmation or disconfirmation for any of the models.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2007

There is insufficient space here to comment on all of the models for explaining performance on Bayesian inference tasks that Barbey & Sloman (B&S) ostensibly disconfirm, so I will focus on what they consider to be the model that makes the strongest claims – the idea that the mind comprises specialized modules (see Barrett & Kurzban [2006] for a recent review). B&S's strategy is to list the prerequisites of the modular model and cite data that contradict them. The listed prerequisites are cognitive impenetrability, informational encapsulation, unique sensitivity to natural frequency formats, transparency of nested set relations, and appeal to evolution, the first three of which are contradicted by some of the cited findings. However, these findings are not a problem for the modular model because researchers who espouse the modular view have long since moved away from these three prerequisites and instead focus on “how modules process information” (Barrett & Kurzban 2006, p. 630). Although modules are considered to be domain specific, a domain is not defined as a content domain but rather as any way of individuating inputs. It is entirely possible that, as a by-product, a module will process information for which it was not originally designed, provided that this other information conforms to the properties that determine which inputs are processed.

Let us now consider transparency of nested set relations and appeal to evolution. The former is featured by the model favored by B&S, and so they clearly cannot mean to argue against it, which leaves only the latter as a possible basis for disconfirmation. But few in the scientific world would argue that evolution did not happen and so this is unlikely to be disconfirmed; certainly B&S have not presented any evidence to disconfirm evolution. Consequently, the modular model is not forced to make or not make any of the predictions listed in Table 2 of the target article, and I am compelled to conclude that B&S have failed to disconfirm the modular model (or any of the weaker ones).

The foregoing comments should not be taken as arguments in favor of mental modules. For one thing, the watering down of the concept of modules, which renders it less susceptible to disconfirmation, may also have watered down the informational content and general utility of the model. In addition, the auxiliary assumptions necessary to make the modular model useful are extremely complicated, and these complications may be under-appreciated. As an example, consider an arm as a module. Arms increase the ability to use tools, crawl, fight, balance, climb, and perform many other activities. In addition, the arm might be said to comprise features (fingers, elbows, etc.). How would one tease apart the functions for which arms evolved from those that are mere by-products, especially after taking into account that the features themselves may have evolved for very different reasons? Surely a mind is much more complicated than an arm, and so the potential complications are much more extensive. Perhaps these issues will be solved eventually, but my bet is that it will not happen soon. Until then, the modular model seems unlikely to provide a sound basis for Bayesian theorizing, or for theorizing in any other area of psychology.

The data cited by B&S also fail to provide much support for the dual-process model they advocate. It is doubtless true that presenting Bayesian problems such that the set structure is more transparent increases performance. But it is not clear why this necessitates a distinction between associative and rule-based processes, a distinction that has not been strongly supported in the literature. In fact, Kruglanski and Dechesne (2006) have provided a compelling argument that these two types of processes are not qualitatively distinguishable from each other; both processes can involve attached truth values, pattern activation, and conditioning. Worse yet, even if the distinction were valid in some cases (and I don't think it is), there is very little evidence that it is valid in the case at hand. B&S seem to argue that when the set structure is not transparent, people use associative processing, whereas they use a rule when the set structure is more transparent. It could be, however, that when the set structure is not transparent, people use rules but not the best ones. Or, when the set structure is transparent, this transparency may prime more appropriate associations. These alternative possibilities weaken the evidentiary support for the distinction.
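To see why the facilitation itself is uninformative about process, consider a worked example of the standard diagnosis sort; the particular numbers below are illustrative and are not taken from the target article. In the probability format, participants learn that the base rate of a disease is 1%, that the hit rate of the test is 80%, and that the false-positive rate is 9.6%, and they must judge the probability of disease given a positive result. Bayes's theorem gives

$$P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{.80 \times .01}{.80 \times .01 + .096 \times .99} \approx .08.$$

In the frequency format, the same problem is stated over nested sets of cases: of 1,000 people, 10 have the disease, of whom 8 test positive; about 95 of the 990 people without the disease also test positive; hence roughly 8 of the 103 positives, about 8%, have the disease. The arithmetic is identical in the two versions; only the transparency of the set relations differs, and that difference is equally compatible with better rule selection and with better-primed associations.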

B&S provide a section titled “Empirical summary and conclusions” (sect. 2.10) that illustrates what I consider to be the larger problem with the whole area. Consider the empirical conclusions. First, the helpfulness of frequencies varies across experiments and is correlated with intelligence and motivation. Who would predict that there will be no variance and that intelligence and motivation will be irrelevant to problem solving? Second, partitioning the data so as to make it more apparent what to do facilitates problem solving – another obvious conclusion. Third, frequency judgments are guided by inferential strategies. Again, who would predict that people's memories of large numbers of events will be so perfect as to render inferential processes unnecessary? (To anticipate the authors' Response, modular theorists cannot be forced to predict this.) Fourth, people do not optimally weight and combine event frequencies, and they use information that they should ignore. Given the trend over the last quarter century or more, in both social and cognitive psychological research, of documenting the many ways people mess up, this is hardly surprising. Finally, nested set representations are helpful, which is not surprising because they make the nature of the problem more transparent. Trafimow (2003) provided a Bayesian demonstration of the scientific importance of making predictions that are not obvious. Hopefully, future researchers in the area will take this demonstration seriously.

ACKNOWLEDGMENT

I thank Tim Ketelaar for his helpful comments on a previous version of this commentary.

References

Barrett, H. C. & Kurzban, R. (2006) Modularity in cognition: Framing the debate. Psychological Review 113:628–47.
Kruglanski, A. W. & Dechesne, M. (2006) Are associative and propositional processes qualitatively distinct? Comment on Gawronski and Bodenhausen (2006). Psychological Bulletin 132:736–39.
Trafimow, D. (2003) Hypothesis testing and theory evaluation at the boundaries: Surprising insights from Bayes's theorem. Psychological Review 110:526–35.