The Jones & Love (J&L) target article begins what is hopefully an extended discussion of the virtues of rational models in psychology. Such discussion is sorely needed, because the recent proliferation of such models has not been accompanied by the meta-theoretical understanding needed to appreciate their scientific contribution. When rational models are presented at conferences, the speaker always receives polite applause, but casual conversation afterwards often reveals that many listeners have little idea of how scientific understanding of the topic has been advanced. Even practitioners (myself included) are often unable to fluently answer the question: “How has the field's understanding of the psychology of X been advanced?” This state of affairs needs to change.
J&L's article may help by clarifying how rational models can vary in their purpose and source of justification. At one extreme, there are models that fall into the “Bayesian Fundamentalism” category and yet are not susceptible to J&L's criticisms. One need only look at a model as old and venerable as signal detection theory (SDT) for an example. SDT specifies optimal behavior, given certain assumptions about the representation of perceptual input, priors, and a cost function. Importantly, the priors (the probability of a signal) and costs (e.g., of a false alarm) can be tied to features of the SDT experiment itself (for a review, see Maloney & Zhang, 2010). There are many examples of such models in the domains of perception and action.
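SDT's optimality claim can be made concrete. The sketch below (a standard textbook construction, not taken from the target article; the numbers are illustrative) computes the ideal observer's decision criterion for the equal-variance Gaussian model, showing how it is fixed entirely by experimentally specifiable quantities: the signal prior and the payoff structure.

```python
import math

def optimal_criterion(d_prime, p_signal, v_hit=1.0, c_miss=1.0, v_cr=1.0, c_fa=1.0):
    """Optimal criterion for equal-variance Gaussian SDT.

    The ideal observer responds "signal" when the likelihood ratio exceeds
        beta = [P(noise)/P(signal)] * (v_cr + c_fa) / (v_hit + c_miss),
    which, for noise ~ N(0, 1) and signal ~ N(d', 1), places the criterion
    on the observation axis at c = ln(beta)/d' + d'/2.
    """
    beta = ((1 - p_signal) / p_signal) * (v_cr + c_fa) / (v_hit + c_miss)
    return math.log(beta) / d_prime + d_prime / 2

# With equal priors and symmetric payoffs, beta = 1 and the criterion
# sits halfway between the noise and signal distribution means:
print(optimal_criterion(d_prime=2.0, p_signal=0.5))        # 1.0

# Raising the signal prior shifts the criterion down (more liberal):
print(optimal_criterion(d_prime=2.0, p_signal=0.8) < 1.0)  # True
```

The point of the exercise is that nothing in the model is left to the theorist's discretion: every parameter of the optimal policy is grounded in the experimental task itself.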
But the apparent target of J&L's article is models in which priors are assumed rather than tied to features of an experimental (or any other) context, and for which the costs of incorrect decisions are unspecified. For example, numerous models specify how one should learn and reason with categories; that is, they assume some sort of prior distribution over systems of mutually exclusive categories (e.g., Kemp & Tenenbaum, 2009; Sanborn et al., 2010a). But although this assumption may seem uncontroversial, it is not. Notoriously, even biological species (the paradigmatic example of categories) fail to conform to these assumptions: there are cases in which the males of one “species” can successfully breed with the females of another, but not vice versa (and cases of successful breeding between As and Bs, and Bs and Cs, but not As and Cs) (Dupré, 1981). In what sense should a model that accounts for human categorical reasoning be considered rational when its prior embodies assumptions that are demonstrably false? Of course, the costs associated with such ungrounded priors may be small, but models that fail to consider costs explicitly are common. Many rational models in higher-order cognition have this character.
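The species example can be sharpened: if interbreeding is taken as the criterion for belonging to the same category, the resulting relation need not be transitive, and only transitive (equivalence) relations can be reproduced by a partition into mutually exclusive categories. A toy check (the populations A, B, C are hypothetical stand-ins, not data from Dupré):

```python
# Hypothetical interbreeding relation among three populations:
# A breeds with B, and B breeds with C, but A does not breed with C.
breeds_with = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}

def transitive(relation, items):
    """Check transitivity: x~y and y~z should imply x~z."""
    return all((x, z) in relation
               for x in items for y in items for z in items
               if (x, y) in relation and (y, z) in relation)

items = ["A", "B", "C"]
# (A,B) and (B,C) hold but (A,C) does not, so the relation is intransitive:
print(transitive(breeds_with, items))  # False
```

Since any system of mutually exclusive categories induces a transitive co-membership relation, no prior defined over such systems can represent this pattern exactly; it can only approximate it, with unexamined costs.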
My own modest proposal is that we should drop the label “rational” for these sorts of models and call them what they are, namely, probabilistic models. I suggest that freeing probabilistic models from the burden of rationality clarifies both their virtues and their obligations. Regarding obligations, J&L correctly observe that, if not grounded in the environment, justification for a model's priors must be found elsewhere. But the history of science provides numerous examples of testing whether postulated hidden variables (e.g., the priors in a probabilistic model) exist in the world or only in the head of the theorist, namely, through converging operations (Salmon, 1984). For example, one's confidence in the psychological reality of a particular prior is increased when evidence for it is found across multiple, dissimilar tasks (e.g., Maloney & Mamassian, 2009; Rehder & Kim, 2010). It is also increased when the probabilistic model not only provides post hoc accounts of existing data but is also used to derive and test new predictions. For instance, the case for the psychological reality of SDT was strengthened when perceivers responded in predicted ways to orthogonal manipulations of stimulus intensity and payoff structure. This is how one can treat the assumptions of a probabilistic model as serious psychological claims and thus be what J&L describe as an “enlightened” Bayesian.
Taking the rationality out of probabilistic models also shifts attention to their other properties, and so clarifies the tasks for which such models are likely to be successful. Because Bayes’ law is the only rule of inference, one's “explanation” of a psychological phenomenon, divided between process and knowledge in classic information-processing models, rests solely on knowledge (priors) instead. Said differently, one might view Bayes’ law as supporting a programming language in which to express models (a probabilistic analog of how theorists once exploited the other normative model of reasoning – formal logic – by programming in PROLOG [programming in logic]; Genesereth & Nilsson, 1987). These models will succeed to the extent that task performance is determined primarily by human reasoners’ prior experience and knowledge. Probabilistic models also help identify variables that are likely to be critical to behavior (i.e., they provide an old-fashioned task analysis; Card et al., 1983); in turn, this analysis will suggest critical ways in which people may differ from one another. Finally, probabilistic models are directing researchers’ attention towards entirely new sorts of behaviors that were previously considered too complex to study systematically, by making those behaviors susceptible to formal analysis.
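The claim that all the explanatory work is done by knowledge rather than process can be illustrated with a minimal sketch (a generic toy categorization example with made-up numbers, not a model from the literature): the “process” is a single application of Bayes’ rule, shared by every such model, and two models that share it differ only in their priors.

```python
def posterior(prior, likelihood, data):
    """Bayes' rule over a discrete hypothesis space: the single 'process' step."""
    unnorm = {h: prior[h] * likelihood(data, h) for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Toy task: is an observed item a bird or a mammal, given that it flies?
# The likelihoods are shared between the two models; only the priors differ.
def likelihood(data, h):
    return {"bird": 0.8, "mammal": 0.05}[h] if data == "flies" else 1.0

uniform_prior = {"bird": 0.5, "mammal": 0.5}
skewed_prior = {"bird": 0.1, "mammal": 0.9}

# Identical inference machinery, different priors, different predictions:
print(round(posterior(uniform_prior, likelihood, "flies")["bird"], 2))  # 0.94
print(round(posterior(skewed_prior, likelihood, "flies")["bird"], 2))   # 0.64
```

Because the inference rule is fixed, any difference between the two models' predictions is attributable entirely to the priors, which is exactly why the priors must carry the explanatory (and evidential) burden.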
My expectation is that the analysis conducted by J&L will help lead to an appreciation of the heterogeneity among rational/probabilistic models and to clarity regarding the standards to which each should be held. This clarity will not only help conference-goers understand why they are clapping; it will also promote the other sorts of virtuous model-testing practices that J&L advocate. There are examples of Bayesian models being compared with competing models, both Bayesian (Rehder & Burnett, 2005) and non-Bayesian (e.g., Kemp & Tenenbaum, 2009; Rehder, 2009; Rehder & Kim, 2010), but more are needed. Such activities will help the rational movement advance beyond a progressive research program (in Lakatos's terms; see Lakatos, 1970), in which research activities are largely confirmatory, to a more mature phase in which the scientific contribution of such models is transparent.