The Bayesian framework asserts that problems of perception, action, and cognition can be understood as (approximations to) ideal rational inference. Bayes’ rule is a direct consequence of the definition of conditional probability, and is reasonably captured as the simple “vote counting” procedure outlined in the target article by Jones & Love (J&L). This is clearly not where the interesting science lies. The real scientific problems for a Bayesian analysis arise in defining the appropriate hypothesis space (the “candidates” for whom votes will be cast), and in finding a principled means of assigning the priors and likelihoods that, when multiplied, determine the distribution of votes, and the cost functions that determine the ultimate winner(s).
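To make the “vote counting” metaphor concrete, consider a minimal sketch (in Python, with entirely illustrative numbers) of the machinery at issue: priors and likelihoods are multiplied to yield posterior “votes,” and a cost function then selects the winning hypothesis. Note that everything in it – the hypothesis space, the priors, the likelihoods, and the loss – must be supplied from outside; Bayes’ rule itself contributes only the arithmetic.

```python
import numpy as np

# Bayesian inference as "vote counting": each candidate hypothesis
# receives votes proportional to prior * likelihood; a cost (loss)
# function then determines the winner. All numbers are illustrative.
hypotheses = ["H1", "H2", "H3"]
priors = np.array([0.5, 0.3, 0.2])       # must be guessed in advance
likelihoods = np.array([0.2, 0.6, 0.4])  # P(data | hypothesis), also guessed

posterior = priors * likelihoods
posterior /= posterior.sum()             # normalize the "votes"

# loss[i, j]: cost of choosing hypothesis i when j is true
# (0-1 loss: any error costs 1, so minimizing expected loss
# simply picks the hypothesis with the most votes)
loss = 1.0 - np.eye(len(hypotheses))
expected_loss = loss @ posterior
winner = hypotheses[int(np.argmin(expected_loss))]

print(posterior)  # [0.278 0.5 0.222]
print(winner)     # H2
```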
Bayesian models of cognition begin by asserting that brains are devices that compute, and that it is possible to dissociate what they compute from how they compute. David Marr's (1982) now infamous dissociation of the computational, algorithmic, and implementation “levels of analysis” is usually invoked to justify this belief, and inspires attempts to “reverse engineer” the mind (Tenenbaum et al. 2011). It is no coincidence that Marr's levels resemble the stages of writing a computer program, stages that are granted some (unspecified) form of ontological status: A problem is defined, code is written to solve it, and a device is employed to run the code. But unlike the computational devices fashioned by man, the brain, like other bodily organs, emerged as the consequence of natural processes of self-organization; the complexity of its structure and function was not prescribed in some top-down manner as a solution to pre-specified computational problems. The only “force” available to construct something ideal is natural selection, which can only select the best option from whatever is available, even if that is nothing more than a collection of hacks. As for the “computational level” theory, it is far from evident that brains can be accurately characterized as performing computations, any more than one can claim that planets compute their orbits or that rocks rolling down hills compute their trajectories. Our formal models are the language we use to try to capture the causal entailments of the natural world with the inferential entailments embodied in the formal language of mathematics (Rosen 1991). The claim that a computational level of analysis exists is merely an assertion, grounded in the latest technological metaphor. If this assertion is false, then models that rely on its veracity (like Bayesian models of cognition) are also false.
But even if we accept Marr's levels of analysis, we have gained very little theoretical leverage on how to proceed. We must now guess what computational problems brains solve. There is no principled Bayesian method for making these guesses, and this is not where the guessing game ends. Once a computational problem is identified, we must guess a hypothesis space, priors, likelihoods, and cost functions that, when combined, supply the problem's solution. Most of this guesswork is shaped by the very data that cognitive models are attempting to explain: The “richly structured representations” of cognition often seem like little more than a re-description of the structure in the data, recast as post hoc priors and likelihoods that now pose as theory.
Similar problems arise in Bayesian models of perception, although here some of the guesswork can be constrained by generative models of the input and the statistics of natural environments. The Bayesian approach has been recast as “natural systems analysis” (Geisler & Ringach 2009), which asserts that perceptual systems should be analyzed by specifying the natural tasks an animal performs and the information used to perform them. The specification of “natural tasks” plays essentially the same role as Marr's computational-level analysis. Once a task is defined, an ideal observer is constructed: a hypothetical device that performs the task optimally given the “available information.” It is difficult to argue with the logic of this approach, as it is equivalent to stating that we should study the perceptual abilities that were responsible for our evolutionary survival. But how do we distinguish the natural tasks that were products of natural selection from those that merely came along for the ride? This problem is not unique to the analysis of psychological systems; it is a general problem of evolutionary biology (i.e., distinguishing products of adaptive selection from “spandrels” – by-products of the selective adaptation of some other trait).
It is unclear how natural systems analysis provides any theoretical leverage on these deep problems (i.e., how the modifier “natural” constrains “systems analysis”). We must first guess what counts as a “natural task” to determine the appropriate objects of study. We must then guess what information is available (and used) to perform that task ideally. The ideal observers so articulated are only “ideal” to the extent that we have correctly identified both the available information and a task that the perceptual system actually performs. We then construct an experiment to compare biological performance with ideal performance. And although natural systems analysis begins by considering properties of natural scenes, the majority of the experimental paradigms used to assess natural tasks are largely indistinguishable from those in the larger body of perceptual research. In order to achieve adequate experimental control, most of the complexity of natural scenes has been abstracted away, and we are left with displays and methods that could have been (and typically were) invented without any explicit reference to, or consideration of, Bayes' theorem.
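For concreteness, here is a minimal sketch of what such a comparison typically amounts to, for an equal-variance Gaussian yes/no discrimination task (all parameter values are hypothetical). The ideal observer's sensitivity falls directly out of the assumed generative model, and human performance is expressed as a fraction (the “efficiency”) of that ideal – which makes plain that the result is only as meaningful as the guesses about the task and the available information that define the ideal.

```python
import numpy as np
from scipy.stats import norm

# Ideal-observer comparison for a yes/no discrimination task with
# equal-variance Gaussian signal and noise. All values hypothetical.
signal_mean, noise_mean, sigma = 1.0, 0.0, 1.0

# The ideal observer's sensitivity follows directly from the assumed
# generative model of the stimulus:
d_prime_ideal = (signal_mean - noise_mean) / sigma

# Human sensitivity is estimated from hit and false-alarm rates
# measured in the experiment (hypothetical values here):
hit_rate, false_alarm_rate = 0.69, 0.31
d_prime_human = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Efficiency: the fraction of the "available information" the
# human observer actually uses, relative to the ideal.
efficiency = (d_prime_human / d_prime_ideal) ** 2
print(d_prime_human, efficiency)  # ~0.99, ~0.98
```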
In the end, we are left trying to understand what animals do and how they do it. The hard problems remain inaccessible to the tools of Bayesian analysis, which merely provide a means to select an answer once the hard problem of specifying the list of possible answers has been solved (or at least prescribed). Bayesian analysis assures us that there is some way to conceive of our perceptual, cognitive, and motor abilities as “rational” or “ideal.” Like J&L, I fail to experience any insight in this reassurance. And I am left wondering how to reconcile such views with the seemingly infinite amount of irrationality I encounter in my daily life.