The target article claims that cognitive science “should abandon any emphasis on optimality or suboptimality” (Rahnev & Denison [R&D], sect. 1, para. 4). In contrast, we argue that a radically increased emphasis on (bounded) optimality is crucial to the success of cognitive science. This commentary draws on a substantial literature on bounded optimality, an idea borrowed from artificial intelligence (Russell and Subramanian, 1995). It argues that comparing the behavior of bounded optimal (also known as computationally rational) models to human behavior is a better way to advance the science of the mind than the authors’ “observer models.” Observer models are a form of descriptive model. In contrast, bounded optimal models can be predictive and explanatory (Howes et al., 2009; 2016; Lewis et al., 2014).
Lewis et al. (2014) proposed computational rationality as an alternative to the standard use of optimality (rational analysis) in cognitive science. Computational rationality is a framework for testing mechanistic theories of the mind. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted both to the structure of the external environment and to the structure of the mind and brain itself. In this framework, theories of vision, cognition, and action are specified as “optimal program problems,” defined by an adaptation environment, a bounded machine, and a utility function. The optimal program problem is then solved by optimization (one of the utility-maximizing programs is identified and selected), and the resulting behavior is compared to human behavior. Success is not taken to show that people are optimal or suboptimal; rather, it provides evidence in favor of the theory of the environment, bounded machine, and utility function.
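To make the shape of an optimal program problem concrete, the following sketch works through a toy instance in Python. The environment (a biased signal distribution), bounded machine (noisy perception), program space (decision criteria), and utility function (expected accuracy) are hypothetical illustrations of ours, not the formalism of Lewis et al. (2014).

```python
import random

random.seed(0)

# Adaptation environment: trials are signal-present with probability 0.7
# (a hypothetical choice for illustration).
P_SIGNAL = 0.7

def sample_trial():
    return random.random() < P_SIGNAL  # True = signal present

# Bounded machine: perception is corrupted by Gaussian noise, a theoretical
# commitment fixed in advance, not fitted to test data.
NOISE_SD = 1.0

def percept(signal_present):
    mean = 1.0 if signal_present else 0.0
    return random.gauss(mean, NOISE_SD)

# Program space: criterion-based decision rules the machine could run.
CANDIDATE_PROGRAMS = [c / 10 for c in range(-20, 30)]

def utility(program, n_trials=20_000):
    """Utility of a program: expected accuracy in the environment."""
    correct = 0
    for _ in range(n_trials):
        s = sample_trial()
        correct += (percept(s) > program) == s
    return correct / n_trials

# Solve the optimal program problem by optimization (grid search here),
# independently of any observed human data.
optimal_program = max(CANDIDATE_PROGRAMS, key=utility)
print(f"selected program (criterion): {optimal_program:.1f}")

# The behavior generated by this program is the model's prediction,
# which is then compared against human behavior on the same task.
```

The point of the sketch is that the program is an output of the optimization, not a free parameter: once the environment, machine, and utility are specified, the scientist has no further freedom in choosing the behavior the model predicts.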
One example of how computational rationality can be used to test theories was provided by Howes et al. (2009). Consider an elementary dual-task scenario in which a manual response must be given to a visual pattern and a verbal response to the pitch of a tone. This task, known as a psychological refractory period (PRP) task, has been used extensively in an effort to understand whether cognition is strictly serial or whether it permits parallel processing (Meyer and Kieras, 1997). Although many theories had been proposed prior to Howes et al.’s work, they were sufficiently flexible that both serial and parallel models could be fitted to a large range of PRP data. A key source of the flexibility was the cognitive program (the strategy) by which elementary cognitive, perceptual, and motor processes were organized. Before Howes et al. (2009), different (but plausible) programs were used to fit models to a wide range of data, irrespective of whether cognition was assumed to be serial or parallel. Similarly, in perceptual decision-making tasks, the decision rule (criterion or threshold) is a simple example of a strategy or program. Freed from the requirement that the decision rule be optimal for the defined problem, almost any data might be fitted.
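The flexibility problem can be illustrated in standard equal-variance signal detection terms: a free criterion yields a wide range of hit and false-alarm patterns that data can be matched to after the fact, whereas the accuracy-maximizing criterion is pinned down in advance by the defined problem. The sensitivity and base-rate values below are hypothetical; only the optimal-criterion formula is textbook signal detection theory.

```python
from math import log
from statistics import NormalDist

d_prime = 1.5    # assumed sensitivity of the bounded machine
p_signal = 0.7   # assumed base rate in the adaptation environment

def rates(c):
    """Hit and false-alarm rates produced by criterion c."""
    hit = 1 - NormalDist(mu=d_prime).cdf(c)
    fa = 1 - NormalDist(mu=0.0).cdf(c)
    return round(hit, 2), round(fa, 2)

# A free criterion can generate many different (hit, FA) patterns, so a
# model with a free decision rule can be fitted to almost any such data:
print("free criteria:", [rates(c) for c in (-0.5, 0.75, 2.0)])

# The accuracy-maximizing criterion, by contrast, is fixed in advance by
# the defined problem (sensitivity and base rate), leaving nothing to fit:
c_opt = d_prime / 2 + log((1 - p_signal) / p_signal) / d_prime
print("optimal criterion:", round(c_opt, 2), "->", rates(c_opt))
```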
For the PRP task, Howes et al. used separate computational theories of the serial and parallel bounded machine and derived the optimal program for each. The optimal program was used to predict behavior. Optimality was not under test; rather, it was used as a principled method of selecting cognitive programs, independently of the observed data. What was under test was whether serial and/or parallel cognition could predict the observed behavior. On the basis of detailed quantitative analysis, they concluded that the serial theory offered the better explanation of the data. Similarly, Myers et al. (2013) tested the implications of noise in peripheral vision for human visual search. They found that a particular model of noise in peripheral vision predicts well-known visual search effects.
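A toy scheduling model, not Howes et al.’s actual theory, can illustrate how distinct predictions fall out of the serial and parallel bounded machines once stages are scheduled as early as their constraints allow. All stage durations are assumed purely for illustration.

```python
# Hypothetical per-task stage durations (ms): perception, central, motor.
PERC, CENTRAL, MOTOR = 50, 100, 50

def rt2(soa, serial_central):
    """Predicted task-2 response time at a given stimulus onset asynchrony,
    scheduling each stage as early as its constraints permit."""
    t1_central_end = PERC + CENTRAL        # when task 1 frees the central stage
    t2_central_start = soa + PERC          # earliest task-2 central onset
    if serial_central:
        # Serial machine: task-2 central processing must wait for task 1's.
        t2_central_start = max(t2_central_start, t1_central_end)
    t2_end = t2_central_start + CENTRAL + MOTOR
    return t2_end - soa                    # RT from task-2 stimulus onset

for soa in (0, 50, 100, 200, 400):
    print(f"SOA {soa:>3}: serial RT2 = {rt2(soa, True):>3}, "
          f"parallel RT2 = {rt2(soa, False):>3}")

# The serial machine predicts task-2 slowing at short SOAs (the PRP effect);
# the parallel machine predicts flat RT2. Each prediction follows from the
# architecture, not from a strategy fitted to the data.
```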
As illustrated by the previous examples and supported by the analysis of Lewis et al. (2014), computational rationality has the following similarities to, and differences from, the authors’ observer models:
Neither computationally rational models nor observer models set out to test whether people are optimal or suboptimal. In both cases, the aim is to test the collective set of theoretical assumptions in the model.
Unlike observer models, the computational rationality framework assumes that a program (a strategy or decision rule) is determined by the scientist using an optimization algorithm. In so doing, it allows a quantitative and causal relationship to be established between theoretical assumptions and behavior. In contrast, with observer models, the analyst is permitted to pick any “plausible” decision rule (step 2 of Box 1 of R&D). As a consequence, despite their desire to reduce the perceived flexibility of the optimality approach (the “just-so stories” of Bowers and Davis, 2012a), R&D permit potentially extensive flexibility through the informal notion of plausibility.
Because programs are determined through optimization, computational rationality supports prediction, whereas observer models are descriptive. For example, in Howes et al. (2009), model parameters (e.g., noise level) were calibrated to single-task scenarios; optimal strategies were determined for dual-task scenarios; and test variables, including dual-task duration, were predicted by executing the optimized model, as sketched below. In contrast, observer models are descriptive by virtue of admitting plausible decision rules. The potential arbitrariness of plausible decision rules dooms step 4 (Box 1), “specify how the conclusions depend on the assumptions,” to be an informal process of constructing just-so stories. In other words, observer models cannot be said to make predictions if the analyst must intervene to determine what is and what is not plausible.
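The calibrate-then-predict workflow can be sketched as follows. The data point, the one-parameter noise model, and the dual-task prediction rule are all made up for illustration, so only the workflow itself, calibrate on one condition, then predict another with no free parameters, reflects the approach described above.

```python
from statistics import NormalDist

observed_single_task_accuracy = 0.84   # hypothetical calibration datum

# Step 1: calibrate the model's single free parameter (noise) on
# single-task data. For an unbiased observer deciding between stimulus
# means 0 and 1, accuracy = Phi(0.5 / noise_sd).
def single_task_accuracy(noise_sd):
    return NormalDist().cdf(0.5 / noise_sd)

noise_sd = min((s / 100 for s in range(10, 300)),
               key=lambda s: abs(single_task_accuracy(s)
                                 - observed_single_task_accuracy))

# Step 2: with noise_sd now fixed, a new (dual-task) condition is
# predicted with no free parameters; here, assuming two independent
# decisions, each subject to the calibrated noise:
predicted_dual = single_task_accuracy(noise_sd) ** 2
print(f"calibrated noise_sd = {noise_sd:.2f}; "
      f"predicted dual-task accuracy = {predicted_dual:.2f}")
```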
In summary, although we agree with R&D that the focus of the behavioral sciences should be on testing theories of how the mind processes information, we believe that optimization, used by a scientist as a tool to determine the adaptive consequences of theoretical assumptions, offers a better way forward for psychological science than the proposed (descriptive) observer models.