We appreciated the diverse range of views reflected in the comments and we now face the task of composing the best response we can produce given the constraints imposed by our deadline, the word limit, and our opportunity cost. So, we will try to put our theory into practice. The commentaries raised a wide range of questions, concerns, and suggestions. To address them efficiently, we have grouped them into six sections. In the first section, we apply ideas from resource rational analysis (RRA) to commentators’ examples of human errors and argue that RRA is useful even if people are only roughly resource rational. In the second section, we respond to commentaries that were concerned with the normative status of resource rationality and the philosophical and evolutionary foundations of RRA. In the third section, we synthesize and discuss the commentators’ proposals for augmenting RRA with limits on what can be postulated as a cognitive constraint. In the fourth section, we discuss that RRA can be applied to different types of cognitive architectures. In the fifth section, we synthesize and discuss the commentators’ thoughts on how incorporating cognitive constraints into rational models can broaden the scope of phenomena to which they are applicable. In the sixth section, we discuss how RRA can be extended beyond the cognition of a single individual. We conclude with a summary and future directions.
R1. RRA is useful even if people are only roughly resource rational
Several commentators pointed to behavioral results, thought experiments, and anecdotes about human judgments and decisions that deviate from certain intuitions or models of optimality. An RRA would leverage these findings to refine the model of what people are trying to do or to identify how and why their heuristics fall short of the optimal solution. In the example of the apparently anti-Bayesian size–weight illusion mentioned by Mandelbaum, Won, Gross, & Firestone, a rational analysis might hypothesize that people first estimate the volume and the density of the object(s) from noisy observations and then multiply those estimates to estimate the object's mass. As illustrated in Figure R1, the reasonable assumption that people incorporate some prior knowledge – according to which densities usually lie between those of the three light boxes and that of the heaviest box, and volumes usually lie between those of the single box and the combined volume of the three boxes – provides a parsimonious explanation for the size–weight illusion. (Similar results can be produced if mass is also assumed to be noisily observed, in addition to volume and density.) For Mandelbaum et al.'s second example of a seemingly anti-Bayesian inference, belief polarization, multiple rational analyses have already been published (e.g., Cook & Lewandowsky 2016; Jern et al. 2014).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200310134820219-0794:S0140525X19002012:S0140525X19002012_fig1.png?pub-status=live)
Figure R1. The “anti-Bayesian” perceptual illusion described by Mandelbaum et al. can be produced by a simple Bayesian model. This model infers the density and volume of an object based on noisy observations. The mass is then calculated from the inferred density and volume. If the prior favors larger volumes than the single container (A) and larger densities than the three containers (ABC), then the inferred mass will be higher for the single container. For simplicity, volume is measured in units of containers, density in grams per container.
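The shrinkage logic behind Figure R1 can be sketched in a few lines of code. The sketch below is our own minimal illustration, not the exact model behind the figure: it assumes Gaussian priors and observation noise, and all numerical values (prior means and variances, container volumes and densities) are hypothetical choices. In particular, it assumes density is observed more reliably than volume, so that shrinkage toward the prior distorts the two mass estimates asymmetrically.

```python
# Illustrative sketch (not the authors' exact model): Bayesian shrinkage
# estimates of volume and density are multiplied to estimate mass.
# All numbers below are hypothetical and chosen only for illustration.

def posterior_mean(obs, obs_var, prior_mean, prior_var):
    """Precision-weighted average of a noisy observation and a Gaussian prior."""
    w = (1 / obs_var) / (1 / obs_var + 1 / prior_var)
    return w * obs + (1 - w) * prior_mean

def inferred_mass(volume, density, vol_var, den_var,
                  vol_prior=(2.0, 1.0), den_prior=(200.0, 10000.0)):
    """Estimate mass as (posterior volume) x (posterior density)."""
    v_hat = posterior_mean(volume, vol_var, *vol_prior)
    d_hat = posterior_mean(density, den_var, *den_prior)
    return v_hat * d_hat

# Single container A: volume 1, density 300 g/container (true mass 300 g).
# Three containers ABC: combined volume 3, density 100 g/container (also 300 g).
# Assumption: volume is observed less reliably than density.
mass_A   = inferred_mass(1.0, 300.0, vol_var=1.5, den_var=2500.0)
mass_ABC = inferred_mass(3.0, 100.0, vol_var=1.5, den_var=2500.0)

print(mass_A, mass_ABC)  # equal true masses, yet mass_A > mass_ABC
assert mass_A > mass_ABC
```

Under these assumptions, the single container's volume estimate is pulled up toward the prior while the three containers' combined volume is pulled down, and the density estimates shrink less; the single container is therefore inferred to be heavier even though the true masses are equal.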
Contrary to the misconstrued framing by Davis and Marcus, RRA is not a retreat in an imaginary battle about whether people are rational. Rather, RRA is a methodological advance from modeling the function of cognitive abilities to modeling the underlying cognitive mechanisms. It moves forward the research program that David Marr (1982) initiated to reach an integrated understanding of the brain in which theories of its functions (computational level of analysis) inform and are informed by models of the underlying cognitive mechanisms (algorithmic level of analysis) and their biophysical realization in neural circuits (implementation level). It is critical to understand that, as Rahnev pointed out, RRA's assumption of bounded optimality is not a substantive claim about the mind but a methodological device for efficiently searching the endless space of possible mechanisms. Contrary to what Davis and Marcus claim, rationalizing irrationalities is NOT the goal of RRA. Rahnev correctly noted that (i) even if people were resource rational, RRA would be unable to prove that this is the case and that (ii) even the successes of RRA cannot prove the assumption of resource rationality correct; they only prove it useful. The latter is not an accidental design flaw in our methodology. Rather, it reflects the fact that proving people to be rational has never been the goal. Instead, the goal of rational analysis and RRA has always been to understand what the mind is trying to do, how it does that, and why it does it that way. We agree that people are not perfectly resource rational and may be far from rational in certain situations.
But as Rahnev correctly noted, RRA is useful even if we are trying to characterize non-resource-rational humans, or as George Box put it, “All models are wrong but some are useful.” Furthermore, RRA is useful not only as a methodology for understanding the mind, but also as a guideline for how to improve it, as our preliminary work on cognitive tutors illustrates (Lieder et al. 2019a).
As Rahnev and our target article point out, there are many methodological challenges to testing substantive claims about resource (ir)rationality that nobody has been able to overcome yet. Any conclusions about people's resource (ir)rationality – including those of Davis and Marcus – thus come with serious methodological caveats. Some of these caveats can be addressed in future work, and we agree that measuring the relevant constraints is a valuable step toward enabling more accurate estimates of the extent to which the brain is resource (ir)rational.
Davis and Marcus claim that “a serious version of the bounded rationality view must presume that the tradeoff between effort and decision-making is made optimally” and then dismiss this hypothesis by appealing to common sense. Both moves are problematic. The former is problematic because demanding that people always optimally trade off the quality of their decisions against their cost is an unattainably high ideal that is incompatible with the idea of resource rationality. Achieving this ideal would require optimal meta-decision-making, which is even more computationally intractable than optimal decision-making itself. Resource rationality takes the cost of meta-decision-making into account, and this means that even a resource rational decision maker will sometimes be too impulsive and at other times overthink their decision. The resource rational framework makes it possible to derive precise predictions about the circumstances under which we should expect to see impulsivity and the circumstances in which we should expect to see overthinking. The underlying principle is that the mechanism by which the brain allocates control between different decision systems should make optimal use of the meta-decision system's limited computational resources on average, across all of the situations that the decision maker might encounter in the environment to which they are adapted. This means that it is not logically sound to refute resource rationality by pointing to individual situations in which people occasionally appear to make suboptimal tradeoffs.
The distinction between a cognitive mechanism's expected performance across all situations a person might encounter in their life and its performance in one particular situation (or a handful of particular situations) undermines the logic of the arguments that Davis and Marcus base on occasional errors in particular situations, and thereby their strong conclusions that “bounded optimality casts virtually no light on what is and is not easy” and that RRA “predicts very little of the texture of actual human decision-making.”
We agree with Davis and Marcus that their examples of failures of human memory are not inevitable consequences of neural capacity constraints alone. This is why (resource-)rational analyses of memory emphasize the ecological distribution of problems to which memory is adapted. Our memory mechanisms appear to be optimized for evolutionarily important problems, such as navigation and social interaction, at the expense of being less effective for evolutionarily less important problems, such as memorizing 10-digit numbers – even though there are real-life situations in which being able to memorize a 10-digit number would have high utility. Within the framework of RRA, one could hypothesize that people's memory mechanisms are boundedly optimal for evolutionary environments rather than the environment of the twenty-first century.
Finally, Davis and Marcus's misconstrual of the goal of our target article as arguing that people are rational led them to falsely accuse us of confirmation bias, claiming that we selectively reviewed research that can be construed as evidence for the hypothesis that cognitive biases are rational consequences of bounded cognitive resources. The truth is that our goal was to synthesize recent methodological advances. Within the constraints of our word limit, we have tried to provide a comprehensive – and thus fair and unbiased – survey of previous applications of RRAs regardless of their conclusions about human (ir)rationality. In doing so, we have included several RRAs that identified interesting deviations from resource rationality. A grain of truth in the charge of confirmation bias might be that each methodology is usually preferentially applied in those areas where it is most useful. So, it is possible that there is a sampling bias or a publication bias in the literature, in the sense that researchers publish RRAs primarily about phenomena that are roughly resource rational but do not attempt RRAs of phenomena that seem hopelessly irrational, or let unsuccessful RRAs disappear in their metaphorical file drawers. So far, we have seen no evidence of this, but it seems plausible that this would happen with any new methodology.
R2. On RRA's philosophical and evolutionary foundations
Several commentators raised questions and concerns about the role of normative considerations in RRA as well as their nature, justification, and compatibility with evolutionary theory.
Colombo wondered how exactly the normative status of resource rationality can be justified. As highlighted in Equation 2, the normative status of resource rationality originates from the normative status of expected utility theory. That is, resource rational minds are optimal because they maximize the agent's utility in the long run within the limits of what the agent is capable of. Starting from this principle, we derived the definition of resource rational heuristics for a known environment (Equation 3) and then extended it to account for limited information about the structure of the environment (Equation 4). The environments or distributions over environments assumed in RRA are the ecological and evolutionary environments to which the agent is adapted, and the utility function is meant to encode the goals of the organism. For this reason, we regard resource rationality as a (non-standard) version of ecological rationality. We would welcome future work that evaluates resource rationality against the desiderata for theories of ecological rationality highlighted by Colombo. Colombo characterized Equation 4 as saying that “rational agents ought to act so as to maximize some sort of expected utility, taking into account the costs of computation, time pressures, and limitations in the processing of relevant information available in the environment.” In response, we would like to clarify two things. First, Equation 4 defines rational cognitive mechanisms rather than rational behavior. Second, the agent is not expected to perform a cost-benefit analysis weighing the benefits of better decisions against the computational cost required to arrive at them. It is merely expected to carry out a simple heuristic that may have been discovered by evolution, learnt from past experiences, or copied from other people.
Later on, Colombo rightfully highlights the need to specify under which conditions deviations from using the heuristic that would be most effective in a particular situation can or cannot be attributed to resource-rational heuristics for choosing heuristics. This problem can be solved by extending resource rationality with bounded-optimal meta-decision-making.
Colombo and Kalbach also raised the question of whether and under which conditions violations of resource rationality should be considered errors or cognitive biases. Kalbach took issue with us redefining the concept of “cognitive bias” as a violation of resource rationality while simultaneously endorsing inductive biases as a necessity for good scientific inferences. We would like to clarify that, despite the lexical similarity, inductive biases and cognitive biases are very different concepts from different fields. Inductive biases are neutral in valence – some kind of bias is necessary to support learning – whereas cognitive biases are defined by their deviation from some normative standard. Colombo highlights that there are many different types of rationality, such as epistemic rationality versus practical rationality, and that agents differ widely in their goals and cognitive constraints. He concludes that one therefore cannot diagnose irrationality from an agent's deviations from any single normative standard. We agree that this plurality renders blanket statements about people's (ir)rationality rather meaningless. As a constructive alternative, we would like to propose that (ir)rationality should always be measured relative to the individual's goals, preferences, and cognitive constraints, and the structure of their environment(s). Furthermore, the resulting assessment should be carefully qualified by exactly which type of rationality is being assessed and under which assumptions. The RRA framework can incorporate many of these desiderata through adjustments to its utility function, the bounds and costs on cognitive resources, and the distribution over possible environments. From an ethical perspective, we think it is important that these desiderata be considered if resource rationality is to be used to measure a person's rationality for purposes similar to those for which IQ tests and personality inventories are used.
Kalbach appeared to object to resource rationality as a normative standard because he thought that the resource rational cognitive mechanism is always a simple heuristic. But this is simply not true, because the optimal amount of thinking strongly depends on the person's utility function. That is, when the person's utility function and the nature of the situation make accuracy sufficiently more important than time and a more deliberate strategy performs sufficiently better than its heuristic alternatives, then extensive deliberation would be resource rational. In that case, relying on a simple heuristic would be resource-irrational. Thus, for a person who values advancing the frontiers of human knowledge above everything else, investing thousands of hours into astrophysics can be completely resource rational, even though for a person who does not value this kind of knowledge at all, the resource rational way of thinking might lead to serious misconceptions about the nature of the universe.
Given how diverse and flexible notions of rationality are, we second Colombo's recommendation that researchers who use resource rationality to revisit the debate about human rationality should be very clear and precise about exactly what norms they are testing people against and word their conclusions accordingly. Contrary to what Davis and Marcus might think, we have absolutely no interest in perpetuating pointless debates about rationality based on terminological confusions.
Kalbach expressed serious concerns about the role of normative considerations in the descriptive enterprise of understanding the mind as it is. In his view, RRA is predicated on the naturalistic fallacy because it conflates what is with what ought to be. We would like to clarify that RRA clearly distinguishes between the optimal solutions to the problems solved by cognitive systems (i.e., what they “ought” to do) versus the cognitive/neural mechanisms they employ to realize that function (i.e., what actually “is” happening in the brain). We regard them as qualitatively different kinds of questions with different answers. So we do NOT confuse what is with what ought to be (or vice versa). But we do follow the legacy of David Marr (1982) in making the methodological assumption that to understand what a cognitive system does it is useful to attribute a function to it. This is a purely methodological device rather than a theoretical assumption. We fully subscribe to Darwinian evolution and we agree that there is no physical reality to concepts such as “purpose” and “function,” but we think that these concepts are nevertheless useful for understanding the mind, developing models, and making predictions. We regret that our use of these terms was confusing, and we would like to clarify that the seemingly teleological components of RRA are purely methodological. That is, we use methodological assumptions of bounded optimality as a heuristic for generating hypotheses about the mind/brain and then test them empirically. As the studies we reviewed in our target article illustrate, this approach has been very useful so far.
Szollosi and Newell also challenge the methodology of ascribing functions to the mind's cognitive systems. Their concern seems to be that unlike a cash register, the human mind does not have a fixed function to solve a given problem in a given representation but can invent its own problems and choose its own representations. In our view, the mind's capacity to flexibly adapt its representations is a computational resource that it can employ to realize the functions it has evolved to fulfill. From this perspective, RRA could be used to understand how and why people construct mental representations in the way they do. To the extent that people learn to solve their self-defined problems efficiently, RRA could also be used to model how people solve the problems they invented for themselves as if they were evolutionarily engrained functions.
Contrary to Kalbach and Szollosi and Newell, Theriault, Young & Barrett and Schulz see great value in starting from evolutionarily engrained functions. According to Theriault et al., one of the main functions that the brain evolved to serve is to regulate the distribution and delivery of limited resources throughout the body. We agree that regulating resource usage is an important function of the brain and would be happy to see RRA being applied to understand how the brain realizes this function.
Haas and Klein and Theriault et al. advocate extending RRA from individual cognitive processes to the entire brain. Haas and Klein argue that this is necessary to accurately capture how resource constraints emerge from and are negotiated by the competition between multiple processes, networks, or systems over multiple timescales. We welcome their proposal for holistic RRA and are happy to note that ongoing work by Musslick et al. (2016; 2017) and Segev et al. (2018) has already begun to implement it. Theriault et al. argue that “aspects of cognition” must be recognized as parts of a whole, and modeled in the context of the brain's general function within organisms. We agree that a complete theory of any component of the mind must encompass the entire organism and its environment. But since understanding complex systems can be very challenging, we also think that it is methodologically useful to initially focus on one of the system's modules as if it were an independent sub-system with a function of its own. This may be why, so far, RRA has been primarily applied to sub-systems that have been identified and isolated in previous psychological research.
Although Kalbach appeared to view the explanatory principle of resource rationality to be incompatible with Darwinian evolution, Schulz and Haas and Klein argued that RRA can and should be grounded in the theory of evolution. Schulz argues that the strong correlation between cognitive and neurobiological and metabolic efficiency provides an evolutionary foundation for the role of resource constraints and costs in RRA, and Theriault et al. emphasize that making efficient use of limited resources is essential from an evolutionary perspective. Schulz's evolutionary perspective also addresses Kalbach's misconception that deliberation can never be resource rational. Schulz argues that the evolutionary selection for the ability to adapt to changing environments makes deliberate reasoning resource rational in situations that cannot be handled by evolved simple heuristics. We agree with Schulz's perspective and we are looking forward to future work that will enrich RRA with evolutionary theory and RRAs of how people adapt to changing environments.
Haas and Klein point to additional insights from the study of evolution that can inform RRA: satisficing, path dependencies, and competition between evolving and existing neural systems. We think that the insight that what can evolve easily strongly depends on what has evolved already might be an especially useful addition to RRA; it speaks to the generous inclusion of capacities that appeared early in evolution in the cognitive architecture to which RRA is applied. We agree that re-use and overlap of neural pathways are critical for understanding why the capacity of certain cognitive systems is more constrained than the capacity of others.
R3. Introducing constraints on constraints
Several commentators (Bates, Sims, and Jacobs [Bates et al.]; Dimov; Sanborn, Zhu, Spicer, and Chater [Sanborn et al.]; Ma & Woodford) have correctly pointed out that identifying resource limitations is a critical bottleneck of RRA. Ma and Woodford pointed out that there is currently no principled way to make those assumptions and that, consequently, extant RRAs differ widely in their assumptions about the nature of people's cognitive resources and their constraints. We agree with these commentators that this makes developing a principled methodology for identifying cognitive constraints an important direction for future work on RRA. Identifying cognitive constraints is challenging because any sub-optimality in performance could result from a sub-optimal cognitive strategy, from resource constraints, or from a combination of the two. We agree with Dimov that RRA itself can help us overcome this problem, because the methodological assumption of bounded optimality solves the non-identifiability problem that usually arises when both the process and the cognitive architecture must be inferred at the same time.
Bates et al. proposed requiring that all constraints be formulated in terms of the information-theoretic notion of channel capacity. We agree that channel capacity could provide a unifying language for modeling representational constraints. However, not all constraints are about representation; some concern how much computation can be performed on any given representation. Dimov proposed to ground assumptions about cognitive resources and their constraints in cognitive architectures such as ACT-R. We agree that this is a useful approach for leveraging the empirical findings that have already been built into these cognitive architectures, but there may be other computational resources and constraints that cognitive architectures do not capture yet. For instance, Sanborn et al. propose that one of those computational resources is sampling. Furthermore, Ma and Woodford point to RRAs where the relevant computational constraints are specified in terms of biophysical limits. We believe that different phenomena are best explained at different levels of analysis and/or abstraction. Furthermore, different cognitive systems (e.g., vision vs. relational reasoning) differ in their computational architectures and computational constraints. Thus, unlike Bates et al., we believe that there truly are different types of cognitive constraints. For instance, time constraints are conceptually different from limited working memory capacity. We therefore think that it makes sense that different RRAs emphasize different types of cognitive constraints.
Despite this, we do see great value in developing methodological principles for determining what the resource limitations are in a given domain at a given level of abstraction and in building bridges between the assumptions made at different levels of analysis. We hope that our target article and the range of perspectives offered in the commentaries will help start an interdisciplinary conversation that will lead toward a unification of methodologies and a more principled approach to modeling cognitive constraints. Although we welcome Bates et al.'s idea of extending RRA with stronger constraints on what can be postulated as a constraint, it is not true that RRA lacks constraints on constraints and therefore runs the risk of overfitting and “just-so” theorizing. To the contrary, RRA already guards against overfitting and just-so stories by demanding that assumed constraints be empirically grounded or empirically tested.
Ma and Woodford raised the question of whether the sampling models from the one-and-done analysis, the resource rational anchoring-and-adjustment model, and the utility-weighted sampling (UWS) model really optimize a linear combination of performance and resource cost. We can confirm that all three of these RRAs can be expressed in terms of Equation 3. In the one-and-done analysis, the optimal number of samples is chosen so as to maximize expected performance minus the time cost of generating samples. In the resource rational anchoring-and-adjustment model, the number of adjustments is chosen so as to maximize expected reward minus the opportunity cost of time. The UWS model maximizes performance subject to a hard constraint on the number of samples; however, this is just a special case of Equation 3 in the target article where the cost of computation is constant across all heuristics.
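The one-and-done-style optimization just described can be made concrete with a short sketch. The decision model below (a majority vote over samples that each favor the better of two options with probability p) and every parameter value are our own illustrative assumptions, not the published model; the sketch only shows the shape of the tradeoff in which expected performance minus a per-sample time cost is maximized over the number of samples.

```python
# Hedged sketch of a one-and-done-style analysis: choose the number of
# samples k that maximizes expected decision utility minus a per-sample
# time cost. All parameter values are illustrative assumptions.
from math import comb

def p_correct(k, p=0.6):
    """Probability that a majority vote over k samples picks the better of
    two options, when each sample independently favors it with probability p.
    Ties (possible for even k) are broken by a fair coin flip."""
    if k == 0:
        return 0.5
    total = 0.0
    for i in range(k + 1):
        prob = comb(k, i) * p**i * (1 - p)**(k - i)
        if 2 * i > k:
            total += prob
        elif 2 * i == k:
            total += 0.5 * prob
    return total

def optimal_k(utility=1.0, cost_per_sample=0.02, max_k=100):
    """Resource rational number of samples: argmax of utility * accuracy
    minus the time cost of sampling."""
    return max(range(max_k + 1),
               key=lambda k: utility * p_correct(k) - cost_per_sample * k)

print(optimal_k(cost_per_sample=0.02))   # a small k is optimal (3 here)
print(optimal_k(cost_per_sample=1e-6))   # as the cost vanishes, k grows
```

Under these toy assumptions, a non-trivial time cost makes a handful of samples optimal, while a vanishing cost pushes the optimum toward exhaustive sampling, which is the qualitative pattern the one-and-done analysis exploits.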
We strongly agree with Lewis and Howes that RRA should be augmented with a principled theory guiding the modeler's assumptions about the utility function. Lewis and Howes argue that the utility function should reflect the agent's internal state (as in Equation 3) and view this as being inconsistent with the standard formulation of bounded optimality in Equation 2. To resolve this apparent inconsistency, we would like to clarify that although the utility function in Equation 2 scores the agent's entire life, the utility function in Equation 3 only scores its performance in making a single decision or judgment. One critical difference between these two settings is that the agent's belief state becomes worthless when the agent dies whereas the intermediate belief state following a single decision or judgment is valuable because it can inform future decisions. In our formulation, the value of the agent's belief states is grounded in the expected improvement in the value of world states brought about by its impact on future decisions. Thus, far from being unrelated or inconsistent, Equation 3 is a mathematical consequence of Equation 2. This reconciles Lewis and Howes's intuition that the agent's utility should depend on the agent's internal state with the original formulation of bounded optimality in Equation 2. Furthermore, we agree with Lewis and Howes that what we call the agent's belief state b in Equation 3 should be taken to include other aspects of the agent's internal state beyond its beliefs. Concretely, it should include all aspects of the agent's internal state that might impact its future decisions.
R4. Proposed computational architectures
RRA does not make a commitment to a particular computational architecture: it specifies the terms of the tradeoff between utility and computational costs, but not the kinds of computations or the way that those costs are denominated. Several commentaries proposed specific computational architectures, including sampling (Sanborn et al.), quantum computation (Atmanspacher, Basieva, Busemeyer, Khrennikov, Pothos, Shiffrin, and Wang [Atmanspacher et al.]; Moreira, Fell, Dehdashti, Bruza, and Wichert [Moreira et al.]), and rule-based systems (Dimov).
The sampling approach advocated by Sanborn et al. is one to which we are very sympathetic, and it has been featured in many of our own resource rational models (e.g., Lieder et al. 2018a; 2018b). As they point out, the sampling approach is conducive to RRA. Since sampling is typically carried out sequentially, the cost of computation can be naturally formalized in terms of the opportunity cost of the time spent sampling. In addition to the reasons highlighted by Sanborn et al., sampling also gains psychological plausibility from the numerous natural psychological mechanisms that can instantiate it, including attending to the perceptual properties of an object (Gold & Shadlen 2007; Krajbich et al. 2012), retrieving experiences from memory in order to make a decision (Shadlen & Shohamy 2016), and mentally simulating the outcome of an interaction between physical objects (Battaglia et al. 2013).
Quantum computation provides an interesting alternative. As pointed out by Atmanspacher et al. and Moreira et al., quantum probability takes a different approach to efficiently using resources, focusing on capturing a wide range of probabilistic outcomes without a significant increase in the representational resources required. However, we see this as an alternative to traditional mechanisms of probabilistic computation rather than an alternative to resource rationality itself. It is still possible to formulate resource rational models in the quantum framework. As Sanborn et al. point out, the relevant computational costs can be representational rather than algorithmic. Alternatively, we might imagine formulating resource rational problems of quantum computation, where the goal is to achieve the best possible result under a constraint on the number of quantum operations that can be performed (or, equivalently, the size of a quantum circuit). As hinted at by Atmanspacher et al., the adoption of quantum probability may itself be viewed as a solution to a problem of resource rationality: given the computational constraint that all computations need to be represented as operations on a vector space, quantum probability emerges as the appropriate way to perform probabilistic inference.
As Dimov points out and we discussed above, the identification of the computational architecture and corresponding costs is a challenging aspect of RRA. Dimov sees the solution as coming from the adoption of a universal cognitive architecture, reviving one of the classic goals of cognitive science. Historically, these cognitive architectures have focused on rule-based formalisms such as production systems to describe the generative capacity of human behavior, using chronometric analysis to link each of those computations with the time it takes a human being to execute it. We agree that given an architecture of this kind, RRA is particularly well-defined. The work of Lewis and Howes (Lewis et al. 2014) provides some compelling examples of the value of this approach. We agree with Dimov that the refinement of a unified theory of the mind should be pursued in parallel with RRA, with the two approaches being mutually informative.
R5. Considering constraints broadens the scope of rational models
We were encouraged by the wide range of applications that commentators envisaged for RRA. These applications include motor control (Dounskaia & Shimansky), psycholinguistics (Dingemanse), cognitive development (Bejjanki & Aslin; Persaud, Bass, Colantonio, Macias, and Bonawitz [Persaud et al.]), mental health (Russek, Moran, McNamee, Reiter, Liu, Dolan, and Huys [Russek et al.]), and even history (Cowles & Kreiner). Although we had not anticipated all the creative applications identified by the commentators, we did anticipate that integrating resource constraints into rational analysis would expand the scope of phenomena that it can explain. Accordingly, some of the application areas – specifically, cognitive development and mental health – did not come as a surprise, although we admit that history managed to sneak up on us.
In cognitive science, traditional rational models have up to three degrees of freedom: the prior, the data, and the utility function. But it has been thoroughly demonstrated that even tweaking the utility function is not enough to explain the variation in a single person's preferences within minutes (e.g., Allais, 1953; Kahneman & Tversky, 1979). Similarly, because the outcome of Bayesian inference is a direct result of what goes into it (the posterior probability of a hypothesis is directly proportional to the product of its prior probability and the likelihood of the observed data), all inter-individual differences in beliefs and inferences would have to be explained as a consequence of variation in the agents' priors or in the data to which they were exposed. But in research areas where the goal is to explain variation, either across human lifetimes or as a result of mental illness, variation in priors, data, and utility functions may not be enough to capture these phenomena.
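To make the limited degrees of freedom explicit, the Bayesian updating just described can be written schematically (the notation is ours, not any commentator's) as

```latex
% Posterior over hypotheses h after observing data d:
P(h \mid d) \;\propto\; \underbrace{P(h)}_{\text{prior}} \,
                        \underbrace{P(d \mid h)}_{\text{likelihood}}
```

Because the posterior is fully determined by the prior and the data, a classical rational model can attribute differences in inference across individuals only to differences in $P(h)$ or in $d$.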
RRA adds two additional degrees of freedom: the computational resources available to an agent and their corresponding costs. These extra degrees of freedom are exactly the kind of thing that can be expected to vary across the lifespan or be influenced by mental illness. As a child grows up, the repertoire of computations available to them will expand, and the computational costs of particular operations may decrease as a consequence of practice or maturation. In addition, as Persaud et al. discuss, the goals of the child might change over time, and as pointed out by Bejjanki and Aslin, developmental resource constraints may themselves support more effective learning. In cases of mental illness, the availability of cognitive resources may be diminished and the computational costs of engaging in certain kinds of cognition may increase. Russek et al. highlight some concrete examples of cases where exactly such changes are known to happen in specific forms of mental illness.
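Schematically, the role of these two extra degrees of freedom can be seen in a standard resource rational objective, in which a strategy $h$ is selected from a repertoire $\mathcal{H}$ of feasible computations by trading off expected utility against expected computational cost (a sketch in our own notation, not a formula drawn from any particular commentary):

```latex
h^{\star} \;=\; \arg\max_{h \,\in\, \mathcal{H}}
  \Big( \mathbb{E}\!\left[ U\!\left(\mathrm{result}(h)\right) \right]
        \;-\; \mathbb{E}\!\left[ \mathrm{cost}(h) \right] \Big)
```

Development can expand the repertoire $\mathcal{H}$ and lower $\mathrm{cost}(h)$ through practice or maturation, whereas mental illness can shrink $\mathcal{H}$ or raise $\mathrm{cost}(h)$; either change alters the resource rational strategy $h^{\star}$ even when priors, data, and utilities are held fixed.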
We are also sympathetic to Russek et al.'s suggestion that other forms of mental illness, including mood disorders, might be best understood as systematic deviations from resource rationality. This suggests that uncovering deviations from resource rationality could be as useful for elucidating the cognitive distortions and aberrant processes that constitute specific mental illnesses as demonstrating deviations from classical notions of rationality has been for advancing our understanding of the heuristics and biases of healthy people (Tversky & Kahneman, 1974). Once these systematic deviations from resource rational strategies have been identified, it will be especially interesting to understand how they were learned, how they can be unlearned, and how people can learn to think, learn, and decide according to more effective, near-resource rational strategies instead. To address these questions, we are currently developing models of metacognitive reinforcement learning (Krueger et al., 2017; Jain et al., under review).
Metacognitive learning of more resource rational cognitive strategies might be one of the primary mechanisms of effective psychotherapy, and we agree with Russek et al. that cognitive behavior therapy can be understood as teaching people more resource rational cognitive strategies. This is congruent with the view that the goal of cognitive therapy is to make people more rational (Baron et al., 1990). Taking this perspective one step further, we would like to suggest that resource rationality could even serve as a prescriptive principle to guide the development of more effective therapies. That is, RRA could be used to create a curriculum of adaptive cognitive strategies for healthy and resilient thinking, learning, and decision-making. Our current work on automatic strategy discovery (Callaway et al., 2018a; Gul et al., 2018; Lieder & Griffiths, 2017) and on cognitive tutors that teach people resource rational cognitive strategies (Lieder et al., 2019a) is a step in this direction.
We were intrigued by the suggestion from Cowles and Kreiner that resource rationality might have an equally valuable role to play in understanding history, but in retrospect, this application draws on the same principle as the applications to cognitive development and mental illness. In a historical context, the variation across individuals does not occur within a single human life, or in a snapshot of a society, but across societies over time. Again, the extra degree of freedom provided by considering the cognitive resources available to agents provides a way to engage with this variation. As Cowles and Kreiner point out, this provides the capacity to understand the decisions of historical agents and how they might differ from our contemporary intuitions, because their cognitive tools were different and because their environments taxed their cognitive resources in a different way. We anticipate that a similarly fruitful analysis could be applied across contemporary cultures, extending the scope of resource rational models through space as well as time.
R6. Beyond individual cognition
Several commentaries observed that our focus in introducing resource rationality and in surveying related literature was on the cognitive states of individuals. This focus is consistent with the historical emphasis of cognitive psychology, from which many of the studies we summarized were drawn, but we do not view it as a fundamental limitation of the framework. In particular, the directions highlighted in the commentaries – recognizing that minds are embodied, that cognition interacts with emotion, and that individuals are part of societies – represent interesting frontiers for research on resource rationality.
Spurrett highlights the role that the physical body plays in specifying utility functions and imposing computational costs. We are sympathetic to this argument. One of the merits of resource rationality is that it provides a framework in which to explore the tradeoffs between these utilities and costs. While only implicit in the target article, we also view physical embodiment as playing an important role in defining the kinds of computational problems that human beings have to solve. For example, one significant resource constraint is that visual information can be resolved with high fidelity in only a small portion of the retina, turning the control of eye movements during decision-making into a problem that can be analyzed from the perspective of resource rationality. We view the problem of appropriately integrating biological constraints into resource rational models as an interesting direction for future research. Indeed, Dounskaia and Shimansky provide a nice example of such an approach.
Kauffman is concerned with the place of emotion in RRA. Rationality and emotion have long been held up as being at odds with one another. But there is also a tradition of pointing out the role that emotional responses can play in producing adaptive behavior, particularly in the context of interpersonal interaction (e.g., Frank, 1988). Resource rationality provides a path to the resolution of this apparent contradiction, because the antagonism between rationality and emotion does not carry over into the resource rational framework. To the contrary, the computational efficiency of emotional mechanisms might make them resource rational in time-critical situations, and in some situations emotional mechanisms may be resource rational because they lead to better decisions than deliberation. Furthermore, emotions such as anxiety can guide the efficient allocation of cognitive resources to important problems, such as planning how to survive (Gagne et al., 2018).
We agree with Russek et al. that emotions can be understood in terms of resource-efficient computational mechanisms, but we would like to clarify that being resource rational does not require solving the meta-decision-making problem optimally – instead, a resource rational agent would select computations by a boundedly optimal heuristic. Furthermore, RRA can also illuminate how emotions and cognition interact (Krueger & Griffiths, 2018). An extensive body of work has emphasized that there are at least three distinct decision systems: an instinctive Pavlovian system that is responsible for emotional biases, a deliberative system that supports effective goal pursuit through flexible reasoning, and a model-free reinforcement learning system that leads to inflexible habits (van der Meer et al., 2012). Exactly how those systems interact is an open problem that RRA could be used to solve. One proposal is that the model-based system generates simulated data – through a kind of introspection – that is then used to refine model-free learning (Gershman et al., 2014). Another proposal is that deliberation is used to refine the valuation of past experiences in the light of new information and to update the agent's habits accordingly (Krueger & Griffiths, 2018). This latter perspective instantiates the idea that our emotions teach us to become more resource rational by allowing our regrets to improve our computationally efficient, habitual response tendencies.
Both Ross and Dingemanse point out that the formulation of RRA in the target article assumes an agent facing a problem that is generated by nature, while many of the problems that human beings have to solve require interacting with other agents. This creates a situation where the strategies adopted by one agent influence the environment experienced by another – a situation that is very familiar to any student of game theory. We do not foresee any fundamental obstacles to extending resource rationality to such situations. Indeed, we anticipate that this approach can be used to define models like those currently used in behavioral game theory (e.g., Camerer & Hua Ho, 1999), but derived from the principle of optimization that underlies resource rationality. First steps in this direction have been taken by Halpern and Pass (2015). Beyond game theory, we agree with Dingemanse that language use represents a particularly rich territory for exploring this approach, including examining the extent to which speakers modify their linguistic choices based on assumptions about the cognitive load experienced by listeners.
R7. Summary and conclusion
RRA is a new modeling paradigm that integrates the top-down approach that starts from the function of cognitive systems with the bottom-up approach that starts from insights into the mind's cognitive architecture and its constraints. Combining the strengths of both approaches makes RRA a promising methodology for reverse-engineering the mechanisms and representations of human cognition. RRA is an important step toward realizing David Marr's vision that theories formulated at different levels of analysis can inform and mutually constrain each other. RRA contributes to this vision by bringing insights about the function of cognitive systems (computational level) and empirical findings about the system's constraints (implementational level) to bear on models of cognitive mechanisms (algorithmic level of analysis). RRA accomplishes this in a principled way that uniquely specifies what the cognitive mechanism should be according to its function and the constraints of the available cognitive architecture. As Dimov noted, this addresses the fundamental non-identifiability problems that have long held back progress on uncovering cognitive architectures and cognitive mechanisms. The commentaries revealed that RRA is even more broadly applicable than our target article suggested. We look forward to seeing RRA facilitate progress in fields ranging from cognitive development to history. We are especially excited to see RRA applied to understanding mental illness and improving people's mental health.
We appreciated the commentators' suggestions for future methodological developments, including the establishment of limits on the constraints that can be postulated by RRA and the integration of insights from extant cognitive architectures and evolutionary theory. RRA is a brand-new modeling paradigm that will undoubtedly mature and develop, and the dialogue started by our target article will likely accelerate this process. The commentaries also gave us the opportunity to clarify the methodological nature of the teleological and optimality assumptions of RRA. We hope that this has made it clear that we are not arguing that the human mind is (resource) rational but offering a methodology for understanding the human mind's somewhat suboptimal cognitive systems in terms of their function, mechanisms, and representations.