Lieder and Griffiths's resource-rational analysis aims to provide researchers with a methodological device to model many different kinds of cognitive phenomena in a precise way – similarly to Reinforcement Learning or Bayesian modelling (Colombo & Hartmann 2017; Colombo & Seriès 2012). Although Lieder and Griffiths explain that “resource rationality is not a fully fleshed out theory of cognition, designed as a new standard of normativity against which human judgements can be assessed” (sect. 3, para. 7), they also point out that resource-rationality can be used as “a more realistic normative standard” to revisit the debate about the scope and limits of human rationality (sect. 5.4).
Understood as a normative standard, the notion of resource-rationality encapsulated in Lieder and Griffiths's Equation 4 says that rational agents ought to act so as to maximise some sort of expected utility, taking into account the costs of computation, time pressures, and limitations in the processing of relevant information available in the environment. To contribute productively to the debate about human rationality, researchers who endorse resource-rationality as a normative standard should answer two sets of questions. First, in virtue of what does the resource-rationality standard have normative force? Why, and in what sense, is it a requirement of rationality? Second, given this standard, what does it take for an agent to make an error, to be biased or irrational?
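The shape of this cost-adjusted criterion can be made concrete with a schematic sketch: choose the strategy whose expected utility, net of its computational cost, is highest. The heuristic names and numbers below are illustrative placeholders of my own, not Lieder and Griffiths's actual model or the precise content of their Equation 4.

```python
# Schematic sketch of resource-rational strategy selection: pick the
# strategy with the highest expected utility minus computational cost.
# All names and numbers are illustrative placeholders.

heuristics = {
    # name: (expected utility of the resulting decision, computational cost)
    "exhaustive_search": (10.0, 7.0),
    "satisficing":       (8.0,  1.5),
    "random_guess":      (5.0,  0.1),
}

def resource_rational_value(expected_utility, computation_cost):
    """Expected utility of acting on the strategy's output,
    minus the expected cost of running the strategy."""
    return expected_utility - computation_cost

best = max(heuristics, key=lambda h: resource_rational_value(*heuristics[h]))
print(best)  # → satisficing
```

Note the point the sketch makes: on raw expected utility alone, `exhaustive_search` would win; once computational costs enter the objective, the cheaper `satisficing` strategy is the resource-rational optimum.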
One potentially helpful distinction with which to begin addressing these questions is between constructivist and ecological models of rationality (Colombo 2019; Smith 2008, sect. 5). Constructivist models assume that rational agents comply with general-purpose norms for successfully solving well-defined problems. Ecological models assume that rational agents are adapted to specific types of environments, where their chances of survival and their rate of reproduction are higher compared to other types of environments. Where constructivist models allow researchers to evaluate behaviour against norms, ecological models allow researchers to evaluate behaviour against organisms’ objective goals of survival and reproduction.
If resource-rationality is to be understood as a constructivist normative standard, then one might try to ground its normative force in some argument similar to those typically cited in support of constructivist models like expected utility maximisation (cf. Briggs 2017; Hájek 2008, sect. 2; Colombo, Elkin & Hartmann, forthcoming, sect. 3.2). There are, for example, arguments based on representation theorems, which say that if all your preferences satisfy certain “rationality” constraints, then there is some representation of you as an expected utility maximiser. There are long-run arguments, according to which if you always maximise expected utility, then, in the long run, you are likely to maximise actual utility. There are “Dutch book” arguments, which say that if your beliefs are probabilistically incoherent, there exists a set of bets you consider fair, but that guarantee your loss. And there are arguments based on accuracy considerations, which establish that if your beliefs are probabilistically incoherent, there is some probability function representing a different set of beliefs that is more accurate than your beliefs in every possible situation. There are several objections against these arguments; and in any case, it is not obvious that these arguments carry over to resource-rationality.
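The Dutch book argument admits a simple worked illustration (my toy example, not one from the target article): suppose an agent's degrees of belief in a proposition A and in its negation sum to more than 1, and the agent regards as fair any bet on a proposition priced at its degree of belief. A bookie selling the agent both bets then profits in every possible state of the world.

```python
# Toy Dutch book: the agent believes P(A) = 0.6 and P(not-A) = 0.6,
# which is probabilistically incoherent (the credences sum to 1.2).
# A bet on a proposition pays `stake` if that proposition is true,
# and the agent deems fair a price equal to credence times stake.

p_A, p_not_A = 0.6, 0.6       # incoherent credences
stake = 1.0                   # payout of each bet if it wins

# The agent buys both bets at the prices it considers fair.
total_price = (p_A + p_not_A) * stake   # pays 1.2 in total

# Exactly one of A, not-A is true, so the payout is `stake`
# in every state; the net result is the same loss either way.
nets = []
for A_is_true in (True, False):
    payout_A = stake if A_is_true else 0.0
    payout_not_A = stake if not A_is_true else 0.0
    nets.append(payout_A + payout_not_A - total_price)

print(nets)  # a guaranteed loss of about 0.2 in both states
```

Whatever the truth about A, the agent collects 1.0 but has paid 1.2: the incoherent credences license a package of bets that loses with certainty, which is the sense in which probabilistic incoherence is held to be irrational.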
If resource-rationality is an ecological normative standard, then the challenge is to show that, in specific types of environments, specific behavioural strategies for maximising the sort of utility encapsulated in Equation 4 promote an organism's goals of survival and reproduction. In particular, for an ecological understanding of resource-rationality to have normative teeth, researchers should show that certain strategies, which possess some epistemically good feature such as reliability, accuracy, or coherence, or which promote an organism's happiness, well-being or capabilities, approximate the resource-rationality maximum more closely than alternative strategies in many different types of realistic situations (e.g., Cooper 2001; Gintis 2009). And researchers should also show that humans employing those strategies are more likely to survive and reproduce. On pain of circularity, one cannot ground the normative force of resource-rationality by just “[p]erforming cost-benefit analyses similar to those defined in Equation 4 to determine to which extent evolution has succeeded to design resource-rational neural hardware” (sect. 5.4, para. 3).
Whether we understand the standard of resource-rationality as a constructivist norm or as an ecological goal (or both), it is not clear when violations of this standard constitute errors, or cognitive biases. There are several different norms of epistemic and practical rationality; and there probably are different kinds of goals (or cost functions) agents (or their brains) may optimise (Marblestone et al. 2016). Considering this plurality, violations of resource-rationality do not provide us with sufficient grounds for diagnosing irrationality. Furthermore, resource-rational agents “might have to rely on heuristics for choosing heuristics to approximate the prescriptions” of resource-rationality in some situations (sect. 3, para. 6). Deviating from resource-rationality cannot count as an error or a cognitive bias in those situations, unless we have a proposal about how closely behaviour should approximate the resource-rational maximum to count as (ir)rational.
One peril of using resource-rationality as a normative standard for reconsidering the debate about human rationality is that it may reiterate fruitless rationality wars. In recent years, this debate has invited “rationality wars” characterised by “rhetorical flourishes” concealing substantial empirical agreement (Samuels, Stich & Bishop 2002, 241), ambiguous use of terms such as “optimality” and “rationality” (Rahnev & Denison 2018a, 49–50), and confusion concerning the nature and methodological role of modelling approaches such as Bayesian decision theory (cf. Bowers & Davis 2012a; 2012b). To avoid this peril, researchers who are going to appeal to resource-rationality as a normative standard of rationality and contribute to the debate about human rationality should be clear on what considerations ground the normative force of resource-rationality and when deviations from this standard count as irrational errors.
Acknowledgments
I am grateful to Dominik Klein for helpful conversations on (ir)rationality, and to the Alexander von Humboldt Foundation for financial support.