
Resource-rationality as a normative standard of human rationality

Published online by Cambridge University Press:  11 March 2020

Matteo Colombo*
Affiliation:
Tilburg Center for Logic, Ethics and Philosophy of Science, Tilburg University, 5000LE Tilburg, The Netherlands. m.colombo@uvt.nlhttps://mteocolphi.wordpress.com/

Abstract

Lieder and Griffiths introduce resource-rational analysis as a methodological device for the empirical study of the mind. But they also suggest resource-rationality serves as a normative standard to reassess the limits and scope of human rationality. Although the methodological status of resource-rational analysis is convincing, its normative status is not.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2020

Lieder and Griffiths's resource-rational analysis aims to provide researchers with a methodological device to model many different kinds of cognitive phenomena in a precise way – similarly to reinforcement learning or Bayesian modelling (Colombo & Hartmann 2017; Colombo & Seriès 2012). Although Lieder and Griffiths explain that “resource rationality is not a fully fleshed out theory of cognition, designed as a new standard of normativity against which human judgements can be assessed” (sect. 3, para. 7), they also point out that resource-rationality can be used as “a more realistic normative standard” to revisit the debate about the scope and limits of human rationality (sect. 5.4).

Understood as a normative standard, the notion of resource-rationality encapsulated in Lieder and Griffiths's Equation 4 says that rational agents ought to act so as to maximise some sort of expected utility, taking into account the costs of computation, time pressures, and limitations in the processing of relevant information available in the environment. To contribute productively to the debate about human rationality, researchers who endorse resource-rationality as a normative standard should answer two sets of questions. First, in virtue of what does the resource-rationality standard have normative force? Why, and in what sense, is it a requirement of rationality? Second, given this standard, what does it take for an agent to make an error, to be biased or irrational?
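To fix ideas, the objective Equation 4 expresses can be rendered schematically (this rendering is my own simplification, not Lieder and Griffiths's exact notation) as the prescription to select a heuristic

h* ∈ arg max_{h ∈ H} E[u(outcome of h)] − E[cost(h)],

where H is the set of heuristics the agent's cognitive architecture can execute, u is a utility function over outcomes, and cost(h) comprises the time and computational resources h consumes. Put this way, the two sets of questions become: why is an agent who fails to select such an h* thereby criticisable; and how far below this maximum may an agent's performance fall before the shortfall counts as an error or a bias?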

One potentially helpful distinction for beginning to address these questions is between constructivist and ecological models of rationality (Colombo 2019; Smith 2008, sect. 5). Constructivist models assume that rational agents comply with general-purpose norms for successfully solving well-defined problems. Ecological models assume that rational agents are adapted to specific types of environments, where their chances of survival and their rate of reproduction are higher than in other types of environments. Where constructivist models allow researchers to evaluate behaviour against norms, ecological models allow researchers to evaluate behaviour against organisms’ objective goals of survival and reproduction.

If resource-rationality is to be understood as a constructivist normative standard, then one might try to ground its normative force in arguments similar to those typically cited in support of constructivist models like expected utility maximisation (cf. Briggs 2017; Hájek 2008, sect. 2; Colombo, Elkin & Hartmann forthcoming, sect. 3.2). There are, for example, arguments based on representation theorems, which say that if all your preferences satisfy certain “rationality” constraints, then there is some representation of you as an expected utility maximiser. There are long-run arguments, according to which, if you always maximise expected utility, then in the long run you are likely to maximise actual utility. There are “Dutch book” arguments, which say that if your beliefs are probabilistically incoherent, there exists a set of bets you consider fair but that jointly guarantee your loss. And there are arguments based on accuracy considerations, which establish that if your beliefs are probabilistically incoherent, there is some probability function representing a different set of beliefs that is more accurate than yours in every possible situation. There are several objections to each of these arguments; and, in any case, it is not obvious that they carry over to resource-rationality.
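To appreciate the kind of justification these arguments are meant to supply, consider a toy Dutch book case. Suppose my credences are incoherent: I assign degree of belief 0.6 to rain tomorrow and 0.6 to no rain tomorrow. I should then regard as fair a bet that costs €0.60 and pays €1 if it rains, and a bet that costs €0.60 and pays €1 if it does not. Buying both costs €1.20, yet exactly one bet pays off, returning €1 come what may: I am guaranteed to lose €0.20. Here the normative force of coherence is grounded in the sure loss that incoherence exposes me to. What is missing, so far, is an analogous sure-loss, long-run, or accuracy result for agents who violate the cost-sensitive maximisation prescribed by Equation 4.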

If resource-rationality is an ecological normative standard, then the challenge is to show that, in specific types of environments, specific behavioural strategies for maximising the sort of utility encapsulated in Equation 4 promote an organism's goals of survival and reproduction. In particular, for an ecological understanding of resource-rationality to have normative teeth, researchers should show that certain strategies, which possess some epistemically good feature such as reliability, accuracy, or coherence, or which promote an organism's happiness, well-being or capabilities, approximate the resource-rationality maximum more closely than alternative strategies in many different types of realistic situations (e.g., Cooper 2001; Gintis 2009). And researchers should also show that humans employing those strategies are more likely to survive and reproduce. On pain of circularity, one cannot ground the normative force of resource-rationality by just “[p]erforming cost-benefit analyses similar to those defined in Equation 4 to determine to which extent evolution has succeeded to design resource-rational neural hardware” (sect. 5.4, para. 3).
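Schematically, and merely by way of illustration, an ecological vindication would have to establish two claims for a wide class of realistic environments E. Writing V_E(s) for the Equation 4 value of strategy s in environment E: (i) strategies s that are reliable, accurate, or coherent come closer to the maximum of V_E than rival strategies s′ lacking these features; and (ii) organisms employing s enjoy greater expected survival and reproductive success in E than organisms employing s′. Only if both claims hold does approximating the maximum of Equation 4 inherit normative force from organisms’ objective goals, rather than presupposing that building resource-rational hardware is what evolution was selecting for.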

Whether we understand the standard of resource-rationality as a constructivist norm or as an ecological goal (or both), it is not clear when violations of this standard constitute errors, or cognitive biases. There are several different norms of epistemic and practical rationality; and there probably are different kinds of goals (or cost functions) agents (or their brains) may optimise (Marblestone et al. 2016). Considering this plurality, violations of resource-rationality do not provide us with sufficient grounds for diagnosing irrationality. Furthermore, resource-rational agents “might have to rely on heuristics for choosing heuristics to approximate the prescriptions” of resource-rationality in some situations (sect. 3, para. 6). Deviating from resource-rationality cannot count as an error or a cognitive bias in those situations, unless we have a proposal about how closely behaviour should approximate the resource-rational maximum to count as (ir)rational.

One peril of using resource-rationality as a normative standard for reconsidering the debate about human rationality is that it may reignite fruitless “rationality wars.” In recent years, this debate has been characterised by “rhetorical flourishes” concealing substantial empirical agreement (Samuels, Stich & Bishop 2002, 241), by ambiguous uses of terms such as “optimality” and “rationality” (Rahnev & Denison 2018a, 49–50), and by confusion concerning the nature and methodological role of modelling approaches such as Bayesian decision theory (cf. Bowers & Davis 2012a; 2012b). To avoid this peril, researchers who appeal to resource-rationality as a normative standard in the debate about human rationality should be clear about what considerations ground its normative force and about when deviations from it count as irrational errors.

Acknowledgments

I am grateful to Dominik Klein for helpful conversations on (ir)rationality, and to the Alexander von Humboldt Foundation for financial support.

References

Bowers, J. S. & Davis, C. J. (2012a) Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin 138:389–414.
Bowers, J. S. & Davis, C. J. (2012b) Is that what Bayesians believe? Reply to Griffiths, Chater, Norris, and Pouget. Psychological Bulletin 138:423–26.
Briggs, R. A. (2017) Normative theories of rational choice: Expected utility. In: The Stanford Encyclopedia of Philosophy, ed. Zalta, E. N. Metaphysics Research Lab, Stanford University.
Colombo, M. (2019) Learning and reasoning. In: The Routledge handbook of the computational mind, ed. Sprevak, M. & Colombo, M., pp. 381–96. Routledge.
Colombo, M., Elkin, L. & Hartmann, S. (forthcoming) Being realist about Bayes, and the predictive processing theory of mind. The British Journal for the Philosophy of Science (first online 3 August 2018). Available at: https://doi.org/10.1093/bjps/axy059.
Colombo, M. & Hartmann, S. (2017) Bayesian cognitive science, unification, and explanation. The British Journal for the Philosophy of Science 68:451–84.
Colombo, M. & Seriès, P. (2012) Bayes in the brain. On Bayesian modelling in neuroscience. The British Journal for the Philosophy of Science 63:697–723.
Cooper, W. S. (2001) The evolution of reason. Cambridge University Press.
Gintis, H. (2009) The bounds of reason. Princeton University Press.
Griffiths, T. L., Chater, N., Norris, D. & Pouget, A. (2012) How the Bayesians got their beliefs (and what those beliefs actually are): Comments on Bowers and Davis. Psychological Bulletin 138:415–22.
Hájek, A. (2008) Arguments for – or against – probabilism? The British Journal for the Philosophy of Science 59:793–819.
Marblestone, A. H., Wayne, G. & Kording, K. P. (2016) Toward an integration of deep learning and neuroscience. Frontiers in Computational Neuroscience 10:94. doi:10.3389/fncom.2016.00094.
Rahnev, D. & Denison, R. N. (2018a) Suboptimality in perceptual decision making. Behavioral and Brain Sciences 41:e223, 1–66. doi:10.1017/S0140525X18000936.
Samuels, R., Stich, S. & Bishop, M. (2002) Ending the rationality wars: How to make disputes about human rationality disappear. In: Common sense, reasoning and rationality, ed. Elio, R., pp. 236–68. Oxford University Press.
Smith, V. L. (2008) Rationality in economics: Constructivist and ecological forms. Cambridge University Press.