Advocates of the oversold view that human cognition is “optimal” are in the midst of a strategic retreat. If it no longer looks like human cognition is optimal, might it be “bounded optimal,” optimal relative to inherent limits on information, computational resources, cognitive capacity, or neuronal architecture? In the limit, the claim is meaningless: whatever the brain does is constrained by whatever the architecture is. But even a less vague version, centered around limits on memory, information, or computational power as an explanation for cognitive flaws, yields little traction.
Consider the laziness doctrine. The flagship class of theories in Lieder and Griffiths' paper is a large body of work on decision making. Many departures from normatively optimal decision-making can be explained on the supposition that finding the optimal action requires more mental effort than seems worthwhile. But a serious version of the bounded rationality view must presume that the tradeoff between effort and decision quality is made optimally. Any reader who thinks honestly about their own decision making will probably recognize occasions on which they have incurred large, foreseeable costs because they were too impatient; a bland claim that people manage these informational tradeoffs optimally is at odds with everyday reality.
Toward the end of the paper, Lieder and Griffiths raise the issue of “everyday observations of seemingly irrational beliefs and behaviors,” but then give the comforting explanation that those must be beliefs of no adaptive significance, like whether the world is flat, so the human is wise not to spend any cognitive effort on them. But that does not explain behaviors that foolishly risk one's life, such as drunk or careless driving, or the hundreds of people who have died taking selfies, misjudging fatal risks in the pursuit of a few more followers on Instagram. The trouble is, Lieder and Griffiths' approach sounds nice but predicts very little of the texture of actual human decision making.
Lieder and Griffiths also cite numerous studies claiming that human memory is bounded-optimal in some respects. In fact, as one of us (Marcus Reference Marcus2008) has argued at length, memory is a very clear case of a suboptimal system. Memory lapses of salient and important realities are notoriously common and often costly; parachutists have been known to forget to pull their ripcords, and airline pilots have checklists precisely because human memory cannot be trusted in life-or-death situations. Meanwhile, the existence of mnemonic tricks like the method of loci shows that the mental limitations of ordinary humans are not inevitable limitations of a neuronal architecture, because, with training and practice, ordinary limits can be substantially overcome. That said, our default memory systems just aren't that good. And the notion of bounded optimality casts virtually no light on what is and is not easy. It tells us little about why, say, we can recognize hundreds of faces of people from high school whom we haven't seen for decades, yet fail to remember a 10-digit passport number or where we parked three hours earlier in a shopping mall parking lot.
An addiction to the presumption that all must be optimal, if only the right resource limitation can be found such that erroneous behavior can be excused, leads to all kinds of weird reasoning. Lieder and Griffiths write, for example, that “Rational models … have provided surprisingly good explanations of cognitive biases … includ[ing] the confirmation bias,” and cite Oaksford and Chater (Reference Oaksford and Chater1994) and Austerweil and Griffiths (Reference Austerweil and Griffiths2011) in support. To get there, Austerweil and Griffiths narrowly define the confirmation bias as “the tendency to test outcomes that are predicted by our current theory” and demonstrate that this is an optimal strategy if one is testing deterministic causal laws; Oaksford and Chater's analysis is similar. But the usual meaning of confirmation bias is much broader, for example (Plous Reference Plous1993): “the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses.” We all know, and it has often been systematically demonstrated, that someone who believes that the moon landings were faked (say) is likely to attend to, emphasize, and remember any and all evidence that supports this theory and to ignore, discount, and forget all contrary evidence; and no kind of argumentation will convince us that this is rational. These two studies are hardly enough to address the broader sense. In fact, the target paper by Lieder and Griffiths is itself an instance of confirmation bias in this broader sense: it is an enumeration of cases that might possibly be construed as limitation-induced cognitive bias, without anything like a careful analysis of cases that might fall outside that scope.