Resource rationality takes seemingly irrational behaviors and reframes them as rational or optimal given other constraints on agents. For example, anchoring-and-adjustment and overestimating extreme events turn out to be “rational” after all, by reflecting the rational allocation of cognitive resources. Thus, even for such classically irrational phenomena, “the resulting train of thought eventually converges to the Bayes-optimal inference” (p. 38).
In such cases, reasoners fall short of perfectly rational updating, and it is illuminating that resource- and performance-based constraints can accommodate such suboptimal reasoning. But what about cases where we behave not merely suboptimally, but rather against the norms of Bayesian inference? Here, we explore cases where the mind is moved by prior knowledge in precisely the reverse direction of what a rational analysis would recommend. These cases are not merely suboptimal, but rather “anti-Bayesian,” for actively defying Bayesian norms of inference. We consider two such phenomena: belief polarization and sensory integration (Fig. 1). Can resource rationality handle them?
Figure 1. Examples of “anti-Bayesian” updating in the mind. (A) Under conditions of cognitive dissonance, acquiring – and affirming – evidence against one’s beliefs can cause those beliefs to strengthen (Batson, 1975), whereas Bayesian norms of inference recommend softening those beliefs. (B) In the size-weight illusion, one is shown two objects of different sizes but equal weights; when one lifts them up, the smaller one feels illusorily heavier than the larger one (Buckingham, 2014; Charpentier, 1891; Won et al., 2019). In other words, ambiguous sensory data about which of two objects is heavier is resolved “against” one’s prior expectations, rather than in favor of one’s priors as recommended by Bayesian norms of inference. Can resource rationality accommodate such paradigmatically “irrational” phenomena?
First, belief polarization: Receiving evidence contrary to your beliefs should soften those beliefs, even if ever-so-slightly. But, this isn't what actually happens when the beliefs in question are central to one's identity – in belief polarization, contrary or disconfirming evidence causes more extreme beliefs, not more moderate ones. A classic example was vividly documented by Festinger et al. (1956): Cult members who predict the world will end on some date – but who then see that date come and go with no cataclysm – end up strengthening their beliefs in the cult's tenets, not softening them. In other words, credible evidence against their worldview only makes them hold that worldview more strongly – directly defying Bayesian inference norms.
The same phenomenon can be found under laboratory conditions. For example, one study exposed people who believe that Jesus is the Son of God to a (fake) news article reporting that archeologists had unearthed carbon-dated letters from the New-Testament authors; the letters said the Bible was fraudulent and that its authors knew Jesus was not divinely born (Batson, 1975). Subjects who did not believe the article's content left their beliefs about Jesus unchanged; but, fascinatingly, subjects who did believe the article's content ended up strengthening their belief that Jesus was the Son of God. In other words, affirming new evidence against Jesus's divine birth (~P) caused stronger beliefs in Jesus's divine birth (P). Similar “backwards” updating is also observed for beliefs about nuclear safety (Plous, 1991), health (Liberman & Chaiken, 1992), and affirmative action and gun control (Taber & Lodge, 2006; see also Mandelbaum, 2019).
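To make the normative benchmark explicit, here is a minimal Bayesian sketch (purely illustrative; H and E are placeholder symbols for the focal belief and the affirmed contrary evidence, not quantities measured in Batson's study):

% Illustrative only: H stands for the focal belief, E for the contrary evidence the
% subject accepts, with 0 < P(H) < 1.
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% If E counts against H, so that P(E \mid H) < P(E \mid \neg H), then the denominator
% P(E) exceeds P(E \mid H), and therefore
\[
  P(H \mid E) \;<\; P(H).
\]
% Any accepted contrary evidence should thus lower confidence in H, however slightly;
% the observed strengthening of H runs in the opposite direction.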
Why does this happen? In fact, belief polarization is not so mysterious: It has been known for decades, and it is even a predictable consequence of dissonance theory – “the psychological immune system” (Gilbert et al., 1998) – applied to one's values. What is mysterious is why this should occur in a Bayesian mind – even one constrained by “resources.” Belief polarization is irrational not because people are insufficiently moved by evidence, but rather because people are moved in the direction opposite the one they should be. And, importantly, these patterns cannot be explained by biased attitudes toward the evidence's source. For example, Bayesian models of milder forms of belief polarization (e.g., Jern et al., 2014) suggest that subjects infer that contrary evidence must have come from unreliable sources (e.g., biased testimony); but this seems inapplicable to the above cases, where the sources are either nature itself (e.g., the world failing to end), or evidence the subject has actively accepted (e.g., news articles they endorsed).
Indeed, “anti-Bayesian” updating is widespread, occurring even in basic perceptual processes. When we have prior expectations about new and uncertain sensory data, rational norms of inference say we should interpret such data with respect for those priors; “people should leverage their prior knowledge about the statistics of the world to resolve perceptual uncertainty” (p. 40). But, sensory integration frequently occurs the opposite way. Consider the size-weight illusion, wherein subjects see two equally weighted objects – one large and one small – and then lift them both to feel their weight. Which feels heavier? We “should” resolve the ambiguous haptic evidence about which object is heavier in favor of our priors; but instead, the classic and much-replicated finding is that we experience the smaller object as heavier than the equally-weighted larger object (Buckingham, 2014; Charpentier, 1891). This too is “irrational” – not for falling short of Bayesian norms of inference, but for proceeding opposite to them, because we resolve the ambiguous sensory evidence – two equally weighted objects – against the larger-is-heavier prior, not in favor of it (Brayanov & Smith, 2010; Buckingham & Goodale, 2013). Indeed, this backwards pattern of updating is so strong that it can produce outcomes that are not merely odd or improbable, but even “impossible” (Won et al., 2019): If subjects are shown three boxes in a stack – Boxes A, B, and C – such that Box A is heavy (250 g) but Boxes B and C are light (30 g), then subjects who lift Box A alone and then Boxes A+B+C together report that Box A feels heavier than Boxes A+B+C – an “impossible” experience of weight (because a group could never weigh less than a member of that group).
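To see why this direction of resolution counts as “anti-Bayesian,” consider a minimal Gaussian cue-combination sketch (illustrative only, and not the specific model of Peters et al., 2016): a larger-is-heavier prior combined with identical haptic evidence should pull the larger object's perceived weight upward, never below the smaller object's.

% Illustrative assumptions: \mu_{\text{prior}} is the expected weight given the object's
% size (larger for the bigger object); w_{\text{haptic}} is the sensed weight (equal for
% both objects); the \sigma^{2} terms are the corresponding uncertainties.
\[
  \hat{w} \;=\; \frac{\sigma_{\text{haptic}}^{2}\,\mu_{\text{prior}} \;+\; \sigma_{\text{prior}}^{2}\,w_{\text{haptic}}}
                     {\sigma_{\text{haptic}}^{2} \;+\; \sigma_{\text{prior}}^{2}}
\]
% With w_{\text{haptic}} equal across objects and \mu_{\text{prior}}(\text{large}) > \mu_{\text{prior}}(\text{small}),
% this precision-weighted average implies \hat{w}(\text{large}) \geq \hat{w}(\text{small});
% the size-weight illusion delivers the opposite ordering.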
How can a “rational” account – even a resource-rational one – explain this? Lieder and Griffiths accommodate other sensory “repulsion” effects (Wei & Stocker, 2015, 2017), but that modeling work seems inapplicable to the size-weight illusion. And whereas the original size-weight illusion could perhaps have a tortuous Bayesian explanation (Peters et al., 2016), Won et al.'s modification seemingly cannot: First, it's unclear if previous models of simultaneous lifting apply to Won et al.'s temporally-extended case; but second, there is just no logical chain of reasoning that should end with A alone being heavier than A+B+C together.
More generally: What are the principles that lead to perverse “anti-Bayesian” updating? Perhaps resource rationality wasn't intended to cover all cases (in which case it is not an “Imperial Bayesian” theory; Mandelbaum, 2019). But, the problem isn't merely that there are counterexamples to resource rationality, but rather that these are predictable, law-like counterexamples that do not reflect performance constraints between interacting mental processes. Indeed, when it comes to these more entrenched patterns, even “resources” may not save rationality.