We endorse Cesario's call for more research into the complexities of “real-world” decisions and the comparative power of different causes of group disparities (Brownstein, Madva, & Gawronski, 2020; Cesario et al., 2010; Davidson & Kelly, 2020). Unfortunately, these reasonable suggestions are overshadowed by a barrage of non sequiturs, misdirected criticisms of methodology, and unsubstantiated claims about the assumptions and inferences of social psychologists. We leave the latter issue aside, except to express frustration that the purportedly ubiquitous “logic among social psychologists” is documented with a mere three citations (sect. 1, para. 1), while a later discussion of real-world group differences, for example, is supported with twenty-nine (sect. 3.2, para. 2).
Cesario's “Missing Forces Flaw” alleges that social psychologists dismiss potential causes of group disparities other than bias, such as gender differences in science, technology, engineering, and mathematics (STEM) abilities or neighborhood crime rates in the case of police shootings. Far from ignoring such causes, however, many social psychologists assume them. A commonplace in social psychology is that biases are symptoms or mirror-like reflections of social reality (e.g., Dasgupta, 2013; Forscher et al., 2019; Glaser, 2014; cf. Madva, 2016a, 2017; Payne, Vuletich, & Lundberg, 2017). It makes little sense for Cesario to claim that social psychologists fail to interpret “experimental categorical effects in light of other known forces on group outcomes” (sect. 3.2, para. 4) when social psychologists also argue that experimental categorical effects are reflections of other known forces on group outcomes. We happen to be skeptical of the social determinism implied by talk of “mirror-like reflections,” but examining this idea requires more research into the nature of categorical biases and the ways they interact with broader social context, not less.
Acknowledging the need for more research does not, thankfully, commit us to the dubious claim that existing lab studies “cannot” provide information about real-world decisions and group disparities. Cesario's all-or-nothing claims about the in-principle uninformativeness of lab studies obscure more difficult questions about how much researchers should update their beliefs about group disparities on the basis of different lab studies. Despite one passing reference to Bayes, Cesario never discusses what it can mean for x to “provide information about” or “be evidence of” y, or, crucially, the difference between deductive, absolutist reasoning and inductive, probabilistic reasoning. Thus, ironically, Cesario inductively infers from one set of limited-information lab studies that other limited-information lab studies are entirely uninformative about the “real world.” Instead of accusing social psychologists of drawing fallacious deductive conclusions, perhaps Cesario's criticisms could be reformulated to say that researchers are updating their beliefs sometimes more (when it comes to the explanatory power of bias) and sometimes less (when it comes to the explanatory power of other factors) than they should. But evaluating such claims about more fine-grained epistemic responses to the evolving evidence would require arguments and evidence Cesario hasn't provided.
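To make the graded alternative concrete, consider a schematic Bayesian gloss (ours, not Cesario's; the symbols are purely illustrative). Let B be the hypothesis that bias contributes to a given disparity and E a lab result. A researcher's credence should then shift according to

\[ P(B \mid E) = \frac{P(E \mid B)\,P(B)}{P(E)}, \]

where the size of the update is governed by how much more probable E is under B than under its negation, which in turn depends on how closely the study's conditions approximate the decision context of interest. On this picture, limited-information studies may warrant only modest updates, but modest is not zero.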
Cesario also commits a version of the fundamental attribution error he attributes to social psychologists. His view is that lab-based studies on bias ignore wider context. But other than a brief mention of “reward structure” (sect. 8, para. 7), one is left with the impression that social psychologists' fallacious inferences are the cause of the problem. Cesario ignores the myriad structural incentives and constraints – the context! – guiding research choices. There is, for example, evidence to suggest that the entirely warranted pressure to produce more replicable results has made social psychology less ecologically valid and more reliant on limited-information online studies (Sassenberg & Ditrich, 2019). An alternative version of the target article could have explored the tradeoffs and consequences accompanying these shifting structural incentives.
If correct, Cesario's arguments would impugn not just social psychology, but much of experimental science. In medical and pharmacological research, a decontextualized lab study testing how mice respond to a vaccine provides tentative evidence for how other mammals, like humans, will react outside the lab. Researchers adjust their prior beliefs accordingly, despite much “missing information,” and eventually take their research outside the lab. Social psychology lacks something analogous to phase 2 and phase 3 clinical trials presumably because it lacks the industry funding and government support that medical research enjoys, not because of its “logic.”
Cesario also accuses social psychologists of “methodological trickery” (sect. 5, para. 5) by treating probabilistic information people use in ordinary life as bias during experiments. But this is not trickery; it isn't even ecologically invalid. There are many real-world contexts in which people do and should suspend knowledge of probabilities, for both epistemic and moral reasons (Madva, 2016b). When serving on a jury, you are reasonably restricted from considering certain information (e.g., the perceived criminality of members of the defendant's social group). Or consider anonymous review in academic journals and “prestige bias.” Suppose the prestige of an author's university affiliation predicts, in some way, the quality of her submission. It would still be a separate and legitimate question whether the author's affiliation should be taken into consideration by journal editors.
Similarly, it isn't a flaw of an experimental paradigm – or “blank slate worldism” (sect. 5, para. 7) – if it tests whether participants can bracket some of what they know in order to discover something about their minds. Asking participants in a shooter task to ignore background base rates, such as the likelihood, given a person's race, that the person is holding a gun, is entirely appropriate for the epistemic aim of determining that bias exists and for learning how it operates under certain conditions. Learning this about bias is different from learning about what causes it to exist or what effects it has under other conditions, but all of this is worth knowing.
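For illustration (our schematic, not a protocol drawn from any particular study): suppose the task equalizes base rates across groups, so that

\[ P(\text{gun} \mid \text{Black target}) = P(\text{gun} \mid \text{White target}) = 0.5. \]

A participant making optimal use of the information actually present in the task then has no statistical reason to “shoot” unarmed targets from one group more often than the other, so any residual asymmetry in error rates or reaction times isolates a categorical bias rather than rational reliance on real-world base rates.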
Setting aside the target article's non sequiturs and melodrama, what remains are familiar challenges faced by any science striving to generalize and apply its results. A final irony, then, is that many of the improvements to the experimental and theoretical paradigms that Cesario discusses – simulator studies of shooting decisions, recognition that implicit biases aren't unconscious – are themselves the fruit of the kind of work done by social psychologists and their fellow travelers in adjacent disciplines. Continued progress on such challenges will very likely be the result of more, not less, of the relevant research.
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Conflict of interest
None.