In 1946, Lewin wrote about the importance of conducting “action research” that could improve intergroup relations. Lewin and his contemporaries recognized that to do action research well, psychologists could not work alone. To do so would limit their ability to answer three critical questions regarding the phenomenon under study: “(1) What is the present situation? (2) What are the dangers? (3) And most important of all, what shall we do?” (Lewin, 1946, p. 34). They learned that rigorous and relevant social psychological research requires collaborating not only with scientists in other disciplines to understand the full range of forces acting upon a person in a social system, but also with community partners, governments, and other local stakeholders who have direct access to information and insights about how those forces operate in the specific context at hand (IJzerman et al., 2020). Indeed, a growing consensus across disciplines recognizes the value of a collaborative, multidisciplinary, and inclusive approach to science (Albornoz, Posada, Okune, Hillyer, & Chan, 2017; Disis & Slattery, 2010; Ledgerwood et al., 2021; Murphy et al., 2020).
The importance of a collaborative approach was well known in the early days of psychology but has been neglected in the modern era (Cialdini, 2009). Neglecting the true powers of the situation, that is, the cultural, economic, historical, political, and sociological forces that affect the mind (including the minds of psychologists), limits the rigor and relevance of the discipline's research and hampers psychologists' ability to understand the conditions under which our work is or is not relevant to social issues.
In his target article, Cesario discusses challenges he perceives in social psychological experiments on bias, and concludes that we should abandon such experiments. While we agree that many experiments have flaws, our view is that Cesario's own critique suffers from three flaws that render his conclusion premature (Table 1). We further suggest that these flaws could have been avoided by collaborating with multidisciplinary experts or even experts in other areas of psychology.
Table 1. Three common flaws in solo science illustrated by the target article
The first flaw is the biased search flaw: when people's expectations lead them to consider an incomplete set of possibilities or to search through available information in a manner shaped by those expectations (Cameron & Trope, 2004). This flaw is costly because it leads to mistaken conclusions based on an incomplete survey of possible alternatives. For example, the target article correctly notes that effect sizes depend on the paradigm used to study them (Kennedy, Simpson, & Gelman, 2019; McShane & Böckenholt, 2014). However, it discusses only the possibility that effect sizes observed in the lab would diminish in the world, and omits the possibility that they would be magnified. After all, in the real world, effects of discrimination compound over time (Krieger & Sidney, 1996; Mays, Cochran, & Barnes, 2007); small effects can become large when compounded across many decisions (Funder & Ozer, 2019). Similarly, although lab studies typically manipulate only a single dimension of bias, in the world, dimensions of bias can intersect to produce compounded or unique effects (Berdahl & Moore, 2006; Remedios & Sanchez, 2018; Settles & Buchanan, 2014). Moreover, research suggests that biases can be magnified when people have access to rich information (as in the real world) that can be marshaled to elaborate and rationalize initial expectations (Darley & Gross, 1983; Taber & Lodge, 2006).
The second flaw is the beginner's bubble flaw: when people know a little about a topic but overestimate how well they understand it (Sanchez & Dunning, 2018). This flaw is costly because it leads scholars to misapply or miss insights developed in other areas. For example, the target article relies heavily on the idea that in the real world, people use information that “may be probabilistically accurate in everyday life” (sect. 5, para. 7) and that using demographic information (e.g., race) to fill in the blanks when full information is unavailable is rational in a Bayesian sense and therefore unbiased. This vague and imprecise assertion muddies waters that have already been clarified at length in adjacent literatures, including in-depth discussions by cognitive modelers of the limits of Bayesian theorizing (Bowers & Davis, 2012; Jones & Love, 2011) and the clear distinction between truth and bias developed in social psychological models of judgment (West & Kenny, 2011). Even advocates of Bayesian cognitive models do not claim that a behavior is rational or justifiable simply by virtue of being Bayesian (Griffiths, Chater, Norris, & Pouget, 2012; Tauber, Navarro, Perfors, & Steyvers, 2017). A prior is not the same thing as a base rate, nor is it the same thing as truth (Welsh & Navarro, 2012). Just because a belief can sometimes lead to correct decisions does not mean it is accurate or optimal to use that belief for all decisions.
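The gap between Bayesian coherence and unbiasedness can be made concrete with a toy simulation (our own illustration; the group prior means, noise level, and skill value are assumed numbers, not taken from any cited study). A textbook normal-normal Bayesian observer who shrinks noisy signals toward group priors produces systematically different estimates for two identical individuals who differ only in group membership:

```python
import random
import statistics

random.seed(0)

def posterior_mean(signal, prior_mean, prior_var=1.0, noise_var=1.0):
    # Standard normal-normal conjugate update: the posterior mean is a
    # precision-weighted average of the observed signal and the group prior.
    w = prior_var / (prior_var + noise_var)
    return w * signal + (1 - w) * prior_mean

# Two hypothetical groups with stereotype-like prior means (assumed values).
PRIOR_A, PRIOR_B = 0.0, -1.0

# Two individuals with IDENTICAL true skill, observed through identical
# noisy signals (averaged over many draws to remove sampling noise).
true_skill = 0.5
signals = [true_skill + random.gauss(0, 1.0) for _ in range(10_000)]

est_A = statistics.mean(posterior_mean(s, PRIOR_A) for s in signals)
est_B = statistics.mean(posterior_mean(s, PRIOR_B) for s in signals)

# The Bayesian-coherent estimates differ by group even though the two
# individuals are identical: rational updating, biased outcome.
print(f"estimate for group A member: {est_A:.2f}")
print(f"estimate for group B member: {est_B:.2f}")
```

With these parameters the two estimates differ by exactly half the gap between the group priors, regardless of how much signal data is averaged: the procedure is Bayes-optimal on average over each group, yet it systematically misjudges any individual whose true standing departs from their group's prior, which is precisely why "Bayesian" does not entail "unbiased."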
The third flaw is the old wine in new bottles flaw: when scholars approach a well-studied idea without recognizing relevant prior work. This flaw is costly because it impedes cumulative and integrative science. For example, discussions of how to connect the world and the lab can and should be grounded in the rich, interdisciplinary work on these questions (Aronson & Carlsmith, 1968; Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015; IJzerman et al., 2020; Lewin, 1946; Premachandra & Lewis, 2021). Similarly, previous discussions of external validity have inspired considerable research that helpfully spans the “troubling…gap” (p. 42) between highly controlled studies of bias and disparate treatment in complex real-world contexts (e.g., Dupas, Modestino, Niederle, & Wolfers, 2021; Sarsons, 2017).
These three flaws illustrate common pitfalls for researchers who attempt to tackle large and complex problems from a single vantage point, but they can be mitigated or avoided by working collaboratively in diverse teams (Ledgerwood et al., 2021; Murphy et al., 2020). The key to successfully connecting the lab with the real world is not to abandon experiments on socially relevant topics, but for social psychologists to form collaborative partnerships with organizations that can provide on-the-ground insights, leading us to design better experiments (IJzerman et al., 2020).
Acknowledgements
This study was supported by NSF #BCS-1941440 to A.L. and a Faculty Fellowship from the Cornell Center for Social Sciences to N.L. The authors thank Katherine Weltzien, Paul Eastwick, Srilaxmi Pappoppula, Stephanie Goodwin, and Sylvia Liu for their help.
Conflict of interest
None.