
The unbearable limitations of solo science: Team science as a path for more rigorous and relevant research

Published online by Cambridge University Press:  13 May 2022

Alison Ledgerwood
Affiliation:
Department of Psychology, University of California, Davis, CA 95616, USA. aledgerwood@ucdavis.edu; http://www.alisonledgerwood.com/
Cynthia Pickett
Affiliation:
Office of the Provost, DePaul University, Chicago, IL 60604, USA. cindy.pickett@depaul.edu; https://csh.depaul.edu/faculty-staff/faculty-a-z/Pages/psychology/cynthia-pickett.aspx
Danielle Navarro
Affiliation:
Department of Psychology, University of New South Wales, 2052 Sydney, Australia. d.navarro@unsw.edu.au; https://djnavarro.net/
Jessica D. Remedios
Affiliation:
Department of Psychology, Tufts University, Medford, MA 02155, USA. Jessica.Remedios@tufts.edu; https://as.tufts.edu/psychology/social-identity-and-stigma-lab
Neil A. Lewis Jr.
Affiliation:
Department of Communication, Cornell University, Ithaca, NY 14853, USA. nlewisjr@cornell.edu; https://neillewisjr.com/

Abstract

Both early social psychologists and the modern, interdisciplinary scientific community have advocated for diverse team science. We echo this call and describe three common pitfalls of solo science illustrated by the target article. We discuss how a collaborative and inclusive approach to science can both help researchers avoid these pitfalls and pave the way for more rigorous and relevant research.

Type: Open Peer Commentary
Copyright: © The Author(s), 2022. Published by Cambridge University Press

In 1946, Lewin wrote about the importance of conducting "action research" that could improve intergroup relations. Lewin and his contemporaries recognized that to do action research well, psychologists could not work alone. To do so would limit their ability to answer three critical questions regarding the phenomenon under study: "(1) What is the present situation? (2) What are the dangers? (3) And most important of all, what shall we do?" (Lewin, 1946, p. 34). They learned that rigorous and relevant social psychological research requires collaborating not only with scientists in other disciplines to understand the full range of forces acting upon a person in a social system, but also with community partners, governments, and other local stakeholders who have direct access to information and insights about how those forces operate in the specific context at hand (IJzerman et al., 2020). Indeed, a growing consensus across disciplines recognizes the value of a collaborative, multidisciplinary, and inclusive approach to science (Albornoz, Posada, Okune, Hillyer, & Chan, 2017; Disis & Slattery, 2010; Ledgerwood et al., 2021; Murphy et al., 2020).

The importance of a collaborative approach was well known in the early days of psychology but has been neglected in the modern era (Cialdini, 2009). Neglecting the true powers of the situation (the cultural, economic, historical, political, and sociological forces that affect the mind, including the minds of psychologists) limits the rigor and relevance of the discipline's research, and hampers psychologists' ability to truly understand the conditions under which our work is or is not relevant to social issues.

In his target article, Cesario discusses challenges he perceives in social psychological experiments on bias, and concludes that we should abandon such experiments. Although we agree that many experiments have limitations, our view is that Cesario's own critique suffers from three flaws that render his conclusion premature (Table 1). We further suggest that these flaws could have been avoided by collaborating with experts in other disciplines, or even experts in other areas of psychology.

Table 1. Three common flaws in solo science illustrated by the target article

The first flaw is the biased search flaw: when people's expectations lead them to consider an incomplete set of possibilities or to search through available information in a manner shaped by those expectations (Cameron & Trope, 2004). This flaw is costly because it leads to mistaken conclusions based on an incomplete survey of possible alternatives. For example, the target article correctly notes that effect sizes depend on the paradigm used to study them (Kennedy, Simpson, & Gelman, 2019; McShane & Böckenholt, 2014). However, it discusses only the possibility that effect sizes observed in the lab would diminish in the world, and omits the possibility that they would be magnified. After all, in the real world, effects of discrimination compound over time (Krieger & Sidney, 1996; Mays, Cochran, & Barnes, 2007); small effects can become large when compounded across many decisions (Funder & Ozer, 2019). Similarly, although lab studies typically manipulate only a single dimension of bias, in the world, dimensions of bias can intersect to produce compounded or unique effects (Berdahl & Moore, 2006; Remedios & Sanchez, 2018; Settles & Buchanan, 2014). Moreover, research suggests that biases can be magnified when people have access to rich information (as in the real world) that can be marshaled to elaborate and rationalize initial expectations (Darley & Gross, 1983; Taber & Lodge, 2006).

The second flaw is the beginner's bubble flaw: when people know a little about a topic but overestimate how well they understand it (Sanchez & Dunning, 2018). This flaw is costly because it leads scholars to misapply or miss insights developed in other areas. For example, the target article relies heavily on the idea that in the real world, people use information that "may be probabilistically accurate in everyday life" (sect. 5, para. 7) and that using demographic information (e.g., race) to fill in the blanks when full information is unavailable is rational in a Bayesian sense and therefore unbiased. This vague and imprecise assertion muddies waters that have already been clarified at length in adjacent literatures, including in-depth discussions by cognitive modelers on the limits of Bayesian theorizing (Bowers & Davis, 2012; Jones & Love, 2011) and clear distinctions between truth and bias developed in social psychological models of judgment (West & Kenny, 2011). Even advocates of Bayesian cognitive models do not claim that a behavior is rational or justifiable simply by virtue of being Bayesian (Griffiths, Chater, Norris, & Pouget, 2012; Tauber, Navarro, Perfors, & Steyvers, 2017). A prior is not the same thing as a base rate, nor is it the same thing as truth (Welsh & Navarro, 2012). Just because a belief can sometimes lead to correct decisions does not mean it is accurate or optimal to use that belief for all decisions.
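The distinction between a prior and a base rate can be made concrete with a small numerical sketch (ours, not the target article's; all numbers are hypothetical). A judge who updates beliefs with Bayes' rule but starts from a subjective prior that differs from the environment's true base rate is perfectly "Bayesian" yet systematically miscalibrated:

```python
# Illustrative sketch with made-up numbers: Bayesian updating is only as
# accurate as the prior it starts from.

def posterior(prior, p_cue_given_cat, p_cue_given_not):
    """P(category | cue) via Bayes' rule for a binary category."""
    num = prior * p_cue_given_cat
    return num / (num + (1 - prior) * p_cue_given_not)

true_base_rate = 0.10    # actual prevalence in the environment (hypothetical)
subjective_prior = 0.40  # judge's stereotyped prior belief (hypothetical)
hit, false_alarm = 0.7, 0.3  # the same ambiguous cue for both observers

calibrated = posterior(true_base_rate, hit, false_alarm)
biased = posterior(subjective_prior, hit, false_alarm)

print(round(calibrated, 3))  # 0.206: posterior anchored to the true base rate
print(round(biased, 3))      # 0.609: same evidence, inflated prior, biased output
```

Both observers apply Bayes' rule identically; the divergent posteriors come entirely from the prior. Calling the second judgment "Bayesian" does not make it accurate or unbiased, which is the point of the truth-versus-bias distinction above.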

The third flaw is the old wine in new bottles flaw: when scholars approach a well-studied idea without recognizing relevant prior work. This flaw is costly because it impedes cumulative and integrative science. For example, discussions of how to connect the world and the lab can and should be grounded in the rich, interdisciplinary work on these questions (Aronson & Carlsmith, 1968; Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015; IJzerman et al., 2020; Lewin, 1946; Premachandra & Lewis, 2021). Similarly, previous discussions of external validity have inspired considerable research that helpfully spans the "troubling…gap" (p. 42) between highly controlled studies of bias and disparate treatment in complex real-world contexts (e.g., Dupas, Modestino, Niederle, & Wolfers, 2021; Sarsons, 2017).

These three flaws illustrate common pitfalls for researchers who attempt to tackle large and complex problems from a single vantage point, but they can be mitigated or avoided by working collaboratively in diverse teams (Ledgerwood et al., 2021; Murphy et al., 2020). The key to successfully connecting the lab with the real world is not to abandon experiments on socially relevant topics, but instead for social psychologists to form collaborative partnerships with organizations that can provide on-the-ground insights that lead us to design better experiments (IJzerman et al., 2020).

Acknowledgements

This study was supported by NSF #BCS-1941440 to A.L. and a Faculty Fellowship from the Cornell Center for Social Sciences to N.L. The authors thank Katherine Weltzien, Paul Eastwick, Srilaxmi Pappoppula, Stephanie Goodwin, and Sylvia Liu for their help.

Conflict of interest

None.

References

Albornoz, D., Posada, A., Okune, A., Hillyer, R., & Chan, L. (2017). Co-constructing an open and collaborative manifesto to reclaim the open science narrative. Expanding Perspectives on Open Science: Communities, Cultures and Diversity in Concepts and Practices, 293–304.
Aronson, E., & Carlsmith, J. M. (1968). Experimentation in social psychology. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology (2nd ed., Vol. 2, pp. 1–79). Addison-Wesley.
Bauer, M. S., Damschroder, L., Hagedorn, H., Smith, J., & Kilbourne, A. M. (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3(1), 1–12.
Berdahl, J. L., & Moore, C. (2006). Workplace harassment: Double jeopardy for minority women. Journal of Applied Psychology, 91(2), 426–436.
Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138, 389–414.
Cameron, J. A., & Trope, Y. (2004). Stereotype-biased search and processing of information about group members. Social Cognition, 22, 650–672.
Cialdini, R. B. (2009). We have to break up. Perspectives on Psychological Science, 4, 5–6.
Darley, J. M., & Gross, P. H. (1983). A hypothesis-confirming bias in labeling effects. Journal of Personality and Social Psychology, 44, 20–33.
Disis, M. L., & Slattery, J. T. (2010). The road we must take: Multidisciplinary team science. Science Translational Medicine, 2(22), 22cm9.
Dupas, P., Modestino, A. S., Niederle, M., & Wolfers, J. (2021). Gender and the dynamics of economics seminars (No. w28494). National Bureau of Economic Research.
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2, 156–168.
Griffiths, T. L., Chater, N., Norris, D., & Pouget, A. (2012). How the Bayesians got their beliefs (and what those beliefs actually are): Comment on Bowers and Davis (2012). Psychological Bulletin, 138, 415–422.
IJzerman, H., Lewis, N. A., Przybylski, A. K., Weinstein, N., DeBruine, L., Ritchie, S. J., … Anvari, F. (2020). Use caution when applying behavioural science to policy. Nature Human Behaviour, 4(11), 1092–1094.
Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169–188.
Kennedy, L., Simpson, D., & Gelman, A. (2019). The experiment is just as important as the likelihood in understanding the prior: A cautionary note on robust cognitive modeling. Computational Brain & Behavior, 2(3), 210–217.
Krieger, N., & Sidney, S. (1996). Racial discrimination and blood pressure: The CARDIA study of young black and white adults. American Journal of Public Health, 86(10), 1370–1378.
Ledgerwood, A., Hudson, S. T. J., Lewis, N. A. Jr., Maddox, K. B., Pickett, C. L., Remedios, J. D., … Wilkins, C. L. (2021). The pandemic as a portal: Reimagining psychological science as truly open and inclusive. Perspectives on Psychological Science. https://doi.org/10.31234/osf.io/gdzue
Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46.
Mays, V. M., Cochran, S. D., & Barnes, N. W. (2007). Race, race-based discrimination, and health outcomes among African Americans. Annual Review of Psychology, 58, 201–225.
McShane, B. B., & Böckenholt, U. (2014). You cannot step into the same river twice: When power analyses are optimistic. Perspectives on Psychological Science, 9, 612–625.
Murphy, M. C., Mejia, A. F., Mejia, J., Yan, X., Cheryan, S., Dasgupta, N., … Pestilli, F. (2020). Open science, communal culture, and women's participation in the movement to improve science. Proceedings of the National Academy of Sciences, 117(39), 24154–24164.
Premachandra, B., & Lewis, N. Jr. (2021). Do we report the information that is necessary to give psychology away? A scoping review of the psychological intervention literature 2000–2018. Perspectives on Psychological Science, 17(1). https://doi.org/10.1177/1745691620974774
Remedios, J. D., & Sanchez, D. T. (2018). Intersectional and dynamic social categories in social cognition. Social Cognition, 36, 453–460.
Sanchez, C., & Dunning, D. (2018). Overconfidence among beginners: Is a little learning a dangerous thing? Journal of Personality and Social Psychology, 114, 10–28.
Sarsons, H. (2017). Recognition for group work: Gender differences in academia. American Economic Review, 107, 141–145.
Settles, I. H., & Buchanan, N. T. (2014). Multiple groups, multiple identities, and intersectionality. In V. Benet-Martínez & Y.-Y. Hong (Eds.), The Oxford handbook of multicultural identity (pp. 160–180). Oxford University Press.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.
Tauber, S., Navarro, D. J., Perfors, A., & Steyvers, M. (2017). Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory. Psychological Review, 124, 410–441.
Welsh, M. B., & Navarro, D. J. (2012). Seeing is believing: Priors, trust, and base rate neglect. Organizational Behavior and Human Decision Processes, 119(1), 1–14.
West, T. V., & Kenny, D. A. (2011). The truth and bias model of judgment. Psychological Review, 118, 357–378.