The target article joins a long line of compelling critiques of social psychology methodology. We suspect the latest critique, like its predecessors, will have little effect on how social psychologists study discrimination. A design ethos of “experimental realism” that relies on engaging but manufactured social settings (Aronson & Carlsmith, 1968) makes gathering data much easier than an approach that demands fidelity to real-world contingencies. The retort to critics is that the goal is to find theories that generalize and that experimental control is essential to theory development (e.g., Banaji & Crowder, 1989).
Unfortunately, social psychologists rarely examine whether their theories do, in fact, generalize; when they do, the results are not pretty (Mitchell, 2012). Nor do social psychology journals demand much evidence that experimental constructions actually measure or manipulate the hypothesized processes of interest (Chester & Lasko, 2021), which helps explain why more than 20 years after the racial-attitudes implicit association test was introduced, we still do not know what it actually measures (Schimmack, 2021). The career calculus is clear. With journals happy for authors to speculate about the real-world implications of a statistically significant correlation or mean difference observed using convenience samples under artificial conditions, why embark on the arduous task of establishing external and construct validity? Any possible confound in design will spell doom for publication, while obvious shortcomings in the sample and manipulations chosen to test what passes for a theory will merit only cursory mention in a concluding section on limitations of the study.
As long as social psychology journals exalt internal validity over all other forms of validity, we should not expect social psychology to produce any theories that can really explain, much less help meliorate, social problems. Making passage of reality checks essential to publication (e.g., requiring comparison of an online convenience sample to a sample of persons with experience in the domain of interest, or requiring that a theory be tested on archival data and not only on materials constructed for an experiment) would move the field away from exalting effects that prove to be the product of a quirky design decision that ignored key features of the situations or persons of theoretical interest. Such reality checks would serve as a form of “consistency test” of the kind that mature sciences employ (Meehl, 1978), and making reality checks a condition for publication would encourage greater care in theory development, pushing theorists to spell out boundary conditions and necessary auxiliary assumptions to narrow the range of reality checks that must be passed for the theory to survive.
We can understand why an exasperated Bayesian observer might conclude that until reality checks become a required part of theory validation within the field, the default assumption should be the best base-rate guess: neither social psychological theories nor effects will generalize. To those who worry that this default assumption would protect an oppressive status quo, we propose to locate the debate in a signal detection framework. A false-negative error would be to dismiss a truly generalizable social psychological effect. A false-positive error would be to embrace an effect that proves to be a hot-house flower that wilts fast in the wild. We see the latter error as vastly more common today – hence our sympathy for the exasperated Bayesian. Our view is that it is better – for both the science and society – to require investigators to test the practical utility of their ideas using rigorous evaluation methods than to give politicians or consultants open-ended scientific license to invent popular or profitable interventions that they hope will work but that they never intend to subject to rigorous evaluation (see, e.g., Paluck, Porat, Clark, & Green, 2021).
Take the case of implicit bias. To our knowledge, no implicit bias training program implemented by a police department or other organization has ever been shown to have net behavioral benefits or to be justified under any cost–benefit analysis, yet countless dollars and work hours are being spent on such programs rather than on other programs that might prove more effective. Certainly the belief that implicit bias explains many group disparities is widespread, and that belief may well have positive political consequences for some groups and may even reduce discrimination through increased sensitivity to its occurrence, but that belief continues to exist despite, not because of, social psychological research on the predictive (in)validity of measures of implicit bias. If the goal of social psychology is to create an ideology, rather than a science of social behavior, then it appears to have succeeded in the short term, but we suspect that success will erode the field's credibility and its ability to provide lasting solutions to social problems.
Financial support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Conflict of interest
None.