
Look to the field

Published online by Cambridge University Press:  10 February 2022

Rumen Iliev
Affiliation:
Toyota Research Institute, Los Altos, CA 94022, USA. rumen.iliev@tri.global
Douglas Medin
Affiliation:
Department of Psychology and School of Education and Social Policy, Northwestern University, Evanston, IL 60208, USA. medin@northwestern.edu
Megan Bang
Affiliation:
Learning Sciences and Department of Psychology, Northwestern University, Annenberg Hall, Evanston, IL 60208, USA. Megan.bang@northwestern.edu

Abstract

Yarkoni's paper makes an important contribution to psychological research through its insightful analysis of generalizability. We suggest, however, that broadening research practices to include field research and the coordinated use of converging and complementary observations gives reason for optimism.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

We agree with Yarkoni's thesis that there is a “generalizability crisis” and that the mapping between verbal theoretical constructs and measures and models is the source of many difficulties. In particular, the limited variation in procedures, stimuli, contexts, and measures represents a significant challenge to generalizability. Yarkoni summarizes these concerns by suggesting that “a huge proportion of the quantitative inferences drawn in the published psychology literature are so weak as to be at best questionable and at worst utterly nonsensical.”

Although Yarkoni's arguments are compelling, we don't fully agree with the somewhat gloomy picture he paints. The generalizability crisis creates something of a paradox: If generalization claims are on such shaky grounds, why is it that many phenomena are so robust that they make for reliable classroom demonstrations and/or have been shown to have substantial practical significance?

With respect to the former, examples include a number of judgment and decision biases identified and analyzed by Kahneman, Tversky, Fischhoff, Slovic, Loewenstein, Weber, and others (e.g., the availability heuristic, loss aversion, framing effects, quantity insensitivity). With respect to the latter, Cialdini (2009a, 2009b) has demonstrated simple but effective manipulations that increase environmentally friendly behaviors (e.g., hotel guests reusing towels). Similarly, changing default assumptions (Thaler & Sunstein, 2008) has been shown to facilitate policy goals such as increasing organ donation.

Field versus lab

We suggest that attention to the field is a critical factor supporting both relevance and generalizability. Those involved in lab research usually aim to demonstrate the presence of a particular effect and tend to be motivated to create a specific environment or context in which to observe it. Lab researchers have a virtually unlimited number of levers for establishing conditions that will maximize the chances of observing the desired effects. Rigorous control procedures can be implemented that are not feasible outside the lab. But this precise control may be exactly what limits generalizability.

Field researchers face the opposite problem. They typically work in environments that can be changed very little and with populations they can rarely preselect. Field and applied researchers are routinely motivated to search for effects and manipulations robust enough to work in their specific context. Field research may operate as a "generalizability filter," separating tenuous effects from interventions with a higher chance of success.

Judgment and decision-making research may have benefited from the fact that much of it has been done in business schools. Business school faculty rarely have access to a "subject pool," and they tend to rely on studies both in classrooms and in the field. The participants in business school studies often are students who have experience in the business world and are seeking MBAs (or PhDs). This is just one factor that increases the likelihood that research by business school faculty will connect with corporate contexts.

Consider, for example, "sunk cost" effects. Sunk cost effects refer to situations in which a commitment of resources is continued and escalated beyond any rational consideration because one does not want to "waste" the prior investment. This is sometimes referred to as "throwing good money after bad." The interest in sunk cost effects originated with real-world examples. But a careful analysis of generalizability suggests that there are other situations where the opposite of sunk cost effects can be shown (prematurely withdrawing an investment just before it starts to pay off; e.g., Drummond, 2014; Heath, 1995). Instead of undermining the sunk cost construct, such findings invite attention to the factors associated with each type of outcome. For instance, sunk cost effects for money may differ from sunk cost effects for time (Cunha & Caldieraro, 2009; Soman, 2001).

Field research may also serve as a direct test of the generalizability of lab findings. For example, Hofmann, Wisneski, Brandt, and Skitka (2014) used text messages sent at varied times to assess everyday moral and immoral acts and experiences. They found moral experiences to be common, and they observed both moral licensing and moral contagion, effects that previously had been shown in lab studies.

This interplay between lab and field is useful to both. Although generalizability is important, it could be argued that variability is even more fundamental. At the heart of social science is the search for patterned variation, variation that our theories seek to understand. Attention to the field may serve to increase attention to potential interactions and to counteract an exclusive focus on main effects.

Field as a source of complementary evidence

As Yarkoni notes, conceptual replications (as opposed to exact replications) put assumptions of generalizability to the test and represent an effective research strategy. They also are a key tool in establishing construct validity (e.g., Grahek, Schaller, & Tackett, 2021), linking theory and measures.

Field observation offers a complementary form of converging measure that can be an important research tool. For example, lab studies suggesting that participants see nature as incompatible with human presence (nature is pristine, and humans can enjoy it but are not part of it) can be complemented by analyses using Google images. A search of images for "ecosystems" found that humans were present only two percent of the time and, for about half of that two percent, the humans were outside the system looking in (Medin & Bang, 2014). Similarly, experimental observations suggesting cultural differences in subjective proximity to nature (Bang, Medin, & Atran, 2007) may be complemented by corresponding differences in illustrations in children's books (Bang et al., in press).

An additional benefit of complementary field observations is that they facilitate analyzing changes over time (Iliev & Ojalehto, 2015). For example, claims about increasing cultural individualism may be paralleled by corresponding changes in cultural artifacts (Greenfield, 2013, 2017). In short, field observations invite complementary and coordinated observations, both as a stimulus for new studies and as a guide to the robustness of findings.

Financial support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflict of interest

None.

References

Bang, M., Alfonso, J., Faber, L., Marin, A., Marin, M., Medin, D., … Woodring, J. (in press). Perspective taking and psychological distance in children's picture books: Differences between native and non-native authored books. In Nelson-Barber, S., & Chinn, P. W. U. (Eds.), Indigenous STEM education: Perspectives from the Pacific Islands, the Americas and Asia. New York, NY: Springer.
Bang, M., Medin, D. L., & Atran, S. (2007). Cultural mosaics and mental models of nature. Proceedings of the National Academy of Sciences, 104(35), 13868–13874.
Cialdini, R. B. (2009a). Influence: Science and practice (5th ed.). Boston, MA: Allyn & Bacon.
Cialdini, R. B. (2009b). We have to break up. Perspectives on Psychological Science, 4, 5–6.
Cunha, M., Jr., & Caldieraro, F. (2009). Sunk-cost effects on purely behavioral investments. Cognitive Science, 33(1), 105–113.
Drummond, H. (2014). Escalation of commitment: When to stay the course? Academy of Management Perspectives, 28(4), 430–446.
Grahek, I., Schaller, M., & Tackett, J. L. (2021). Anatomy of a psychological theory: Integrating construct-validation and computational-modeling methods to advance theorizing. Perspectives on Psychological Science, 16(4), 803–815.
Greenfield, P. M. (2013). The changing psychology of culture from 1800 through 2000. Psychological Science, 24, 1722–1731.
Greenfield, P. M. (2017). Cultural change over time: Why replicability should not be the gold standard in psychological science. Perspectives on Psychological Science, 12(5), 762–771.
Heath, C. (1995). Escalation and de-escalation of commitment in response to sunk costs: The role of budgeting in mental accounting. Organizational Behavior and Human Decision Processes, 62, 38–54.
Hofmann, W., Wisneski, D. C., Brandt, M. J., & Skitka, L. J. (2014). Morality in everyday life. Science, 345, 1340–1343.
Iliev, R. L., & Ojalehto, B. L. (2015). Bringing history back to culture: On the missing diachronic component in the research on culture and cognition. Frontiers in Psychology, 10, 1–4.
Medin, D. L., & Bang, M. (2014). Who's asking? Native science, western science and science education. Cambridge, MA: MIT Press.
Soman, D. (2001). The mental accounting of sunk time costs: Why time is not like money. Journal of Behavioral Decision Making, 14(3), 169–185.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New York, NY: Penguin Books.