In their excellent article, Henrich et al. rightly caution us to be careful when drawing general conclusions from WEIRD subject pools, of which undergraduates are the most frequently used, in economics as well. My main comment is that the right choice of subject pool is intimately linked to the research question. Since the different behavioral sciences also have different research questions, the right choice of subject pool will often differ across disciplines. In my own discipline, economics, students are actually often the best subject pool for quite a few (fundamental) research questions. Here is why I believe so.
Economic theories normally do not come with assumptions (or even caveats) restricting their validity to a specific group of people; that is, they (implicitly) assume "generality." Like the assumption of selfishness, "generality" is a good assumption in the absence of rigorous data. The tools of experimental economics have been deployed to investigate the empirical relevance of the selfishness assumption (see, e.g., Fehr et al. 2002) and are now also used to probe the "generality assumption," that is, the extent to which behavior varies across population subgroups within a given society (e.g., Bellemare et al. 2008) or across societies (e.g., Herrmann et al. 2008).
However, my main point is this: The "right choice" of subject pool depends on the research question. If the researcher is interested in understanding behavioral variation between particular groups of people, then the right choice is to run experiments with these people. The landmark study by Henrich et al. (2005) is a shining example. Yet, at least in economics, substantial effort is also devoted to testing formal theories or to detecting interesting behavioral regularities (Bardsley et al. 2010; Croson & Gächter 2010; Smith 2010). Because economic theories normally assume generality, any subject pool is in principle informative about whether theoretical predictions or assumptions have behavioral validity. At that stage, generalizability to other subject pools is not (yet) an issue. Among the universe of potential subject pools for testing a theory, students are often the perfect one: on average, students are educated, intelligent, and used to learning. These are very valuable characteristics because, in addition to the main aspect of a theory that interests the researcher, economic theories often assume cognitive sophistication. It therefore makes sense to control for sophistication also through the choice of subject pool (in addition to clear instructions), in order to minimize the chance of confounding genuine behavioral reactions to the treatment of interest with a lack of understanding of the basic decision situation.
Take recent theories of social preferences (as surveyed, e.g., in Fehr & Schmidt 2006) as an example. In addition to other-regarding preferences, these theories all assume cognitive sophistication. When testing these theories, the main point of interest is not to find out whether people are as cognitively sophisticated as the theories (maybe wrongly) assume, but to see to what extent other-regarding motives exist, holding everything else constant. Because students are typically above average in cognitive sophistication, they are often a perfect subject pool for first tests of a theory. Moreover, students, unlike most other subject pools, are readily available (and cost-effective). Experiments can therefore also easily be replicated, which is important for establishing empirical regularities and hard to achieve with any other subject pool.
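To make the structure of such theories concrete, consider a sketch of one canonical model from the class surveyed there, the two-player Fehr-Schmidt inequity-aversion utility function (the notation here is illustrative, not the specification tested in any particular experiment):

\[
U_i(x) \;=\; x_i \;-\; \alpha_i \max\{x_j - x_i,\, 0\} \;-\; \beta_i \max\{x_i - x_j,\, 0\}, \qquad 0 \le \beta_i \le \alpha_i,\; \beta_i < 1,
\]

where x_i and x_j are the two players' material payoffs, α_i captures aversion to disadvantageous inequality, and β_i aversion to advantageous inequality. Estimating whether α_i and β_i are positive presupposes that subjects correctly grasp the payoff consequences of their choices; it is precisely this presupposed sophistication that a student subject pool helps to hold constant.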
Of course, strictly speaking, observed results hold only for the subject pool from which evidence is collected. Generalizability is a generic issue in any empirical research (Falk & Heckman 2009). However, once a clear benchmark result is established, we can proceed by testing, for example, how age and life experience matter (e.g., Sutter & Kocher 2007b), or how results extend to more representative subject pools (e.g., Bellemare et al. 2008; Carpenter et al. 2008). Along the way, researchers often establish whether and how students differ from the general population.
As Henrich et al. point out, understanding the potential influence of cross-societal (or cultural) differences in (economic) behavior is a particularly interesting direction for investigating generalizability. But it poses further challenges, in particular if socio-demographic factors matter (as some of the above-cited research suggests). The reason is that socio-demographic influences might be confounded with genuine societal or cultural differences. The problem is exacerbated the more subject pools are actually being compared. Again, to ensure that confounds are minimized, student subject pools are often the best available choice (Bohnet et al. 2008; Herrmann et al. 2008) to establish a clean benchmark result on how people from different societal/cultural backgrounds behave in the exact same decision situation – a fundamental question from the generality perspective of economics. The benchmark can – and should(!) – then be taken as a starting point for investigating generalizability to other social groups.