
So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design

Published online by Cambridge University Press: 14 March 2025

John A. List
Affiliation:
University of Chicago, 1126 E. 59th Street, Chicago, IL 60637, USA; NBER, Cambridge, MA, USA
Sally Sadoff
Affiliation:
University of Chicago, 1126 E. 59th Street, Chicago, IL 60637, USA
Mathis Wagner
Affiliation:
Department of Economics, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467, USA

Abstract

Experimental economics represents a strong growth industry. In the past several decades the method has expanded beyond intellectual curiosity, now meriting consideration alongside the other more traditional empirical approaches used in economics. Accompanying this growth is an influx of new experimenters who are in need of straightforward direction to make their designs more powerful. This study provides several simple rules of thumb that researchers can apply to improve the efficiency of their experimental designs. We buttress these points by including empirical examples from the literature.
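
As a concrete illustration of the kind of calculation such rules of thumb build on, the sketch below computes the textbook minimum sample size per treatment arm for detecting a difference in means at a given significance level and power. This is a generic sketch rather than code from the paper; the helper name n_per_arm and the numerical inputs (a 0.2 standard-deviation effect) are hypothetical choices made here for exposition.

    import math
    from statistics import NormalDist

    def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
        """Sample size per arm for a two-sided test of a difference in means,
        assuming equal variances and an equal split across the two arms:
            n = 2 * (z_{1 - alpha/2} + z_{power})^2 * sigma^2 / delta^2
        """
        z = NormalDist().inv_cdf  # standard normal quantile function
        n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
        return math.ceil(n)

    # Hypothetical example: detecting a 0.2 standard-deviation effect at the
    # 5% significance level with 80% power takes roughly 393 subjects per arm.
    print(n_per_arm(delta=0.2, sigma=1.0))

One of the paper's central points departs from the equal-split assumption used above: when outcome variances differ across treatments, the optimal design instead allocates sample sizes in proportion to the standard deviation of the outcome in each treatment arm.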

Type
Original Paper
Copyright
Copyright © 2011 Economic Science Association
