
The Limitations of Experimental Design: A Case Study Involving Monetary Incentive Effects in Laboratory Markets

Published online by Cambridge University Press:  14 March 2025

Steven J. Kachelmeier*
Affiliation:
McCombs School of Business, University of Texas at Austin, 1 University Station B6400, Austin, TX 78712, USA
Kristy L. Towry*
Affiliation:
Goizueta Business School, Emory University, 1300 Clifton Road, Atlanta, GA 30322, USA

Abstract

We replicate an influential study of monetary incentive effects by Jamal and Sunder (1991) to illustrate the difficulties of drawing causal inferences from a treatment manipulation when other features of the experimental design vary simultaneously. We first show that the Jamal and Sunder (1991) conclusions hinge on one of their laboratory market sessions, conducted only within their fixed-pay condition, that is characterized by a thin market and asymmetric supply and demand curves. When we replicate this structure multiple times under both fixed pay and pay tied to performance, our findings do not support Jamal and Sunder's (1991) conclusion about the incremental effects of performance-based compensation, suggesting that other features that varied in that study likely account for their observed difference. Our ceteris paribus replication leaves us unable to offer generalized conclusions about the effects of monetary incentives in other market structures. The broader point, however, is that experimental designs that attempt to generalize effects by varying multiple features simultaneously can jeopardize the ability to draw causal inferences about the primary treatment manipulation.

Type
Research Article
Copyright
Copyright © 2005 Economic Science Association


Footnotes

*

Author to whom correspondence should be addressed.

References

Anderson, M.J. and Sunder, S. (1995). "Professional Traders as Intuitive Bayesians." Organizational Behavior and Human Decision Processes, 64, 185–202.
Arkes, H.R. (1991). "Costs and Benefits of Judgment Errors: Implications for Debiasing." Psychological Bulletin, 110, 486–498.
Bonner, S.E. and Sprinkle, G.B. (2002). "The Effects of Monetary Incentives on Effort and Task Performance: Theories, Evidence, and a Framework for Research." Accounting, Organizations and Society, 27, 303–345.
Brandouy, O. (2001). "Laboratory Incentive Structure and Control-Test Design in an Experimental Asset Market." Journal of Economic Psychology, 22, 1–26.
Camerer, C.F. and Hogarth, R.M. (1999). "The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework." Journal of Risk and Uncertainty, 19, 7–42.
Friedman, D. and Sunder, S. (1994). Experimental Methods: A Primer for Economists. Cambridge: Cambridge University Press.
Glantz, S.A. and Slinker, B.K. (1990). Primer of Applied Regression and Analysis of Variance. New York: McGraw-Hill.
Glass, G.V., Willson, V.L., and Gottman, J.M. (1975). Design and Analysis of Time Series Experiments. Boulder, CO: Colorado Associated University Press.
Hertwig, R. and Ortmann, A. (2001). "Experimental Practices in Economics: A Methodological Challenge for Psychologists?" Behavioral and Brain Sciences, 24, 383–451.
Holt, C.A., Langan, L.W., and Villamil, A.P. (1986). "Market Power in Oral Double Auctions." Economic Inquiry, 24(1), 107–123.
Huynh, H. and Feldt, L.S. (1976). "Estimation of the Box Correction for Degrees of Freedom from Sample Data in the Randomized Block and Split Plot Designs." Journal of Educational Statistics, 1, 69–72.
Jamal, K. and Sunder, S. (1991). "Money vs. Gaming: Effects of Salient Monetary Payments in Double Oral Auctions." Organizational Behavior and Human Decision Processes, 49, 151–166.
Jamal, K. and Sunder, S. (1996). "Bayesian Equilibrium in Double Auctions Populated by Biased Heuristic Traders." Journal of Economic Behavior and Organization, 31, 273–291.
Krahnen, J.P. and Weber, M. (2001). "Market Making in the Laboratory: Does Competition Matter?" Experimental Economics, 4, 55–85.
Kuehl, R.O. (1994). Statistical Principles of Research Design and Analysis. Belmont, CA: Duxbury Press.
McCloskey, D.N. and Ziliak, S.T. (1996). "The Standard Error of Regressions." Journal of Economic Literature, 34, 97–114.
Plott, C.R. (1991). "A Computerized Laboratory Market System and Research Support Systems for the Multiple Unit Double Auction." Social Science Working Paper 783. Pasadena: California Institute of Technology.
Shadish, W.R., Cook, T.D., and Campbell, D.T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.
Smith, V.L. and Walker, J.M. (1993). "Monetary Rewards and Decision Cost in Experimental Economics." Economic Inquiry, 21, 245–261.
Tung, Y.A. and Marsden, J.R. (2000). "Trading Volumes with and without Private Information: A Study Using Computerized Market Experiments." Journal of Management Information Systems, 17, 31–57.