
Psychologists should learn structural specification and experimental econometrics

Published online by Cambridge University Press:  10 February 2022

Don Ross*
Affiliation:
School of Society, Politics, and Ethics, University College Cork, Cork, T12 AW89, Ireland. don.ross931@gmail.com
School of Economics, University of Cape Town, Rondebosch 7701, South Africa. http://uct.academia.edu/DonRoss
Center for Economic Analysis of Risk, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA 30303, USA

Abstract

The most plausible of Yarkoni's paths to recovery for psychology is the least radical one: psychologists need truly quantitative methods that exploit the informational power of variance and heterogeneity in multiple variables. If they drop ambitions to explain entire behaviors, they could find a box full of design and econometric tools in the parts of experimental economics that don't ape psychology.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

The methodological tradition of experimental design and analysis in psychology has generated epistemological pathology that undermines the discipline. The crisis is more serious because it is not recent and acute but old, deep, and chronic. Yarkoni's proposed responses invite very different metrics of assessment. Abandoning standard experimental psychology would obviously pitch out babies with bathwater, but trying to estimate the babies/bathwater ratio would be daunting. Resorting entirely to qualitative description and reflection would be only a slower and less transparent path to disciplinary suicide. If, as Yarkoni argues, most of the qualitative relationships that psychologists treat as hypotheses obviously obtain some of the time, then it is hard to see why we would want to maintain a whole academic discipline merely to pronounce these truisms. We already have other people better trained to unearth surprising implications of apparently protean psychological truths, namely philosophers. I therefore prefer to focus on Yarkoni's proposed path to better quantitative practice. I suggest that (some) economists offer a useful model of practice here.

I will start with a philosophical question: What exactly is experimental psychology supposed to be for? A possible if imprecise answer, which none of Yarkoni's rhetoric seems to contest, is “explaining and predicting behavior that results from biological information processing.” But we might object that almost no behavior by such an incorrigibly social and culturally embedded species as humans results only from biological information processing. This applies to even the simplest behaviors. Suppose you wanted to model the production of conversation-supporting hand gestures in a group of speakers. Some gestures will have been developed by individuals as solutions to communicative aims they idiosyncratically find difficult. More will have been copied from specific role models. Most will simply have been inherited from childhood cultural learning samples. These three kinds of data-generating processes (DGPs) involve varying combinations of psychological, cultural-environmental, and economic causal pathways. Now, also “obviously,” biological information processing plays an essential role in all three: social influences have to influence motor control systems in speakers' brains. If we seek a general theory of hand gesturing, then the contribution we ideally want from the psychologist here is to partial out this component of the overall behavior-generating process. This could mean (qualitatively) locating it in the flow-chart diagram of a putative mechanism, or (quantitatively) assigning a parameterized weight to a coefficient associated with it. By illustrative contrast, the economist's job is to figure out, for example, how relatively entrenched different gestures are in response to shifting incentives around trade-offs between signal precision and cross-audience effectiveness.

I think that the triumph of connectionist over classical artificial intelligence showed us the naïveté of the boxological approach. This is why Yarkoni's third response to the methodological crisis is where the action is. But clearly one shouldn't try to implement it by testing whether one can statistically reject a hypothesis that includes the assumption that everything in each DGP, except the encapsulated dynamics in speakers' nervous systems and indexicals for their specific histories, is a fixed effect in a linear model. Of course one can generate data to reject any hypothesis in this class, because all such hypotheses are certainly false. Piling up such rejections brings one not a jot closer to a general model, both because the process of elimination is endless and because the entire search takes place in the wrong solution space.
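The futility of rejecting hypotheses that are certainly false can be sketched numerically. In the following minimal simulation (the effect size, noise level, and sample sizes are all illustrative assumptions, not drawn from any study), a point null that is even slightly misspecified is guaranteed to be rejected once the sample is large enough, regardless of whether the tested model is anywhere near the true DGP:

```python
import numpy as np

# Illustrative assumption: the null model says the effect is exactly zero,
# but the true DGP has a tiny nonzero effect, as any point null about real
# behavior will be at least slightly wrong.
rng = np.random.default_rng(42)
true_effect, noise_sd = 0.02, 1.0

zs = []
for n in (100, 10_000, 1_000_000):
    sample = rng.normal(true_effect, noise_sd, n)
    z = sample.mean() / (noise_sd / np.sqrt(n))  # z-statistic against H0: mean = 0
    zs.append(z)
    print(f"n={n:>9,}  z={z:6.2f}")  # |z| grows with n; rejection is inevitable
```

The rejection tells us only that the null was false, which we already knew; it carries no information about how far the tested model is from the DGP, or in which direction.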

Economists are also in the business of trying to partial out one element in the behavior production function, marginal incentive changes. But because they know that incentives operate through multiple channels, including channels where information relevant to successful goal achievement is typically hidden from the subject (she copies successful bond investors, she doesn't simulate them), they are less likely to have strong priors on what might constitute a “confound.” Indeed, those experimental economists who have not, disastrously, aped psychological methodology (as have, alas, the behavioral economists who most beguile non-economist audiences) don't tend to use the language of “confounds” at all. They assume that everything on the right-hand side of a structural model specification that might vary needs its own treatment group. Instead of trying to wall extra-economic causal factors out of the lab, they expand (in effect) the boundaries of the lab under theoretical guidance. This is expensive. Fortunately, the prevailing funding ecosystem in economics has evolved a norm that good experiments usually need generous budgets.
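The design logic described above, in which every varying right-hand-side factor gets its own treatment group rather than being walled out as a "confound," amounts to a full-factorial design. A minimal sketch (the factor names and levels below are hypothetical, invented for illustration):

```python
from itertools import product

# Hypothetical structural covariates, each given its own treatment arm
# rather than being held fixed as a nuisance. Names and levels are
# illustrative assumptions, not from any actual experiment.
factors = {
    "stake_size": ["low", "high"],             # marginal incentive channel
    "social_info": ["none", "peer_choices"],   # imitation channel
    "frame": ["abstract", "contextualized"],   # cultural-framing channel
}

def factorial_cells(factors):
    """Enumerate one treatment cell per combination of factor levels."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

cells = factorial_cells(factors)
# Three two-level factors -> 2 x 2 x 2 = 8 treatment groups, each needing
# its own subjects: hence the generous budgets the text mentions.
print(len(cells))  # 8
```

The cost grows multiplicatively in the number of factors, which is exactly why expanding the boundaries of the lab, rather than pretending extraneous variation away, is expensive.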

None of this would help much if econometric estimation techniques didn't co-evolve with the complexity of the structural models that are used as identification templates in lineages of experiments. But, at least since the coming of lots of cheap data-processing power, such co-evolution has been supported by the division of labor and resources in the discipline. Yarkoni's third path would have psychologists embracing the informational power of variance and heterogeneity, both in the kinds of model specifications they build and in their experimental designs. So, I suggest, they should study Bayesian experimental econometrics (Andersen, Harrison, Lau, & Rutström, 2010; Kruschke & Liddell, 2018; Lee & Wagenmakers, 2013). If they designed experiments so as to make full use of the resulting expansion of inferential power, individual experiments would become significantly more expensive, even in the absence of economists' special reason for needing to generously pay subjects. But it seems clear that Yarkoni agrees that a world with many fewer but much better psychological experiments would be an improved world.
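One concrete way that Bayesian approaches exploit the informational power of heterogeneity is partial pooling: between-subject variance is treated as a modeled quantity that lets each subject's estimate borrow strength from the rest of the sample. The following sketch is an illustrative assumption, not the estimator of any cited paper: it simulates subject-level parameters drawn from a population distribution and applies shrinkage with variances treated as known, which a full Bayesian analysis would instead estimate.

```python
import numpy as np

# Hypothetical setup: 50 subjects, each contributing 5 noisy trials on a
# behavioral parameter (say, a risk-attitude coefficient). All numbers are
# illustrative assumptions.
rng = np.random.default_rng(0)
n_subjects, n_trials = 50, 5
tau, sigma = 0.5, 1.0            # population sd, trial-level noise sd

true_theta = rng.normal(0.8, tau, n_subjects)   # each subject's true parameter
data = true_theta[:, None] + rng.normal(0.0, sigma, (n_subjects, n_trials))

raw = data.mean(axis=1)                  # unpooled per-subject estimate
grand = raw.mean()
se2 = sigma**2 / n_trials                # sampling variance of each raw estimate
weight = tau**2 / (tau**2 + se2)         # shrinkage toward the grand mean
pooled = grand + weight * (raw - grand)  # partially pooled estimate

mse_raw = np.mean((raw - true_theta) ** 2)
mse_pooled = np.mean((pooled - true_theta) ** 2)
print(f"unpooled MSE {mse_raw:.3f}  pooled MSE {mse_pooled:.3f}")
```

Heterogeneity here is not noise to be averaged away: the spread across subjects is precisely what calibrates how much each individual estimate should be trusted.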

A critic might reject my “partialling out” job description for psychology, and insist that psychologists aim to explain entire behavioral complexes rather than the aspects of behavioral causation that are psychological. Such hubris would greatly raise the stakes of Yarkoni's challenge: a claim that all of behavioral science should follow dead-end methodology would deserve the fiercest resistance. Economists have frequently had to unlearn disciplinary imperialism the hard way, but most now acknowledge that although people respond to economic incentives, they aren't ruled by them. Psychologists should be motivated to disciplinary modesty by the same kind of shock that eventually reformed standard practice in economics: failure to actually accumulate knowledge.

Financial support

This research received no specific grant from any funding agency.

Conflict of interest

None.

References

Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2010). Behavioral econometrics for psychologists. Journal of Economic Psychology, 31, 553–576.
Kruschke, J., & Liddell, T. (2018). The Bayesian new statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin and Review, 25, 178–206.
Lee, M., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge University Press.