
Does quantum uncertainty have a place in everyday applied statistics?

Published online by Cambridge University Press:  14 May 2013

Andrew Gelman
Affiliation:
Department of Statistics, Columbia University, New York, NY 10027. gelman@stat.columbia.edu, http://www.stat.columbia.edu/~gelman
Michael Betancourt
Affiliation:
Department of Statistics, Columbia University, New York, NY 10027. betanalpha@gmail.com

Abstract

We are sympathetic to the general ideas presented in the article by Pothos & Busemeyer (P&B): Heisenberg's uncertainty principle seems naturally relevant in the social and behavioral sciences, in which measurements can affect the people being studied. We propose that the best approach for developing quantum probability models in the social and behavioral sciences is not to use the complex probability-amplitude formulation proposed in the article directly, but rather, more generally, to consider marginal probabilities that need not be averages over conditionals.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2013 

We are sympathetic to the proposal of modeling joint probabilities using a framework more general than the standard model (known as Boltzmann in physics, or Kolmogorov's laws in probability theory) by relaxing the law of conditional probability, p(x) = Σ_y p(x|y) p(y). This identity of total probability seems perfectly natural, but is violated in quantum physics, in which the act of measurement affects what is being measured, and it is well known that one cannot explain this behavior using the standard model and latent variables. (There have been some attempts to reconcile quantum physics with classical probability, but these resolutions are based on expanding the sample space so that measurement is no longer represented by conditioning, thus defeating the simplicity of the probabilistic approach.) The generalized probability theory suggested by quantum physics might very well be relevant in the social sciences.
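To make the contrast concrete, here is a minimal numerical sketch of the two rules; it is ours, not P&B's, and the amplitudes and relative phase are invented purely for illustration. Mixing conditional probabilities gives one answer; adding complex amplitudes along each path before squaring, as quantum probability prescribes, gives another whenever the phase produces interference.

```python
import numpy as np

# Two intermediate states y in {0, 1} leading to a single outcome x.
# Classical rule: mix the conditional probabilities.
# Quantum rule: add the complex amplitudes along each path, then square.

# Invented, unit-normalized amplitudes; the relative phase is arbitrary.
a_y = np.array([1.0, 1.0]) / np.sqrt(2)                             # amplitude of each path y
a_x_given_y = np.array([1.0, np.exp(1j * np.pi / 3)]) / np.sqrt(2)  # amplitude of x along each path

# Law of conditional probability: p(x) = sum_y p(x|y) p(y)
p_classical = np.sum(np.abs(a_x_given_y) ** 2 * np.abs(a_y) ** 2)

# Quantum rule: amplitudes interfere before being squared
p_quantum = np.abs(np.sum(a_x_given_y * a_y)) ** 2

print(f"mixture of conditionals: {p_classical:.3f}")  # 0.500
print(f"with interference:       {p_quantum:.3f}")    # 0.750 with this phase
```

The two answers coincide only when the relative phase is such that the interference term vanishes; any other phase produces exactly the violation of the total-probability identity discussed above.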

In standard probability theory, the whole idea of conditioning is that there is a single joint distribution – parts of which may be unobserved or even unobservable, as in much of psychometrics – and that this distribution can be treated as a fixed object measurable through conditioning (e.g., the six blind men and the elephant). A theory that allows the joint distribution to change with each measurement could be appropriate for models of context in social science, such as Mischel's idea of allowing personality traits to depend on the scenario. Just as psychologists have found subadditivity and superadditivity of probabilities in many contexts, we see the potential gain of thinking about violations of the conditional probability law. Some of our own applied work involves political science and policy, often with analysis of data from opinion polls, where there are clear issues of the measurement affecting the outcome. In politics, “measurement” includes not just survey questions but also campaign advertisements, get-out-the-vote efforts, and news events.

We propose that the best way to use ideas of quantum uncertainty in applied statistics (in psychometrics and elsewhere) is not by directly using the complex probability-amplitude formulation proposed in the article, but rather by considering marginal probabilities that need not be averages over conditionals. In particular, we are skeptical of the proposed application of quantum probability to the famous “Linda example.” Kahneman and Tversky's “representativeness heuristic” is to us a more compelling model of that phenomenon.

How exactly would we apply a quantum probability theory to social science? A logical first step would be to set up an experiment sensitive to the violation of the law of conditional probability: a two-slit-like model for a social statistics setting in which measurement affects the person or system being measured. Consider, for example, a political survey in which the outcome of interest, x, is a continuous measure of support for a candidate or political position, perhaps a 0–100 "feeling thermometer" response. An intermediate query, y, such as a positive or negative report on the state of the economy, plays the role of a measurement in the Heisenberg sense. The marginal distribution of support might well differ from the simple mixture of the two conditional distributions, and we would consequently expect p(x) ≠ Σ_y p(x|y) p(y).
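To sketch what the analysis of such a design might look like, the following simulation compares the marginal from respondents asked x directly against the mixture of conditionals from respondents asked y first. All effect sizes here are invented for illustration, and the shift induced by answering the economy question is simply assumed rather than derived from any model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Arm A: respondents rate the candidate directly on the 0-100 thermometer.
x_direct = rng.normal(50, 15, n).clip(0, 100)

# Arm B: respondents first answer the economy question (y = 1 positive,
# y = 0 negative); answering is assumed to shift the later response.
y = rng.binomial(1, 0.5, n)
x_after_y = rng.normal(np.where(y == 1, 70, 45), 15, n).clip(0, 100)

# Classical prediction for the marginal: E[x] = sum_y E[x|y] p(y)
mixture_mean = (x_after_y[y == 1].mean() * y.mean()
                + x_after_y[y == 0].mean() * (1 - y.mean()))

print(f"direct marginal mean:         {x_direct.mean():.1f}")  # about 50
print(f"mixture-of-conditionals mean: {mixture_mean:.1f}")     # about 57 here
# A reliable gap between the two is the signature of the intermediate
# question changing the distribution being measured.
```

In practice one would randomize respondents to the two arms and compare the full response distributions, not just their means.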

A more sophisticated approach, and at the same time a stronger test of the need for quantum probabilities, is akin to the original Stern–Gerlach experiments. Participants would be asked a series of polarizing questions and then split by their responses. Those two groups would then be asked a further series of questions, eventually returning to the initial question. If, after that intermediate series of questions, a significant number of participants changed their answer, there would be immediate evidence of the failure of classical probability, and a test bed for quantum probability models.
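A minimal analysis of such an experiment might look like the following sketch, assuming SciPy is available; the simulated 15% flip rate and the 5% response-error benchmark are hypothetical choices for illustration, not estimates from any data.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
n = 500

# Hypothetical panel: the same polarizing yes/no question asked before and
# after an intermediate battery; assume 15% of participants flip.
first = rng.binomial(1, 0.6, n)
flipped = rng.binomial(1, 0.15, n).astype(bool)
second = np.where(flipped, 1 - first, first)

# Benchmark: under stable classical responses, flips should occur at no
# more than an assumed 5% simple-response-error rate.
n_flips = int((first != second).sum())
result = binomtest(n_flips, n, p=0.05, alternative="greater")
print(f"{n_flips} of {n} participants changed their answer "
      f"(one-sided p = {result.pvalue:.2g})")
```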

The ultimate challenge in statistics is to solve applied problems. Standard Boltzmann/Kolmogorov probability has allowed researchers to make predictive and causal inferences in virtually every aspect of quantitative cognitive and social science, as well as to provide normative and descriptive insight into decision making. If quantum probability can do the same – and we hope it can – we expect this progress will be made as before: developing and understanding models, one application at a time.