Pothos & Busemeyer (P&B) propose quantum probability (QP) theory as a general axiomatic structure for modeling incentivized response (choice). Experiments in the Kahneman–Tversky tradition suggest that the relationship between information and human behavioral choice does not respect the axioms of classical probability (CP) theory, notably monotonicity. Furthermore, sequences of choices violate the classical law of total probability. Modelers have accommodated these departures from CP by hypothesizing heuristics – procedures that are specially adapted to specific types of situations – but in so doing postpone developing a general, testable theory that formally integrates aspects of cognition, for example, similarity judgment and assessments of risk. The probability model developed by quantum physicists is a formal alternative to CP that handles many of the recalcitrant phenomena, and can reproduce successful predictions of models based on classical probability in the classical limit.
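To make the contrast concrete, here is a sketch in standard QP notation (not P&B's own symbols) of how the classical law of total probability can fail. In CP, for an exhaustive pair of mutually exclusive events $B$ and $\bar{B}$,
\[
P(A) = P(A \mid B)\,P(B) + P(A \mid \bar{B})\,P(\bar{B}).
\]
In QP, events are projectors on a vector space and probabilities are squared lengths of projected state vectors. If $B$ is left unresolved before $A$ is evaluated, the amplitudes through $B$ and $\bar{B}$ are added before squaring:
\[
P(A) = \left\lVert P_A P_B \psi + P_A P_{\bar{B}} \psi \right\rVert^2
= \lVert P_A P_B \psi \rVert^2 + \lVert P_A P_{\bar{B}} \psi \rVert^2
+ 2\,\mathrm{Re}\,\langle P_A P_B \psi,\, P_A P_{\bar{B}} \psi \rangle.
\]
The first two terms are the classical ones; the cross term is the interference that permits the observed violations, and it vanishes when $B$ is actually measured first, recovering CP.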
The premise that the CP-based theory of decision – which is better characterized as a theory of incentive response – has been refuted by experimental evidence is questionable. The mathematical modeling of risk and the econometric estimation of responses to changes in structural parameters of risk models are currently enjoying technical innovation, starting with Hey and Orme (Reference Hey and Orme1994) and surveyed in Harrison and Rutström (Reference Harrison, Rutström, Cox and Harrison2008). There is evidence that unintended constraints derived from less sophisticated risk models may be implicated in a range of phenomena that have been thought to imply use of special-purpose heuristics (e.g., Allais violations, loss aversion, hyperbolic discounting; see Andersen et al. Reference Andersen, Harrison, Lau and Rutström2011; Harrison & Rutström Reference Harrison and Rutström2009). This research might be taken as supporting the model of Oaksford and Chater (Reference Oaksford and Chater2007) on a descriptive rather than a normative interpretation, given its ambition to underwrite optimization relativized to heterogeneous preference structures. As P&B note, the Bayesian rationality model relies on objective value maximization rather than subjective utility maximization to handle Dutch book problems, and this reliance is not acceptable to economists. Global optimality of decision procedures is surrendered once one allows that people's risk preferences are heterogeneous and circumstance specific, and that utility functions are often rank dependent. However, this undermines confidence in CP-based Bayesian models only as accounts of decisions in what Binmore (Reference Binmore2009) calls “large worlds”; they may yet be the right models for “small world” problems. The idea that any empirically adequate general theory of decision might exist for large worlds is purely conjectural.
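Since rank dependence may be unfamiliar, a compact statement of the standard form (abstracting from the particular parameterizations estimated in the econometric work cited above) may help. With outcomes ordered $x_1 \geq x_2 \geq \dots \geq x_n$ and probabilities $p_i$, a rank-dependent utility function evaluates a lottery as
\[
\mathrm{RDU} = \sum_{i=1}^{n} \left[ w\!\left( \sum_{j=1}^{i} p_j \right) - w\!\left( \sum_{j=1}^{i-1} p_j \right) \right] u(x_i),
\]
where $u$ is a utility function over outcomes and $w$ is a probability weighting function; when $w$ is the identity this reduces to expected utility, so the decision weight attached to an outcome depends on its rank among the possible outcomes, not only on its probability.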
However, P&B's main point about Bayesian rationality is more interesting. CP derives some of its support from philosophers and others imagining that thought should work well in large worlds. Once we are wondering about large worlds, we are in the territory of metaphysics. Suppose for the sake of argument that P&B are right that QP offers the correct general theory of human incentive response. This would invite the question of whether there is some general feature of the world that explains why both fundamental physical structure and fundamental cognitive structure follow QP rather than CP. The more modest philosophy would be to regard QP in psychology as just a placeholder that we borrow from physics, should we decide to give up on CP, because it is for now the only formally worked-out alternative.
As P&B note, one possible motivation for the immodest philosophy is the hypothesis that cognition follows QP axioms because the brain's physical structure allows it to perform quantum computation. But this is mere speculation. Another suggestion is that our view of CP as the default account results from a failure of metaphysical imagination that has been observed in other areas of science. The failure in question is the assumption that there must be a general account of reality that applies to all scales of structural complexity. This is a prejudice derived from treating human mechanical interactions with medium-sized stable “things” as the structural field we should universally project. Many philosophers presume a kind of atomism that is inconsistent with quantum physics (Ladyman & Ross Reference Ladyman and Ross2007). Arguably, the philosophy underlying CP-based accounts of cognition is what Russell (Reference Russell1921; Reference Russell1918–1924/1956, pp. 177–343) called logical atomism. Does denial of physical atomism put stress on logical atomism, or is this simply a sloppy homology?
Ladyman and Ross (Reference Ladyman and Ross2007) explain and defend the scale relativity of ontology. By this we mean that the persistent patterns that compress information and thus support generalizations and out-of-sample predictions – in a word, existents – arise at different scales that cross-classify reality and do not reduce to one another. Conscious, deliberate theoretical reasoning of the kind modeled by philosophers, we suggest, ignores scale relativity by generalizing the kind of atomistic decomposition that generally works well in everyday mechanical manipulations. P&B's hypothesis suggests an interesting twist. Most cognition is not deliberate theorizing; it is statistical data processing by neural networks, converging on relatively stable perceptions and expectations by drift diffusion rather than deduction. If the world does not parse neatly into components on a single scale, why should nervous systems evolved to compress and store information about that world incorporate a false restriction to the effect that all relations of similarity and difference, understood in terms of structural distance, be representable on a single scale? This perspective helps us understand the difference between the small worlds where Bayesian reasoning works and the large worlds where it does not: a small world is a restriction to one scale.
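As an illustration of the kind of accumulation process meant here, the following minimal simulation of a single drift-diffusion trial may be useful; the parameter values and function name are illustrative assumptions, not part of any model discussed above.

import numpy as np

def drift_diffusion_trial(drift=0.3, noise=1.0, threshold=1.0, dt=0.001, rng=None):
    # Evidence accumulates from zero with a constant drift plus Gaussian
    # noise until it crosses the upper (+threshold) or lower (-threshold)
    # boundary; the boundary reached is the "choice", and the elapsed
    # time is the response time. All parameters here are illustrative.
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return evidence > 0, t  # (choice, response time)

Across many such trials the process settles into stable choice proportions and response-time distributions without any step of deductive inference, which is the contrast with deliberate theorizing drawn above.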
Challenges remain for the application of QP to cognition. The formalism and the physics may not come apart as neatly as P&B suggest. Quantum mechanics has a well-confirmed law of time evolution, the Schrödinger equation, governing systems at least when they are not being measured. P&B offer no analogous law of time evolution of cognitive states. Schrödinger time evolution is based on the assignment of a Hamiltonian representing the energy of the system, and we lack any analogue of energy as well. Quantum mechanics involves effects where Planck's constant – the quantum of action – is not negligible, and the limit in which the predictions of classical mechanics are reached is that in which h goes to zero. There is no analogously well-defined limit in the application to cognition.
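For comparison, the missing analogue can be stated in standard textbook form. An isolated quantum system evolves according to
\[
i\hbar\,\frac{\partial}{\partial t}\,\psi(t) = \hat{H}\,\psi(t),
\]
equivalently $\psi(t) = e^{-i\hat{H}t/\hbar}\,\psi(0)$, where the Hamiltonian $\hat{H}$ represents the system's energy and $\hbar = h/2\pi$. A QP theory of cognition would need both a counterpart of $\hat{H}$ and a counterpart of the $h \to 0$ limit before the analogy with physics could be pressed this far.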