The main thesis of the target article is that the mind is based on a rational use of limited resources. We agree that this is a useful organizing principle as long as we interpret "rational reasoning" as deriving from coherent axioms. However, when the mind is constrained by limited resources, the issue of how best to choose axioms of rationality becomes a matter of debate. In particular, the target article relies heavily on Bayesian reasoning tools that encounter serious tractability problems, because the dimension of the probability space grows exponentially as the number of variables increases. This is a well-known problem, recognized by the proponents of Bayesian cognition (e.g. Tenenbaum et al. 2011). Consequently, resource-limited extensions beyond basic Bayesian reasoning are required that rely on various approximations to simplify computations, for example, sampling approximations (Sanborn et al. 2010) and/or Bayesian networks that truncate complex conditional dependencies (Lake et al. 2015). Are these approximations really resource rational? And are these the only ways to meet the resource constraints for reasoning under uncertainty?
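To make the tractability problem concrete, here is a minimal sketch in Python (our illustration, not from the target article; the choice of three values per variable is arbitrary). It simply counts the entries of a full joint probability table, which grows as k to the power n:

    # Size of a full joint probability table for n variables, k values each.
    k = 3  # illustrative: three possible values per variable
    for n in [2, 5, 10, 20]:
        print(f"n = {n:2d} variables -> {k**n:,} joint probabilities")
    # n =  2 variables -> 9 joint probabilities
    # n =  5 variables -> 243 joint probabilities
    # n = 10 variables -> 59,049 joint probabilities
    # n = 20 variables -> 3,486,784,401 joint probabilities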
We propose another resource rational alternative, where, as above, rational status is justified by an axiomatic foundation: quantum probability theory (e.g. Aerts et al. 2013; Basieva et al. 2018; Bruza et al. 2015; Khrennikov et al. 2018; Pothos & Busemeyer 2013; Wang et al. 2014; Yukalov & Sornette 2011). One advantage of quantum probability theory is that it provides more parsimonious (less complex) descriptions than Bayesian approaches based on Kolmogorov probability theory (Atmanspacher & Römer 2012). The dimension of the probability space does not increase exponentially, and in certain circumstances, it does not increase at all with an increasing number of variables. How does this work?
Kolmogorov probability theory (which forms the basis of Bayesian theory) is founded on the assignment of probabilities to events, represented as subsets of a sample space, which assumes a complete Boolean algebra of events. Quantum theory assigns probabilities to measurement outcomes, represented as subspaces of a vector space, which entails only a partial Boolean algebra. A theorem by Gleason (1957) states that any additive measure used to assign probabilities to subspaces of a vector space (with dimension greater than 2) can be described by quantum probabilities. The non-Boolean aspect of quantum theory arises from the use of non-commuting observables, which implies sequence effects for the results of successive measurements. Wang et al. (2014) demonstrated convincingly how powerful quantum modeling of such order effects can be.
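As a rough illustration of how non-commuting observables produce sequence effects, the following sketch (our own; the 2-dimensional state and the projector angles are arbitrary choices made only for illustration) computes the probability of affirming question A and then question B, and the reverse order, as squared lengths of successive projections. The two orders disagree precisely because the projectors do not commute:

    import numpy as np

    psi = np.array([1.0, 0.0])  # unit state vector (2-D, purely illustrative)

    def projector(angle):
        # Rank-1 projector onto the ray at `angle` in the plane.
        v = np.array([np.cos(angle), np.sin(angle)])
        return np.outer(v, v)

    A = projector(np.pi / 6)  # projector for "yes to question A"
    B = projector(np.pi / 3)  # projector for "yes to question B"

    # Sequence probabilities via squared lengths of successive projections:
    p_A_then_B = np.linalg.norm(B @ A @ psi) ** 2   # 0.5625
    p_B_then_A = np.linalg.norm(A @ B @ psi) ** 2   # 0.1875
    print(p_A_then_B, p_B_then_A)  # unequal, because A @ B != B @ A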
The advantage of using a vector space representation is that different measurements can be described by changing the basis used to define them. There are infinitely many ways to select a basis within a fixed, finite-dimensional vector space, which can then provide infinitely many ways to describe concepts within a limited cognitive resource. An example will help illustrate this important point. Consider a game with two players, where each player has three moves. When planning a move, each player needs to estimate the probability of the opponent's move and then consider the probability of his/her own move. According to a Bayesian probability model, this requires forming 3 × 3 = 3² = 9 joint probabilities that each of the two players takes one of three actions. If there are n players, then a Bayesian model requires 3ⁿ joint probabilities, producing exponential growth in the number of probabilities. In contrast, according to the quantum approach, the state of the three actions of each player can be represented by a unit length vector in a 3-dimensional space. The probabilities assigned to different players can be obtained by "rotating" the basis used to describe the vector within the same 3-dimensional space. In this way, n players are described by n different bases within the same 3-dimensional space.
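A minimal numeric sketch of this point (our illustration; the random rotations are an assumption, standing in for whatever player-specific bases a real model would use): a single unit vector in 3 dimensions, re-expressed in n different bases, yields a probability distribution over three moves for each of n players, with storage growing linearly in n rather than as 3ⁿ:

    import numpy as np

    rng = np.random.default_rng(0)
    psi = np.array([1.0, 0.0, 0.0])  # one unit state vector in 3-D

    def random_basis():
        # A random orthonormal basis of R^3 (a rotation of the standard basis).
        q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        return q

    n_players = 5
    bases = [random_basis() for _ in range(n_players)]  # n bases, one 3-D space

    for i, U in enumerate(bases):
        amplitudes = U.T @ psi   # coordinates of psi in player i's basis
        probs = amplitudes ** 2  # squared amplitudes; they sum to 1
        print(f"player {i}: {np.round(probs, 3)} (sum = {probs.sum():.3f})")

The same 3-dimensional vector thus serves all n players; only the n change-of-basis matrices need to be stored.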
There is a cognitive cost to representing different measurements using different bases, and it is expressed by a quantum-like uncertainty principle. In our n-person game example, it is not possible to be certain about the moves of all players simultaneously. Increasing certainty about the move of one player implies increasing uncertainty about others. In quantum physics, the uncertainty principle is a consequence of the structure of the physical world; for psychology, we propose that its relation to limited cognitive resources may be a structural feature of the mental world.
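The tradeoff can be seen directly in the game sketch above (again our illustration; the 45-degree rotation relating the two players' bases is an arbitrary choice): a state that makes player 1's move certain spreads probability across player 2's moves, so certainty about both players at once is impossible:

    import numpy as np

    theta = np.pi / 4  # illustrative rotation relating the two players' bases
    U = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])  # player 2's basis

    psi = np.array([1.0, 0.0, 0.0])   # certain: player 1 takes move 1
    probs_player2 = (U.T @ psi) ** 2  # player 2's move probabilities
    print(probs_player2)              # ~[0.5, 0.5, 0.0] -- no longer certain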
Which approach to forming a rational reasoning system under uncertainty is most appropriate? Partly, this is a computational problem (i.e. which approach provides the optimal balance between precision and simplicity), and partly this is an empirical problem (i.e. which approach better predicts the apparent inconsistencies/errors in human judgments).
Going beyond competing axiomatic reasoning systems, a broader issue needs to be addressed. It is natural to attempt to characterize and quantify humans' limited cognitive resources, and then to argue that decisions are optimal in light of the corresponding limitations. However, humans have evolved to make decisions when quantification is impossible. When one quantifies cognition, one specifies uncertainty in terms of distributions over any and all variables in the system. This idea lies at the heart of resource rational analysis. Yet most information in life is vague and defies quantification. When, for example, we must decide whom to marry, which job offer to accept, what house to buy, or any of the normal decisions humans face, we cannot specify the relevant distributions in any way we trust. This lack of precision does not mean we have no useful information; there are almost always various forms of qualitative information. We may not know how to value a house cost difference of $800 when we are considering houses costing $250,000, but we know with high probability that $900,000 is out of our budgetary range. Thus, defining rationality in terms of quantified distributions of variables, even under assumptions of cognitive limitations, may miss the key issue: we must define rationality by the actions of humans facing vague information, vagueness that humans must have evolved to handle.