Pothos & Busemeyer (P&B) present a compelling case for quantum formalisms in cognitive modeling. This commentary is more of an addendum: it concerns another area in which psychology meets quantum physics, this time because of a coincidence of formalisms that were developed and motivated independently. In quantum physics, the formalism grew out of the investigation of the (im)possibility of a classical explanation for quantum entanglement; in psychology, it grew out of the methodology of selective influences. Surprisingly, the meeting occurs entirely on classical probabilistic grounds, involving (at least so far) no quantum probability.
The issue of selective influences was introduced to psychology in Sternberg's (1969) article: the hypothesis that, for example, stimulus encoding and response selection are accomplished by different stages, with durations A and B, can be tested only in conjunction with the hypothesis that a particular factor (experimental manipulation) α influences A and not B, and that some other factor β influences B and not A. Townsend (1984) was the first to propose a formalization for the notion of selectively influenced process durations that are generally stochastically dependent. Townsend and Schweickert (1989) coined the term “marginal selectivity” to designate the most conspicuous necessary condition for selectiveness under stochastic dependence: if α→A and β→B, then the marginal distribution of A does not depend on β, nor does the marginal distribution of B depend on α. This condition was generalized to arbitrary sets of inputs (factors) and outputs (response variables) in Dzhafarov (2003). Selectiveness of influences, however, is a stronger property, as the following example demonstrates. Let α, β be binary 0/1 inputs, and consider outputs A and B constructed from a standard-normally distributed variable N in such a way that the expression for B explicitly involves α. The influence of α on B is then obvious, yet B is distributed as Norm(mean = β, variance = 1) under either value of α; that is, marginal selectivity is satisfied.
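For concreteness, one construction with the properties just described (an illustration only; many constructions fit the description) is A = α + N, B = β + (2α − 1)N. The short simulation below, a minimal sketch assuming this particular construction, shows that the marginal mean and variance of B track β alone, while the sign of the correlation between A and B changes with α, so the joint distribution of the outputs is not blind to α.

```python
# Illustrative construction (an assumption for this sketch, not a prescribed
# formula): A = alpha + N, B = beta + (2*alpha - 1)*N, with N standard normal.
# B's defining expression involves alpha, yet its marginal is Norm(beta, 1).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

for beta in (0, 1):
    for alpha in (0, 1):
        N = rng.standard_normal(n)
        A = alpha + N
        B = beta + (2 * alpha - 1) * N
        print(f"alpha={alpha}, beta={beta}: "
              f"mean(B)={B.mean():+.3f}, var(B)={B.var():.3f}, "    # ~beta, ~1
              f"corr(A,B)={np.corrcoef(A, B)[0, 1]:+.3f}")          # sign follows alpha
```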
It was first suggested in Dzhafarov (2003) that the selectiveness of influences (α→A, β→B,…) means that A, B,… can be represented as, respectively, f(α, R), g(β, R),…, where R is some random variable and f, g,… are some functions. By now, we know several classes of necessary conditions for selectiveness, that is, ways of looking at the joint distributions of A, B,… at different values of the inputs α, β,… and deciding that it is impossible to represent A, B,… as f(α, R), g(β, R),… (Dzhafarov & Kujala 2010; 2012b; in press b; Kujala & Dzhafarov 2008). For special classes of inputs and outputs we also know conditions that are both necessary and sufficient for such a representability. Thus, if A, B,…, X and α, β,…, χ all have finite numbers of values, then α→A, β→B,…, χ→X if and only if the following linear feasibility test is satisfied: the linear system MQ = P has a solution Q with non-negative components, where P is a vector of probabilities for all combinations of outputs under all combinations of inputs, and M is a Boolean matrix entirely determined by these combinations (Dzhafarov & Kujala 2012b).
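The following is a minimal sketch of how such a linear feasibility test can be run for the smallest case, two binary inputs and two binary outputs. The encoding below (columns of M indexed by deterministic assignments of a value of A to each value of α and a value of B to each value of β) is one illustrative choice rather than the specific construction of Dzhafarov and Kujala (2012b), and the function and variable names are hypothetical.

```python
# A sketch of the linear feasibility test for two binary inputs (alpha, beta)
# and two binary outputs (A, B).  Hypothetical names; the encoding of M and P
# is one illustrative choice.
import itertools
import numpy as np
from scipy.optimize import linprog

def selectiveness_feasible(p):
    """p[(a, b)][(A, B)]: observed probability of the output pair (A, B) under
    inputs (alpha, beta) = (a, b).  Returns True iff M Q = P has a solution
    Q >= 0, i.e. A, B are representable as f(alpha, R), g(beta, R)."""
    treatments = list(itertools.product((0, 1), repeat=2))  # (alpha, beta)
    outcomes = list(itertools.product((0, 1), repeat=2))    # (A, B)
    # Columns: deterministic "strategies" s = (A at alpha=0, A at alpha=1,
    #                                          B at beta=0,  B at beta=1).
    strategies = list(itertools.product((0, 1), repeat=4))
    M = np.zeros((len(treatments) * len(outcomes), len(strategies)))
    P = np.zeros(len(treatments) * len(outcomes))
    for i, ((a, b), (A, B)) in enumerate(itertools.product(treatments, outcomes)):
        P[i] = p[(a, b)][(A, B)]
        for j, s in enumerate(strategies):
            M[i, j] = 1.0 if (s[a] == A and s[2 + b] == B) else 0.0
    res = linprog(np.zeros(len(strategies)), A_eq=M, b_eq=P,
                  bounds=(0, None), method="highs")
    return res.success

# Example: outputs that are independent and each tied to its own input pass.
p = {(a, b): {(A, B): (0.8 if A == a else 0.2) * (0.7 if B == b else 0.3)
              for A in (0, 1) for B in (0, 1)}
     for a in (0, 1) for b in (0, 1)}
print(selectiveness_feasible(p))  # True
```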
In physics, the story also begins in the 1960s, when Bell (1964) found a way to analyze on an abstract level the Bohmian version of the Einstein–Podolsky–Rosen (EPR/B) paradigm. In the simplest case, the paradigm involves two spin-half entangled particles running away from each other. At some moment of time (with respect to an inertial frame of reference), each particle's spin is measured by a detector with a given “setting,” that is, a spatial orientation axis. For every axis chosen (input), the spin measurement on a particle yields either “up” or “down” (random output). Denoting these binary outputs A for one particle and B for the other, let the corresponding settings be α and β. Special relativity prohibits any dependence of A on β or of B on α. Bell formalized classical determinism with this prohibition as the representability of A, B as f(α, R), g(β, R), with the same meaning of f, g, and R as above. He then derived a necessary condition for such a representability, in the form of an inequality involving three settings (x, y for α and y, z for β). The characterizations of Bell's derivation as “one of the profound discoveries of the [20th] century” (Aspect 1999) and even “the most profound discovery in science” (Stapp 1975) are often quoted in the scientific and popular literature. A generalization of this inequality to any binary α and β, known as the CHSH inequalities (after Clauser et al. 1969), was shown by Fine (1982) to be a necessary and sufficient condition for representing A, B as f(α, R), g(β, R) (assuming marginal selectivity). The CHSH inequalities are a special case of the linear feasibility test described above as developed in psychology. In physics, this test is described in Werner and Wolf (2001a; 2001b) and Basoalto and Percival (2003).
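As a sketch of the CHSH criterion in this form, with outputs coded as ±1 and two settings per particle, the function below checks the four CHSH inequalities for a given table of expected products; the setting labels and the sample correlation values (the textbook quantum optimum) are illustrative and serve only to show what a violation looks like.

```python
# A sketch of the CHSH test for two binary settings per particle, with the
# outputs A, B coded as +/-1.  The setting labels 1, 2 and the sample numbers
# are illustrative.
import math
from itertools import product

def chsh_satisfied(E):
    """E[(x, y)]: expected product of A and B at settings alpha = x, beta = y.
    Returns True iff all four CHSH inequalities hold, i.e.
    |±E(1,1) ± E(1,2) ± E(2,1) ± E(2,2)| <= 2 with exactly one minus sign."""
    keys = ((1, 1), (1, 2), (2, 1), (2, 2))
    s_max = max(
        abs(sum(sign * E[k] for sign, k in zip(signs, keys)))
        for signs in product((1, -1), repeat=4) if signs.count(-1) == 1
    )
    return s_max <= 2

# Perfectly correlated classical outputs satisfy the inequalities (|S| = 2)...
print(chsh_satisfied({(1, 1): 1, (1, 2): 1, (2, 1): 1, (2, 2): 1}))   # True
# ...whereas the quantum optimum reaches |S| = 2*sqrt(2) and violates them.
q = 1 / math.sqrt(2)
print(chsh_satisfied({(1, 1): q, (1, 2): q, (2, 1): q, (2, 2): -q}))  # False
```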
How does one explain these parallels between the two very different issues? The answer proposed in Dzhafarov and Kujala (2012a; 2012b) is that measurements of noncommuting observables on one and the same particle are mutually exclusive, and can therefore be viewed as different values of one and the same input. Different inputs in the EPR/B paradigm are spin measurements on different particles, whereas input values are different settings for each particle. This is completely analogous to, for example, α = left flash and β = right flash in a double-detection experiment being the inputs for two judgments (A = I see/don't see α, B = I see/don't see β), with the intensities of either flash being the input values.
These parallels could be beneficial for both psychology and physics. Thus, the cosphericity and distance tests developed in psychology (Dzhafarov & Kujala, in press b; Kujala & Dzhafarov 2008) could be applicable to non-Bohmian versions of EPR, for example, those involving momentum and location. We see the main challenge, however, in finding a principled way to quantify and classify the degrees and forms of both compliance with and violations of selectiveness of influences (or classical determinism). In physics we have only one alternative to classical determinism: quantum mechanics. This may not be enough for biological and social behavior (Dzhafarov & Kujala, in press a).