In their recent review, Algom and colleagues note that “Generations of cognitive psychologists appear to have been rendered oblivious to the developments in mathematical psychology on the importance and (im)possibility of distinguishing between parallel and serial processing based on straight line mean RT functions” (Algom et al. 2015, p. 88). Although H&O make a number of cogent points regarding the importance of eye movements, the nature of eccentricity and target salience, and the role of the functional viewing field (FVF), when it comes to their discussion of serial and parallel search, the authors unfortunately repeat the error of overinterpreting mean RT set-size functions.
As far back as Townsend (1971), researchers have demonstrated that a parallel search process can produce RTs that increase with set size, whereas a serial search process can produce flat RT slopes as a function of set size. Consequently, the authors' attack on item-based search (which in the visual-search literature is synonymous with serial search) rests on set-size RT slopes and therefore licenses erroneous inferences about processing. This is a long-standing issue with visual-search data, the ambiguity of which was so great that Wolfe (1998b) declared the serial/parallel distinction a dead end and initiated a switch in terminology, calling zero-slope functions “efficient” and positive-slope functions “inefficient” search, a move that masked but did not resolve the problem.
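The point is easy to demonstrate by simulation. A minimal sketch (our illustration, assuming exponential finishing times with arbitrary rates, not any model from the target article): an unlimited-capacity parallel exhaustive model yields mean RTs that rise with set size, while a serial model whose stage rate scales with load yields a flat set-size function:

```python
import numpy as np

rng = np.random.default_rng(0)
v, trials = 1.0, 100_000  # per-item processing rate; simulated trials per set size

for n in (2, 4, 8, 16):
    # Unlimited-capacity PARALLEL exhaustive model: all n items are processed
    # independently at rate v; the response waits for the slowest item
    # (the max of n exponential finishing times), so mean RT grows with n.
    parallel_rt = rng.exponential(1 / v, size=(trials, n)).max(axis=1)

    # SERIAL exhaustive model whose stage rate scales with load (rate = n * v):
    # n stages, each ~ Exp(n * v), so mean RT = n / (n * v) = 1 / v at every n.
    serial_rt = rng.exponential(1 / (n * v), size=(trials, n)).sum(axis=1)

    print(f"n={n:2d}  parallel mean RT = {parallel_rt.mean():.3f}  "
          f"serial mean RT = {serial_rt.mean():.3f}")
# The parallel model's mean RT rises with set size; the serial model's stays
# flat. The slope alone cannot identify the architecture.
```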
Townsend (1972; see also Townsend & Ashby 1983) pointed out that a parallel model can mimic a serial model by setting the intercompletion times of a parallel race (i.e., the unobservable times that elapse between successive item completions in the race) equal to the finishing times of items in a serial process. The necessary implication is that mean RT as a function of set size (in any search task, visual or memory) lacks the diagnostic power to differentiate serial from parallel processing. As noted by Townsend (1990) and acknowledged by Wolfe et al. (2010b), factorial designs combined with estimates of full RT distributions can distinguish serial from parallel models (and several other important classes of processing models).
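Townsend's mimicry construction can likewise be verified by simulation. In the sketch below (our illustration, assuming exponential stage durations), a parallel racer divides a fixed total capacity among the unfinished items and reallocates it after each completion; every intercompletion time is then distributed exactly like a serial stage time, and the two architectures produce identical RT distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
v, n, trials = 1.0, 4, 100_000  # total capacity, set size, simulated trials

# Standard serial model: one item at a time, each stage ~ Exp(v).
serial_rt = rng.exponential(1 / v, size=(trials, n)).sum(axis=1)

# Parallel mimic (after Townsend, 1972): n channels race, each at rate v / n.
# After every completion the freed capacity is reallocated, so the k remaining
# channels each run at rate v / k. The minimum of k independent Exp(v / k)
# variables is Exp(v), so each intercompletion time matches a serial stage time.
parallel_rt = np.zeros(trials)
for k in range(n, 0, -1):
    parallel_rt += rng.exponential(1 / (k * (v / k)), size=trials)  # = Exp(v)

print(f"serial:   mean = {serial_rt.mean():.3f}, var = {serial_rt.var():.3f}")
print(f"parallel: mean = {parallel_rt.mean():.3f}, var = {parallel_rt.var():.3f}")
# The two RT distributions are identical, so no mean-RT set-size function can
# tell these architectures apart.
```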
This theory and method, collectively known as Systems Factorial Technology (SFT), was fully developed and introduced 20 years ago by Townsend and Nozawa (1995), and it continues to be applied, developed, refined, and extended (Eidels et al. 2011; Houpt & Townsend 2012; Little et al. 2015; Townsend & Wenger 2004). SFT differentiates serial from parallel processing by analyzing RT distributions from conditions that vary the strength or quality of the stimulus to slow down or speed up processing along each of two dimensions (e.g., signal modality: audition versus vision; or signal location: top versus bottom). Crossing the two factors, each with two levels of strength, yields four conditions: LL (low salience on both dimensions), LH and HL (low salience on one dimension and high salience on the other), and HH (high salience on both dimensions). Diagnostic contrasts are computed by combining the RT distributions (i.e., survivor functions) from the four factorial conditions, and each architecture (e.g., serial or parallel) makes a different prediction for these contrasts. Serial models predict additivity: the change from LL to HH should equal the sum of the changes on each dimension separately, hence (RT_LL − RT_LH) − (RT_HL − RT_HH) = 0, where RT_XY denotes the mean RT in condition XY. By contrast, parallel models predict overadditivity (i.e., a positive contrast, for self-terminating processing) or underadditivity (a negative contrast, for exhaustive processing). Inhibitory and facilitatory models also predict under- and overadditivity, respectively (Eidels et al. 2011). These nonparametric tests allow entire classes of models to be tested and falsified; for instance, a completely negative contrast rules out all serial models.
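At the level of full distributions, the same logic is applied pointwise to survivor functions via the survivor interaction contrast, SIC(t) = [S_LL(t) − S_LH(t)] − [S_HL(t) − S_HH(t)]. A minimal sketch (our illustration; the channel rates and salience manipulation are hypothetical) computes the SIC and MIC for a simulated parallel self-terminating model, which should yield a positive contrast:

```python
import numpy as np

def survivor(rt, grid):
    """Empirical survivor function S(t) = P(RT > t) on a common time grid."""
    return (rt[:, None] > grid[None, :]).mean(axis=0)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, n_points=200):
    """SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]."""
    t_max = max(rt.max() for rt in (rt_ll, rt_lh, rt_hl, rt_hh))
    grid = np.linspace(0.0, t_max, n_points)
    s_ll, s_lh, s_hl, s_hh = (survivor(rt, grid)
                              for rt in (rt_ll, rt_lh, rt_hl, rt_hh))
    return grid, (s_ll - s_lh) - (s_hl - s_hh)

# Simulated parallel self-terminating (OR) model: RT is the faster of two
# independent channels; low salience halves a channel's rate.
rng = np.random.default_rng(2)
rates = {"ll": (0.5, 0.5), "lh": (0.5, 1.0), "hl": (1.0, 0.5), "hh": (1.0, 1.0)}
rts = {c: np.minimum(rng.exponential(1 / a, 50_000),
                     rng.exponential(1 / b, 50_000))
       for c, (a, b) in rates.items()}

grid, contrast = sic(rts["ll"], rts["lh"], rts["hl"], rts["hh"])
mic = (rts["ll"].mean() - rts["lh"].mean()) - (rts["hl"].mean() - rts["hh"].mean())
print(f"SIC range: [{contrast.min():.3f}, {contrast.max():.3f}]")  # positive, up to noise
print(f"MIC = {mic:.3f}")  # > 0: overadditive, as a parallel OR model predicts
```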
SFT also diagnoses other important facets of information processing beyond architecture, including workload capacity (how processing efficiency changes with the number of targets to process), channel (in)dependence (whether processing channels are mutually facilitatory or inhibitory), and the stopping rule (whether processing is self-terminating or exhaustive). The last property is of particular importance to the authors' simulation, because many of their results depend on the stopping rule.
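Workload capacity, for instance, is measured with the capacity coefficient, which compares the cumulative hazard of redundant-target RTs against the sum of the single-target cumulative hazards. A minimal sketch (our illustration; the channel rates are arbitrary) for the OR, first-terminating design:

```python
import numpy as np

def cumulative_hazard(rt, grid):
    """H(t) = -log S(t), estimated from the empirical survivor function."""
    s = (rt[:, None] > grid[None, :]).mean(axis=0)
    return -np.log(np.clip(s, 1e-9, 1.0))  # clip guards against log(0) in the tail

def capacity_or(rt_redundant, rt_a, rt_b, n_points=200):
    """OR-design capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C = 1: unlimited capacity (independent parallel baseline);
    C < 1: limited capacity; C > 1: super capacity."""
    # Evaluate where the survivor estimates are well supported.
    grid = np.linspace(np.percentile(rt_redundant, 5),
                       np.percentile(rt_redundant, 95), n_points)
    return grid, (cumulative_hazard(rt_redundant, grid)
                  / (cumulative_hazard(rt_a, grid) + cumulative_hazard(rt_b, grid)))

# Independent parallel channels at rates 1.0 and 1.2; the redundant-target RT is
# the faster of the two, so C(t) should hover near 1 (unlimited capacity).
rng = np.random.default_rng(3)
rt_a = rng.exponential(1 / 1.0, 50_000)
rt_b = rng.exponential(1 / 1.2, 50_000)
rt_ab = np.minimum(rng.exponential(1 / 1.0, 50_000),
                   rng.exponential(1 / 1.2, 50_000))
grid, c = capacity_or(rt_ab, rt_a, rt_b)
print(f"median C(t) = {np.median(c):.2f}")  # ~1.0
```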
These methods are particularly useful for verifying aspects of computational theories such as the one proposed by the authors. The authors assume that items within the FVF are processed in parallel and that the size of the FVF can be inferred by examining the slope of RT set-size functions. Like the theories the authors are attempting to displace, this procedure again asks too much of the set-size function. The focus on RT variability is more promising, and we applaud the general approach of breaking down the search tasks by difficulty and examining target-present and target-absent variability. However, the assumption of deterministic fixation durations colors the conclusions that the authors draw from variability. Even if the variance of fixation duration is independent of task difficulty and FVF size, the model will predict higher RT variance whenever more fixations are required, which could mitigate, if not wash out, the predicted decrease in variability at the highest difficulty. Ideally, one would test predictions about target-present and target-absent RT variability directly, as applications of SFT to visual search have done (see, e.g., Fific et al. 2008; Sung 2008).
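The reasoning is a direct consequence of the law of total variance: if RT is the sum of N fixation durations that are i.i.d. with mean E[D] and variance Var(D), and N varies across trials, then Var(RT) = E[N]·Var(D) + E[D]²·Var(N), which increases with the number of fixations even when Var(D) is fixed. A quick numerical check (our illustration; the duration and fixation-count parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
mu_d, sd_d, trials = 250.0, 50.0, 100_000  # fixation duration mean and SD (ms)

# If RT is the sum of N fixation durations D (i.i.d., independent of N), the
# law of total variance gives Var(RT) = E[N]*Var(D) + E[D]^2 * Var(N):
# variance grows with the expected number of fixations even when Var(D) is
# identical across difficulty levels.
for mean_extra in (1, 3, 7):  # hypothetical difficulty levels
    n = rng.poisson(mean_extra, trials) + 1          # at least one fixation
    durations = rng.normal(mu_d, sd_d, (trials, n.max())).cumsum(axis=1)
    rt = durations[np.arange(trials), n - 1]         # sum of the first n durations
    predicted = n.mean() * sd_d**2 + mu_d**2 * n.var()
    print(f"E[N] = {n.mean():.2f}  simulated Var(RT) = {rt.var():,.0f}  "
          f"law-of-total-variance prediction = {predicted:,.0f}")
```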
Could SFT be used to examine the properties of the FVF? Yes, and easily: one would only need to manipulate the detection difficulty of two targets in an array that either did or did not require eye movements. Related work by one of us (C-TY) using SFT in redundant-target detection has shown that processing is parallel, self-terminating, and of limited capacity when few eye movements are required. By contrast, when eye movements to a target are forced, processing instead conforms to serial processing (at least for some observers; see also Fific et al. 2010). We believe that these results lend preliminary support to the authors' inferences, but they do so via the more rigorous methods of SFT rather than via perilous mean RT set-size slopes.