
Eye movements are an important part of the story, but not the whole story

Published online by Cambridge University Press: 24 May 2017

Kyle R. Cave*
Affiliation:
University of Massachusetts, Amherst, Department of Psychological and Brain Sciences, Amherst, MA 01003. kcave@psych.umass.edu; http://people.umass.edu/kcave/

Abstract

Some previous accounts of visual search have emphasized covert attention at the expense of eye movements, and others have focused on eye movements while ignoring covert attention. Both selection mechanisms are likely to contribute to many searches, and a full account of search will probably need to explain how the two interact to find visual targets.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

Eye movements are an important part of many laboratory search tasks and of most real-world search tasks. A complete account of visual search and visual attention will require an explanation of how eye movements are guided and how they contribute to selection. Furthermore, the ability to track eye movements has led to a succession of new insights into how attention is controlled. There are many examples, but one is Olivers et al.'s (2006) work investigating the working memory representations guiding search. Future eye tracking studies will almost certainly produce valuable new insights. Thus, in terms of both theory and method, eye movements have played a key role, and will continue to do so.

In the target article's framework, eye tracking data are combined with assumptions about the functional viewing field (FVF), the area within the visual field that is actively processed. The FVF is assumed to be small in difficult visual tasks, so that attentional processing during a single fixation is confined to a small region, and many fixations are necessary to search through a large array. The FVF can be expanded for easier tasks, allowing them to be completed with fewer fixations that are farther apart. As noted in the target article, the concept has been around for some time, but it is nonetheless difficult to demonstrate experimentally that the FVF is actually adjusted during search as Hulleman & Olivers (H&O) suggest; it cannot be measured as straightforwardly as eye movements can be tracked. However, H&O point out that gaze-contingent search experiments (Rayner & Fisher 1987; Young & Hulleman 2013) provide good evidence that information is taken in from smaller regions in more difficult tasks. The FVF concept has also been useful in interpreting attentional phenomena other than search. For instance, Chen and Cave (2013; 2014; 2016) found that patterns of distractor interference that did not fit with perceptual load theory (Lavie 2005) or with dilution accounts (Tsal & Benoni 2010; Wilson et al. 2011) could be explained by assuming that more difficult tasks induce subjects to adopt a narrower FVF (or, as it is called in those studies, attentional zoom).
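To make the coverage logic concrete, here is a minimal simulation sketch, entirely my own construction rather than anything proposed by H&O: items are scattered in a unit square, each fixation processes every item within an assumed FVF radius, and gaze moves greedily to the nearest unprocessed item. The layout, the radius values, and the scan strategy are all illustrative assumptions.

import math
import random

def fixations_needed(items, fvf_radius):
    """Greedy scan: fixate the unprocessed item nearest the current gaze,
    then mark every item within fvf_radius of that fixation as processed."""
    remaining = set(items)
    gaze, fixations = (0.5, 0.5), 0          # start at the display center
    while remaining:
        gaze = min(remaining, key=lambda p: math.dist(gaze, p))
        fixations += 1
        remaining = {p for p in remaining if math.dist(gaze, p) > fvf_radius}
    return fixations

random.seed(1)
display = [(random.random(), random.random()) for _ in range(60)]
for radius in (0.05, 0.1, 0.2, 0.4):         # small FVF = hard task, large = easy
    print(f"FVF radius {radius:.2f}: {fixations_needed(display, radius)} fixations")

Nothing hinges on the greedy strategy; any reasonable scan shows the same qualitative trade-off between FVF size and the number of fixations required.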

Thus, I agree on the importance of combining eye tracking data with assumptions about variations in the FVF to build accounts of visual search. It is also clear that attentional theories need to be able to explain search in scenes that are not easily segmented into separate items. Earlier theories were clearly limited in these respects, in part because they were originally formulated at a time when it was more difficult to track eye movements. However, the proposed framework has other limitations. It appears to be built on the assumption that once the FVF size is set, there is nothing else for covert attention to do. That seems surprising, given the abundant evidence that covert attention can select locations based on color, shape, and other simple features within a fixation (Cave & Zimmerman 1997; Hoffman & Nelson 1981; Kim & Cave 1995). This selection can be done relatively quickly and efficiently (Mangun & Hillyard 1995). I am not trying to argue that attentional selection is fundamentally limited to one item at a time, but it is hard to believe that covert selection would not be employed during search to lower the processing load and limit interference within each fixation. In fact, shifts in covert attention can be tracked from one hemisphere to the other in the course of visual search (Woodman & Luck 1999). Given that covert attention can be adjusted more quickly than a saccade can be programmed and executed, it should be able to contribute substantially to investigating potential target regions and to choosing the next saccade, as suggested by a group of studies including Deubel and Schneider (1996) and Bichot et al. (2005).

Over the years, many attention researchers have tried to study visual search by focusing on covert attention and ignoring eye movements, while others have focused on eye movements and ignored covert attention. If the H&O framework is truly to be a hybrid approach, it should allow the possibility that many searches are accomplished through an interaction between eye movements and covert attention, of the sort sketched below.
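As one concrete illustration of such an interaction, consider the following toy model, which is my own sketch and not a proposal from the target article: covert attention applies a fast feature filter, inspects the filtered candidates inside the current FVF without moving the eyes, and saccades serve only to carry the FVF to promising candidates elsewhere. Item, hybrid_search, and every parameter value here are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Item:
    x: float
    y: float
    color: str            # simple feature available for covert guidance
    is_target: bool = False

def hybrid_search(items, target_color, fvf_radius=0.15):
    """Alternate covert and overt selection until the target is found."""
    gaze, fixations = (0.5, 0.5), 0
    # covert feature filter: only items sharing the target color are candidates
    candidates = [i for i in items if i.color == target_color]
    while candidates:
        fixations += 1
        inside = [i for i in candidates if math.dist(gaze, (i.x, i.y)) <= fvf_radius]
        for item in inside:           # covert shifts within the FVF: no eye movement
            if item.is_target:
                return fixations
        candidates = [i for i in candidates if i not in inside]
        if candidates:
            # overt stage: saccade to the nearest remaining candidate,
            # carrying the FVF to a new region of the display
            nearest = min(candidates, key=lambda i: math.dist(gaze, (i.x, i.y)))
            gaze = (nearest.x, nearest.y)
    return None                       # target absent

items = [Item(0.2, 0.3, "red"), Item(0.8, 0.7, "red", is_target=True),
         Item(0.5, 0.5, "green")]
print(hybrid_search(items, "red"))    # 3 fixations for this layout

Even in this crude form, the division of labor matters: the covert filter prunes the candidate set before any saccade is programmed, and covert inspection within the FVF determines whether a further saccade is needed at all.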

In considering the history of attention research, it is worth noting that the idea that attention can be adjusted between a broad distribution and a narrow focus has been explored in contexts other than Sanders' (1970) discussion of the FVF mentioned in the target article. There is, of course, Eriksen and St. James' (1986) zoom lens analogy, but perhaps even more relevant for this discussion is Treisman and Gormican's (1988) discussion of how attention makes information about stimulus location available. Here is their description:

Attention selects a filled location within the master map and thereby temporarily restricts the activity from each feature map to the features that are linked to the selected location. The finer the grain of the scan, the more precise the localization and, as a consequence, the more accurately conjoined the features present in different maps will be. (Treisman & Gormican 1988, p. 17)

Although they do not explicitly refer to the functional field of view, it seems they had a similar concept in mind, as discussed in Cave (2012).
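Their "grain of the scan" can be given a toy rendering, which is my own construction and not Treisman and Gormican's model: each feature map lists (location, feature) entries, and attention conjoins every feature falling inside the selected window, so a coarse window spanning two items can bind one item's color to the other item's shape.

color_map = {0.2: "red", 0.6: "green"}   # location -> color feature map
shape_map = {0.2: "X",   0.6: "O"}       # location -> shape feature map

def bind(center, radius):
    """Conjoin all features whose locations fall inside the attended window."""
    colors = [c for loc, c in color_map.items() if abs(loc - center) <= radius]
    shapes = [s for loc, s in shape_map.items() if abs(loc - center) <= radius]
    return [(c, s) for c in colors for s in shapes]

print(bind(0.2, 0.1))   # fine grain: [('red', 'X')], the correct conjunction
print(bind(0.4, 0.3))   # coarse grain: also yields ('red', 'O') and ('green', 'X')

The finer window recovers only the correct color-shape pairing; the coarser window also produces illusory conjunctions, which is exactly the link between localization precision and binding accuracy that the quoted passage describes.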

Another aspect of this framework is the move away from visual input that is organized into separate items. The motivation for this is clearly spelled out, but what is not explained is how the concept of object-based attention fits into this framework. There are some circumstances in which visual selection is apparently not shaped by the boundaries defining objects and groups (Chen 1998; Goldsmith & Yeari 2003; Shomstein & Yantis 2002), but they are rare, and the object organization of a display often affects attentional allocation even when it is not relevant to the task (Egly et al. 1994; Harms & Bundesen 1983). Is the claim in the target article that object and group boundaries play no role in visual search, even though their effects are difficult to avoid in other attentional tasks?

References

Bichot, N. P., Rossi, A. F. & Desimone, R. (2005) Parallel and serial neural mechanisms for visual search in macaque area V4. Science 308:529–34.
Cave, K. R. (2012) FIT: Foundation for an integrative theory. In: From perception to consciousness: Searching for Anne Treisman, ed. Wolfe, J. M. & Robertson, L., pp. 139–45. Oxford University Press.
Cave, K. R. & Zimmerman, J. M. (1997) Flexibility in spatial attention before and after practice. Psychological Science 8:399–403.
Chen, Z. (1998) Switching attention within and between objects: The role of subjective organization. Canadian Journal of Experimental Psychology 52:7–16.
Chen, Z. & Cave, K. R. (2013) Perceptual load vs. dilution: The role of attentional focus, stimulus category, and target predictability. Frontiers in Psychology 4:327, 1–14.
Chen, Z. & Cave, K. R. (2014) Constraints on dilution from a narrow attentional zoom reveal how spatial and color cues direct selection. Vision Research 101:125–37.
Chen, Z. & Cave, K. R. (2016) Zooming in on the cause of the perceptual load effect in the go/no-go paradigm. Journal of Experimental Psychology: Human Perception and Performance 42(8):1072–87. Available at: http://dx.doi.org/10.1037/xhp0000168.
Deubel, H. & Schneider, W. X. (1996) Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research 36:1827–37.
Egly, R., Driver, J. & Rafal, R. D. (1994) Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General 123:161–77. doi: 10.1037//0096-3445.123.2.161.
Eriksen, C. W. & St. James, J. D. (1986) Visual attention within and around the field of focal attention: A zoom lens model. Perception and Psychophysics 40:225–40.
Goldsmith, M. & Yeari, M. (2003) Modulation of object-based attention by spatial focus under endogenous and exogenous orienting. Journal of Experimental Psychology: Human Perception and Performance 18:26–28.
Harms, L. & Bundesen, C. (1983) Color segregation and selective attention in a nonsearch task. Perception and Psychophysics 33:11–19.
Hoffman, J. E. & Nelson, B. (1981) Spatial selectivity in visual search. Perception and Psychophysics 30:283–90.
Kim, M. S. & Cave, K. R. (1995) Spatial attention in visual search for features and feature conjunctions. Psychological Science 6:376–80.
Lavie, N. (2005) Distracted and confused? Selective attention under load. Trends in Cognitive Sciences 9:75–82.
Mangun, G. R. & Hillyard, S. A. (1995) Mechanisms and models of selective attention. In: Electrophysiology of mind: Event-related brain potentials and cognition, ed. Rugg, M. D. & Coles, M. G. H., pp. 40–85. Oxford University Press.
Olivers, C. N. L., Meijer, F. & Theeuwes, J. (2006) Feature-based memory-driven attentional capture: Visual working memory content affects visual attention. Journal of Experimental Psychology: Human Perception and Performance 32:1243–65.
Rayner, K. & Fisher, D. L. (1987) Eye movements and the perceptual span during visual search. In: Eye movements: From physiology to cognition, ed. O'Regan, J. K. & Levy-Schoen, A., pp. 293–302. Elsevier Science.
Sanders, A. F. (1970) Some aspects of the selective process in the functional visual field. Ergonomics 13:101–17. doi: 10.1080/00140137008931124.
Shomstein, S. S. & Yantis, S. (2002) Object-based attention: Sensory modulation or priority setting? Perception and Psychophysics 64(1):41–51.
Treisman, A. & Gormican, S. (1988) Feature analysis in early vision: Evidence from search asymmetries. Psychological Review 95:15–48.
Tsal, Y. & Benoni, H. (2010) Diluting the burden of load: Perceptual load effects are simply dilution effects. Journal of Experimental Psychology: Human Perception and Performance 36:1645–56.
Wilson, D. E., Muroi, M. & MacLeod, C. M. (2011) Dilution, not load, affects distractor processing. Journal of Experimental Psychology: Human Perception and Performance 37:319–35.
Woodman, G. F. & Luck, S. J. (1999) Electrophysiological measurement of rapid shifts of attention during visual search. Nature 400:867–69. doi: 10.1038/23698.
Young, A. H. & Hulleman, J. (2013) Eye movements reveal how task difficulty moulds visual search. Journal of Experimental Psychology: Human Perception and Performance 39:168–90.