
The “item” as a window into how prior knowledge guides visual search

Published online by Cambridge University Press:  24 May 2017

Rachel Wu
Affiliation:
Department of Psychology, University of California, Riverside, Riverside, CA 92521. rachel.wu@ucr.edu
Jiaying Zhao
Affiliation:
Department of Psychology and Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, BC V6T 1Z4, Canada. jiayingz@psych.ubc.ca

Abstract

We challenge the central idea proposed in Hulleman & Olivers (H&O) by arguing that the “item” is still useful for understanding visual search and for developing new theoretical frameworks. The “item” is a flexible unit that represents not only an individual object, but also a bundle of objects that are grouped based on prior knowledge. Uncovering how the “item” is represented based on prior knowledge is essential for advancing theories of visual search.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2017 

Hulleman & Olivers (H&O) present an elegant framework that aims to help us better understand the mechanisms of visual search. This framework proposes using fixations, rather than individual items, as the conceptual unit of visual search. The framework is broadly useful because it accounts for many extant findings and identifies gaps (such as embodied visual search) in the existing visual search literature.

Although this framework has its strengths, we disagree with the main argument that the item is no longer useful for understanding visual search. We do, however, agree with Olivers' earlier argument (Olivers et al. 2011) that visual search relies on an attentional template – a prioritized working memory representation – that is typically established before a task begins, via prior knowledge and/or explicit instructions. This attentional template evolves in various ways on a shorter time scale as the task progresses (e.g., Nako et al. 2015) and on a longer time scale as the learner gains more experience (e.g., Wu et al. 2015).

We argue that the “item” is still useful for understanding visual search and developing new theoretical frameworks. Critical to our argument is the idea that the “item” (contained in the attentional template) is a flexible unit that can represent not only an individual feature or object, but also a bundle of features or objects that are grouped based on prior knowledge. Such grouping, via either explicit or implicit cues, can result in the unitization of features or objects into an “item,” which increases the amount of information held in working memory during visual search, and thus typically facilitates search performance. However, because many visual search studies control for prior experiences by using simple visual stimuli or equating prior knowledge across conditions, the nature and the limits of the attentional template are unclear. The use of prior knowledge is only mentioned briefly in H&O, but we believe that incorporating prior knowledge into visual search frameworks is critical for advancing the research area.

A growing number of studies on visual search (as well as visual working memory) demonstrate the benefits of prior knowledge for search performance. For example, Nako et al. (2014a) confirmed that searching for one item (e.g., a letter) is more efficient than searching for two or more items (e.g., multiple letters), as evidenced by both neural measures (an attenuated N2pc) and behavioral measures (slower reaction times and lower accuracy) in multiple-item search. Importantly, they demonstrated that when category knowledge can be applied during visual search, one-item search and multiple-item search show very similar neural and behavioral outcomes. Nako et al. (2014b) and Wu et al. (2015) replicated and extended this initial finding using real-world objects, such as clothing, kitchen items, and human faces. In addition to prior knowledge about object category, grouping cues can also improve visual search. For example, Wu et al. (2016) showed that grouping a heterogeneous set of novel alien stimuli by an abstract rule (same versus different) can facilitate search performance.

Grouping of objects can occur not only by means of shared features and spatial proximity, but also by reliable co-occurrences over space and time. The visual system is remarkably efficient at detecting probabilities of co-occurrences among individual objects (e.g., Fiser & Aslin 2001; Turk-Browne et al. 2005), and this ability is present in early infancy (Fiser & Aslin 2002; Kirkham et al. 2002; Saffran et al. 1996; Wu et al. 2011). A direct consequence of learning the co-occurrences between objects is that the individual objects are implicitly represented as one unit (Mole & Zhao 2016; Schapiro et al. 2012; Wu et al. 2011; 2013; Zhao & Yu 2016). Such unitized representations implicitly and spontaneously draw attention to the co-occurring objects during visual search (Wu et al. 2013; Yu & Zhao 2015; Zhao & Luo 2014; Zhao et al. 2013), interfere with global processing of the visual array (Hall et al. 2015; Zhao et al. 2011), and increase the capacity of visual working memory (Brady et al. 2009; see also Brady et al. 2011). These findings support the idea that individual objects can be grouped into one "item" based on prior knowledge of co-occurrences, and such representations determine the allocation of attention, group objects into chunks, and facilitate search performance.

Besides its benefits for visual search outcomes, prior knowledge also carries costs. When participants searched for one item from a category (e.g., the letter "A") and a foil from the same category appeared (e.g., the letter "R"), they exhibited attentional capture by the foil at both neural and behavioral levels (Nako et al. 2014a). Wu et al. (2017) suggest that this "foil effect" is predicted by the level of prior experience (e.g., distinguishing healthy from unhealthy foods based on dieting experience). Taken together, these recent studies show how categorically based attentional templates (i.e., prior knowledge) can help overcome efficiency limitations in visual search by expanding the scope of target search, yet at the cost of false alarms to non-targets that fall within the search category.

In sum, we agree that investigating individual objects alone may not provide a deeper understanding of visual search, but the "item" remains very useful. A better understanding of the bidirectional interactions between attention and learning would allow us to build ecologically valid models that capture cascading effects during visual search, advancing this research area. Moreover, understanding how prior knowledge affects visual search and related attentional abilities has important implications for attention training. Given the growing literature showing the impact of knowledge on attention, improving attentional abilities may require training knowledge, rather than training attention per se.

References

Brady, T. F., Konkle, T. & Alvarez, G. A. (2009) Compression in visual working memory: Using statistical regularities to form more efficient memory representations. Journal of Experimental Psychology: General 138:487–502.
Brady, T. F., Konkle, T. & Alvarez, G. A. (2011) A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision 11:1–34.
Fiser, J. & Aslin, R. N. (2001) Unsupervised statistical learning of higher-order spatial structures from visual scenes. Psychological Science 12:499–504.
Fiser, J. & Aslin, R. N. (2002) Statistical learning of new visual feature combinations by infants. Proceedings of the National Academy of Sciences of the United States of America 99(24):15822–26.
Hall, M., Mattingley, J. & Dux, P. (2015) Distinct contributions of attention and working memory to visual statistical learning and ensemble processing. Journal of Experimental Psychology: Human Perception and Performance 41:1112–23.
Kirkham, N. Z., Slemmer, J. A. & Johnson, S. P. (2002) Visual statistical learning in infancy: Evidence for a domain general learning mechanism. Cognition 83:B35–42.
Mole, C. & Zhao, J. (2016) Vision and abstraction: An empirical refutation of Nico Orlandi's non-cognitivism. Philosophical Psychology 29:365–73.
Nako, R., Smith, T. J. & Eimer, M. (2015) Activation of new attentional templates for real-world objects in visual search. Journal of Cognitive Neuroscience 27:902–12.
Nako, R., Wu, R. & Eimer, M. (2014a) Rapid guidance of visual search by object categories. Journal of Experimental Psychology: Human Perception and Performance 40(1):50–60.
Nako, R., Wu, R., Smith, T. J. & Eimer, M. (2014b) Item and category-based attentional control during search for real-world objects: Can you find the pants among the pans? Journal of Experimental Psychology: Human Perception and Performance 40(4):1283–88.
Olivers, C. N., Peters, J., Houtkamp, R. & Roelfsema, P. R. (2011) Different states in visual working memory: When it guides attention and when it does not. Trends in Cognitive Sciences 15(7):327–34.
Saffran, J. R., Aslin, R. N. & Newport, E. L. (1996) Statistical learning by 8-month-old infants. Science 274:1926–28.
Schapiro, A. C., Kustner, L. V. & Turk-Browne, N. B. (2012) Shaping of object representations in the human medial temporal lobe based on temporal regularities. Current Biology 22:1622–27.
Turk-Browne, N. B., Jungé, J. A. & Scholl, B. J. (2005) The automaticity of visual statistical learning. Journal of Experimental Psychology: General 134:552–64.
Wu, R., Gopnik, A., Richardson, D. C. & Kirkham, N. Z. (2011) Infants learn about objects from statistics and people. Developmental Psychology 47(5):1220–29.
Wu, R., Nako, R., Band, J., Pizzuto, J., Shadravan, Y., Scerif, G. & Aslin, R. N. (2015) Rapid selection of non-native stimuli despite perceptual narrowing. Journal of Cognitive Neuroscience 27(11):2299–307.
Wu, R., Pruitt, Z., Runkle, M., Scerif, G. & Aslin, R. N. (2016) A neural signature of rapid category-based target selection as a function of intra-item perceptual similarity despite inter-item dissimilarity. Attention, Perception, and Psychophysics 78(3):749–76.
Wu, R., Pruitt, Z., Zinszer, B. & Cheung, O. (2017) Increased experience amplifies the activation of task-irrelevant category representations. Attention, Perception, and Psychophysics 79(2):522–32.
Wu, R., Scerif, G., Aslin, R. N., Smith, T. J., Nako, R. & Eimer, M. (2013) Searching for something familiar or novel: Top-down attentional selection of specific items or object categories. Journal of Cognitive Neuroscience 25(5):719–29.
Yu, R. & Zhao, J. (2015) The persistence of attentional bias to regularities in a changing environment. Attention, Perception, and Psychophysics 77:2217–28.
Zhao, J., Al-Aidroos, N. & Turk-Browne, N. B. (2013) Attention is spontaneously biased toward regularities. Psychological Science 24:667–77.
Zhao, J. & Luo, Y. (2014) Statistical regularities alter the spatial scale of attention. Journal of Vision 14(10):11.
Zhao, J., Ngo, N., McKendrick, R. & Turk-Browne, N. B. (2011) Mutual interference between statistical summary perception and statistical learning. Psychological Science 22:1212–19.
Zhao, J. & Yu, R. (2016) Statistical regularities reduce perceived numerosity. Cognition 146:217–22.