Finding objects in a complex environment is a crucial skill for the survival of humans and animals. Spotting suitable fruit in a cluttered forest, detecting camouflaged predators in the savanna, or simply finding a stapler on a cluttered office desk all illustrate the importance of being able to locate objects in an environment containing much irrelevant information (distractors). In most cases, and for humans in particular, this function is carried out by the visual modality, namely through visual search. Not surprisingly, visual search has been broadly investigated in humans and animals (Hickey et al. 2010; Proulx et al. 2014; Tomonaga 2007; Young & Hulleman 2013). Over the past several decades, theories of visual search have been built on studies focusing on the number and type of items used as distractors in visual search tasks (Duncan & Humphreys 1989; He & Nakayama 1992; Treisman 1982). In other words, the characteristics of the items were considered the central feature of visual search.
However, towards the end of the 1990s, empirical data began to suggest that another feature, rather than the characteristics of the items, might better explain how visual search works: the number of fixations produced by participants during visual search tasks (Wolfe 1998b; Wolfe et al. 2010b; Zelinsky 1996). The mounting evidence in favour of a fixation-based account prompted Hulleman & Olivers (H&O) to propose a novel theory with which to interpret past results and design future experiments. The authors conducted a meta-analysis demonstrating that the number of fixations accurately predicts search times, and argued that a theory based on eye fixations takes into account the physiology and the limits of the visual system. This led them to suggest that the number of fixations explains how visual search works better than the number and type of the items do. Fixations are a constant factor (“fixations are fixations”), whereas items can be objects, faces, letters, numbers, and so forth, making experimental results highly task-dependent. Studying fixation patterns would therefore facilitate the comparison of results across different studies, and perhaps lead to a more comprehensive understanding of the underlying mechanisms of visual search.
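To make the fixation-based account concrete, the toy simulation below treats search time as the number of fixations multiplied by a fixed fixation duration, with each fixation inspecting a small “functional visual field” of items rather than one item at a time. The fixation duration (250 ms) and the number of items covered per fixation (4) are illustrative assumptions for exposition, not values taken from H&O's meta-analysis.

```python
import random

# Toy simulation (illustrative assumptions, not H&O's actual model):
# search time is the number of fixations times a fixed fixation
# duration, and each fixation inspects a handful of items at once.

FIXATION_MS = 250        # assumed average fixation duration
ITEMS_PER_FIXATION = 4   # assumed functional visual field size

def simulate_trial(n_items: int) -> tuple[int, float]:
    """Return (fixations, search time in ms) for one target-present trial."""
    order = random.sample(range(n_items), n_items)  # random scan order
    target = random.randrange(n_items)
    fixations = 0
    for start in range(0, n_items, ITEMS_PER_FIXATION):
        fixations += 1
        if target in order[start:start + ITEMS_PER_FIXATION]:
            break
    return fixations, fixations * FIXATION_MS

for n in (8, 16, 32):
    trials = [simulate_trial(n) for _ in range(2000)]
    mean_fix = sum(f for f, _ in trials) / len(trials)
    mean_rt = sum(t for _, t in trials) / len(trials)
    print(f"{n:2d} items: {mean_fix:4.2f} fixations, {mean_rt:6.0f} ms mean search time")
```

Because the predicted search time here is fully determined by the fixation count, the same times would arise whether the items were letters, faces, or objects, which is the sense in which “fixations are fixations”.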
This innovative approach is, however, limited to visual search. In the real world, object finding can depend upon, and be performed through, other modalities such as the haptic modality; for example, finding keys in a pocket or finding a torch during a blackout. The haptic modality comprises touch as well as proprioceptive and kinaesthetic cues (Gibson 1962; Heller 1984), and is very well suited to conveying information about the shapes and positions (plus “extra” information such as temperature) of external objects (Lacreuse & Fragaszy 1997; Lederman & Klatzky 1987; Sann & Streri 2007). Interestingly, haptics and vision are broadly interconnected at the behavioural level (Ballesteros et al. 1998; Grabowecky et al. 2011; Newell et al. 2005; Pasqualotto et al. 2005; 2013a), and the two sensory modalities are strongly interlinked in the brain (Lacey et al. 2009; Pasqualotto et al. 2013b; Pietrini et al. 2004). Therefore, it seems reasonable to assume that, if visual search is based on visual fixations, haptic search might be based on hand/finger scanning movements.
Haptic search has been far less studied than visual search; nevertheless, both early and recent evidence suggests that scanning movements are as pivotal for haptics (Lederman & Klatzky 1987; Overvliet et al. 2007; Plaisier et al. 2008; 2009; Van Polanen et al. 2011) as fixations are for vision, further supporting the assumption that the two modalities are strongly interlinked (see the previous paragraph). For example, in a haptic search task where participants had to find a target object among several distractors through active touch, haptic scanning patterns resembled the patterns of eye fixations observed in visual search (Overvliet et al. 2007). A similar study reported that the strategy of haptic exploration (i.e., using one finger, one hand, or two hands), rather than the number and type of distractors, was the crucial factor influencing haptic search performance (Overvliet et al. 2008). The same study also reported systematic “zig-zag” haptic scanning patterns across different search conditions (e.g., different target objects), suggesting that scanning movements reflect a fundamental feature of haptic search. Furthermore, another study found that the “pop-out” effect well documented in the vision literature (e.g., detecting a red object amongst blue objects) also occurs in haptics, and that haptic scanning movements were the best predictor of haptic search performance (Plaisier et al. 2008). The similarity between visual and haptic search is consistent with theories such as neural reuse (Anderson 2010) and the metamodal brain (Pascual-Leone & Hamilton 2001), both of which posit a substantial overlap in the brain across areas processing input from different sensory modalities (Pasqualotto et al. 2016; Uesaki & Ashida 2015).
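Within a fixation-based (or scanning-based) framework, the pop-out effect has a simple signature: a target that differs from the distractors on a salient feature is found within roughly one fixation (or one touch) regardless of set size, whereas serial search needs more fixations as items are added. The sketch below contrasts the two regimes using the same illustrative parameters as the earlier simulation; the numbers are assumptions for exposition, not values from Plaisier et al.

```python
FIXATION_MS = 250        # assumed average fixation/touch duration
ITEMS_PER_FIXATION = 4   # assumed items covered per fixation/touch

def predicted_search_ms(n_items: int, pop_out: bool) -> float:
    """Expected search time under a fixation-count account of search."""
    if pop_out:
        fixations = 1.0  # salient target: found in roughly one fixation
    else:
        # serial scan: on average, about half the display is inspected
        fixations = max(1.0, n_items / ITEMS_PER_FIXATION / 2)
    return fixations * FIXATION_MS

for n in (8, 16, 32):
    print(f"{n:2d} items: pop-out {predicted_search_ms(n, True):5.0f} ms, "
          f"serial {predicted_search_ms(n, False):5.0f} ms")
```

The flat pop-out line versus the rising serial line is the classic set-size signature; the haptic findings suggest the same dichotomy holds when fixations are replaced by scanning touches.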
Although the novel approach to understanding the underlying mechanisms of visual search proposed in the target article is supported by evidence from haptic search, studies of haptic search remain very scarce compared with those of visual search, so more research is needed to confirm these initial findings. In particular, it is critical to directly compare the patterns of visual fixations and haptic scanning movements arising within the same experimental setup, in order to achieve a more holistic understanding of visual search as well as of object search through other sensory modalities.
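One way such a direct comparison could be operationalised is sketched below: record a visual fixation sequence and a haptic scanning trajectory over the same display, discretise both into a common grid of regions, and quantify their similarity with a string-edit (Levenshtein) distance, a metric commonly used for scan-path comparison. The grid size, the sample paths, and the helper functions are hypothetical choices for illustration, not a method drawn from the studies cited above.

```python
# Illustrative analysis sketch: compare a visual fixation sequence and a
# haptic scanning trajectory recorded over the same (normalised) display.

GRID = 4  # display divided into a 4x4 grid of regions (arbitrary choice)

def to_region_sequence(path, width=1.0, height=1.0):
    """Map a sequence of (x, y) samples onto grid-cell labels."""
    labels = []
    for x, y in path:
        col = min(int(x / width * GRID), GRID - 1)
        row = min(int(y / height * GRID), GRID - 1)
        label = row * GRID + col
        if not labels or labels[-1] != label:  # drop consecutive repeats
            labels.append(label)
    return labels

def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

# Hypothetical recordings: eye fixations and finger positions, 0-1 coordinates.
visual_fixations = [(0.10, 0.10), (0.40, 0.20), (0.80, 0.30), (0.70, 0.80)]
haptic_samples   = [(0.10, 0.15), (0.45, 0.20), (0.75, 0.35), (0.70, 0.75)]

v = to_region_sequence(visual_fixations)
h = to_region_sequence(haptic_samples)
print("edit distance:", edit_distance(v, h))  # 0 = identical region sequences
```

If fixations and finger positions traverse the same regions in the same order, the distance is zero; systematic divergence between the modalities would show up as larger distances, directly testing the assumption that scanning movements play the role in haptic search that fixations play in vision.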