Jeffery et al. propose that bicoded representations may support the encoding of three-dimensional space in a wide range of species, including non–surface-travelling animals. Only humans, however, have the ability to draw on their representations of space to talk about their spatial experience. Here we highlight the potential of spatial language – not traditionally considered in the study of navigation through space – to provide insight into the nature of nonlinguistic spatial representation. In particular, we suggest that recent research on spatial language, spatial cognition, and the relationship between the two offers an unexpected source of evidence, albeit indirect, for the kinds of representations posited by Jeffery et al. Such evidence raises the possibility that bicoded representations may support spatial cognition even beyond navigational contexts.
There are several striking parallels between Jeffery et al.'s account of spatial representation and the semantics of spatial language. Jeffery et al. argue that animals represent the vertical dimension in a qualitatively different manner than they do the two horizontal dimensions, in large part due to differences in locomotive experience. This distinction between vertical and horizontal is also evident in spatial language. Clark (1973) noted that English spatial terms rely on three primary planes of reference, one defined by ground level (dividing above from below) and the other two defined by canonical body orientation (dividing front from back and left from right). In a similar vein, Landau and Jackendoff (1993) pointed to axial structure (e.g., the vertical and horizontal axes) as a key property encoded by spatial prepositions, which otherwise tend to omit much perceptual detail (see also Holmes & Wolff 2013a). More recently, Holmes (2012; Holmes & Wolff 2013b) examined the semantic organization of the spatial domain by asking native English speakers to sort a comprehensive inventory of spatial prepositions into groups based on the similarity of their meanings. Using several dimensionality reduction methods to analyze the sorting data, including multidimensional scaling and hierarchical clustering, Holmes found that the first major cut of the domain was between vertical terms (e.g., above, below, on top of, under) and all other prepositions; terms referring to the left-right and front-back axes (e.g., to the left of, to the right of, in front of, behind) tended to cluster together. These findings suggest that the vertical-horizontal distinction may be semantically, and perhaps conceptually, privileged.
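To make the analytic logic concrete, the following is a minimal sketch, in Python, of how free-sort data of this kind can be converted into a term-by-term dissimilarity matrix and then submitted to hierarchical clustering and multidimensional scaling. The term list, the toy sorts, and all parameter choices are illustrative assumptions, not Holmes's (2012) actual materials, data, or analysis pipeline.

```python
# Hypothetical sketch: from free sorts of spatial prepositions to a
# dissimilarity matrix, hierarchical clustering, and a 2-D MDS solution.
# All data below are toy examples, not the actual sorting data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

terms = ["above", "below", "on top of", "under",
         "to the left of", "to the right of", "in front of", "behind",
         "near", "far from"]

# Each participant's sort assigns every term to a group (toy data).
sorts = [
    [0, 0, 0, 0, 1, 1, 1, 1, 2, 2],
    [0, 0, 0, 0, 1, 1, 2, 2, 1, 1],
    [0, 1, 0, 1, 2, 2, 2, 2, 2, 2],
]

n = len(terms)
co = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            co[i, j] += sort[i] == sort[j]
similarity = co / len(sorts)        # proportion of sorts grouping i with j
dissimilarity = 1 - similarity

# Hierarchical clustering: if the vertical-horizontal distinction dominates
# the sorts, the first split separates vertical terms from the rest.
condensed = dissimilarity[np.triu_indices(n, k=1)]
Z = linkage(condensed, method="average")
print(dict(zip(terms, fcluster(Z, t=2, criterion="maxclust"))))

# Two-dimensional MDS solution for visual inspection of the same structure.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
print(np.round(coords, 2))
```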
Holmes's (2012) findings are also consistent with Jeffery et al.'s claims about the properties that distinguish horizontal from vertical representations. Jeffery et al. propose that horizontal representations are relatively fine-grained, whereas vertical representations are coarser and nonmetric in nature. In Holmes's study, prepositions encoding distance information (e.g., near, far from) clustered exclusively with horizontal terms, implying that metric properties are more associated with the horizontal dimensions than the vertical. Further, vertical terms divided into discrete subcategories of “above” and “below” relations, but horizontal terms did not; English speakers regarded to the left of and to the right of as essentially equivalent in meaning. Perhaps most intriguingly, Holmes found that the semantic differences among the dimensions were mirrored by corresponding differences in how spatial relations are processed in nonlinguistic contexts (see also Franklin & Tversky 1990). When presented with visual stimuli depicting spatial relations between objects (e.g., a bird above, below, to the left of, or to the right of an airplane), participants were faster to discriminate an “above” relation from a “below” relation than two different exemplars of an “above” relation – but only in the right visual field, consistent with the view that the left hemisphere is specialized for categorical processing (Kosslyn et al. 1989). Though observed for vertical relations, this effect of lateralized categorical perception, demonstrated previously for color (Gilbert et al. 2006) and objects (Holmes & Wolff 2012), was entirely absent in the case of horizontal relations: Participants were just as fast to discriminate two different exemplars of “left” as they were to discriminate “left” from “right.” That the vertical dimension was perceived categorically but the horizontal dimension was not suggests differences in how the mind carves up spatial information along different axes. In characterizing the nature of the bicoded cognitive map, Jeffery et al. use color merely as a way of illustrating the nonmetric property of vertical representations, but such an analogy seems particularly fitting: Whereas the vertical axis may be represented in the same way that we see a rainbow as forming discrete units, the horizontal axis may be represented more like color actually presents itself, namely as a continuous gradient.
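The logic of the lateralized categorical perception comparison can likewise be illustrated schematically. The sketch below simulates the structure of the comparison only: a between-category versus within-category reaction-time advantage, crossed with visual field and spatial dimension. The reaction times, effect sizes, and trial counts are invented for illustration and are not the results reported by Holmes (2012).

```python
# Toy illustration of the comparison behind lateralized categorical
# perception: a categorical effect appears as faster between-category than
# within-category discrimination; lateralization means the effect is
# confined to the right visual field (left hemisphere). Values are made up.
import numpy as np

rng = np.random.default_rng(0)

def mean_rt(base_ms, advantage_ms=0.0, n=40):
    """Simulated mean reaction time (ms) for one condition (toy values)."""
    return float(np.mean(base_ms - advantage_ms + rng.normal(0, 30, n)))

# (dimension, visual field) -> hypothetical between-category advantage (ms)
conditions = {
    ("vertical", "right VF"): 60,   # effect present: categorical processing
    ("vertical", "left VF"): 0,
    ("horizontal", "right VF"): 0,  # effect absent for left/right relations
    ("horizontal", "left VF"): 0,
}

for (dim, vf), advantage in conditions.items():
    within = mean_rt(base_ms=550)
    between = mean_rt(base_ms=550, advantage_ms=advantage)
    print(f"{dim:10s} {vf:8s}  between-category advantage: "
          f"{within - between:6.1f} ms")
```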
Together, the findings reviewed above tell a story about spatial representation that is, in many respects, similar to that proposed by Jeffery et al. in the target article. However, such findings suggest an alternative explanation for the many differences observed between the horizontal and vertical dimensions. Early in the target article, Jeffery et al. briefly distinguish between spatial navigation and spatial perception more generally, implying that the representations supporting navigation may not extend to other spatial contexts. But given the parallels between spatial language and the representations implicated by Jeffery et al.'s account, and the fact that spatial terms often refer to static spatial configurations rather than motion through space, bicoded representations may constitute a more general property of spatial perception, rather than being specifically tied to navigation. This possibility could be examined in future research on the representation of three-dimensional space. More broadly, our observations suggest a role for research on the language–thought interface in informing accounts of cognitive abilities ostensibly unrelated to language, lending support to the enduring maxim that language is a window into the mind (Pinker 2007).