Jeffery et al. propose that vertical space is coded using a different set of mechanisms from the head direction, place, and grid cells that code horizontal space. In particular, the authors present evidence that grid cells do not fire periodically as an organism moves along the vertical axis of space, as they do along the horizontal axis, and that the head direction system uses the plane of locomotion as its reference frame. One challenge for the bicoded model, however, is whether it can account equally well for the different types of information that the vertical dimension can specify. We argue that there are two markedly different components of vertical space that provide entirely different spatial information and are therefore represented differently: orthogonal-to-horizontal (as in multilevel buildings and volumetric space) and sloped terrain.
The vertical dimension, when orthogonal-to-horizontal, can specify contextual information that can be used to reference the appropriate planar map (“what floor am I on”), as well as body-orientation information (“which way is the axis of gravity”). The former is likely coded categorically in terms of low, high, higher, and so forth, as described by Jeffery et al. in the target article. Theoretically, this information functions no differently from the contextual information provided by rooms with differently colored walls or by the floor numbers outside elevators: the cue (wall color or floor number) simply specifies the planar cognitive map to which the organism should refer. The latter, body-orientation information, arises because the organism constantly refers to the plane of its body with respect to earth-horizontal as a source of input about the direction of gravity. These cues combine to allow the organism to determine its elevation with respect to the ground.
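To make the computational role we have in mind concrete, consider a minimal sketch (ours, not a model proposed by Jeffery et al.; the map names and coordinates are hypothetical). Under the bicoded view, a categorical vertical label behaves like any other contextual cue, merely selecting which planar map the navigator consults:

```python
# Illustrative sketch only (not the authors' model); all map names and
# coordinates are hypothetical. A categorical vertical cue (floor number,
# wall colour, or a coarse "low/high" label) simply selects which planar
# map is consulted; the planar coordinates carry the metric detail.

planar_maps = {
    "ground_floor": {"exit": (0.0, 5.0), "stairwell": (3.0, 1.0)},
    "first_floor":  {"office": (2.0, 4.0), "stairwell": (3.0, 1.0)},
}

def locate(goal: str, vertical_context: str) -> tuple[float, float]:
    """Return the goal's (x, y) position within the map picked out by the
    contextual cue; elevation itself contributes no metric information."""
    return planar_maps[vertical_context][goal]

print(locate("office", "first_floor"))  # -> (2.0, 4.0)
```

On this reading, nothing about elevation enters the planar coordinates themselves; the vertical cue only indexes the map to be used.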
In contrast, none of this information is available in sloped environments. Terrain slope can provide a directional cue by specifying compass-like directions (e.g., up = North), which could then orient or augment the planar cognitive map. Note that this processing is distinct from orientation with respect to the direction of gravity, which provides no directional information for the horizontal plane. Recent work conducted in our lab shows that slope is very useful for locating the goal in an otherwise ambiguous environment, and that the directional information it provides is coded with respect to one's own body (Weisberg, Brakoniecki & Newcombe 2012). Our data suggest that, unlike North, which is an invariant directional cue that does not change with the direction one faces, terrain slope is coded preferentially as uphill or downhill, depending on one's current facing direction. It is unclear how the vertical dimension with respect to one's own body could provide similar directional information.
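A second minimal sketch (again ours, with a purely hypothetical 90-degree cutoff and arbitrary bearings) illustrates the contrast: a compass label such as North is invariant to facing direction, whereas a body-referenced slope label flips between uphill and downhill as the navigator turns:

```python
# Illustrative sketch only; the cutoff and bearings are hypothetical.
# A compass cue such as North keeps the same label whatever direction the
# navigator faces, whereas a body-referenced slope cue flips between
# "uphill" and "downhill" as the navigator turns.

def slope_label(facing_deg: float, uphill_deg: float) -> str:
    """Egocentric slope label: 'uphill' when facing within 90 degrees of the
    direction of steepest ascent, 'downhill' otherwise."""
    diff = abs((facing_deg - uphill_deg + 180.0) % 360.0 - 180.0)
    return "uphill" if diff < 90.0 else "downhill"

# Terrain rises toward 90 degrees (east); North would remain "North" in both cases.
print(slope_label(facing_deg=90.0, uphill_deg=90.0))   # facing east -> 'uphill'
print(slope_label(facing_deg=270.0, uphill_deg=90.0))  # facing west -> 'downhill'
```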
Data from our lab also suggest that the way slope information augments the planar map is computationally different from the way the orthogonal-to-horizontal dimension provides context. In a replication of Restat et al. (2004), we found that participants were able to use unidirectionally sloped terrain to make more accurate pointing judgments and sketch maps, compared with the same environment on flat terrain. However, this was only the case for a simple environment. In a complex environment, only participants with a high self-reported sense of direction used the slope information, while low self-reporters did no better than in the flat condition (Weisberg, Nardi, Newcombe & Shipley 2012).
Neither of these experimental effects arises from information provided by the vertical dimension as orthogonal-to-horizontal. That is, the participants in these experiments need not encode their elevation with respect to the ground, but only the direction specified by the gradient of the sloped terrain. The goals of the organism modulate which type of vertical information it encodes. As further evidence of their dissociability, consider how the two possible reference frames might interact (see Fig. 1).
Figure 1. Two possible representations of the vertical dimension – one specified with respect to the plane of the organism's body, the other specified by the organism travelling along a sloped surface and therefore changing its elevation. Which of these representations would be predicted by the bicoded model? Are they both categorical? If so, are they functionally similar? (After Jeffery et al.)
For example, an organism navigating along a sloped terrain could have a categorical representation of the vertical dimension either as it refers to the organism's own body (the gradient at left in the figure) or as it refers to earth-horizontal (the gradient at right). The rat on the left is unconcerned with the increase in elevation, and is similar to the rat on the vertical trunk of the tree in Figure 12 of the target article. The rat on the right, however, is encoding its elevation with respect to some low or high point. Presumably, Jeffery et al. would predict that a rat would encode the vertical dimension with respect to its own body, as in the figure on the left. But in doing so, they may be discounting the information provided by the terrain gradient.
By considering the possible representations of the vertical component of space, one can characterize the qualities of each representation separately. For example, is each of the resultant representations, derived from sloped terrain and from volumetric space, non-metric, or do some of them contain metric properties? If they are non-metric, do they differ functionally, and are they susceptible to the same distortions? Whether these representations become integrated and can be used together is an empirical question, but exploring these possibilities creates a richer picture of three-dimensional spatial cognition.
ACKNOWLEDGMENTS
Work on this commentary was supported by grant no. SBE-1041707 from the NSF to the Spatial Intelligence and Learning Center.