How can one prove that an animal has three-dimensional spatial representations? For navigation in two-dimensional space (e.g., the horizontal plane), the classical paradigm is the novel shortcut test: If an animal traverses a route and then returns home by a novel shortcut, this is taken as sufficient (but not necessary) evidence of a 2D spatial representation. A variant of the novel shortcut test is the detour test, in which a familiar route is blocked and the animal must navigate around the obstacle to reach the destination. Although novel shortcutting is usually attributed to path integration rather than cognitive mapping, both mechanisms require 2D spatial representations, so the conclusion regarding dimensionality holds in either case.
Based on this logic, novel shortcutting in three-dimensional space is sufficient evidence for 3D spatial representations. If an animal traverses a route in three-dimensional space and returns home through a novel shortcut, or navigates around an obstacle to reach the goal, then that animal possesses a 3D spatial representation. Among the four candidate models, only the volumetric map supports such behavior. Animals using a surface map or an extracted flat map may take shortcuts within the encoded surface, but they cannot return to the appropriate elevation once they are displaced along that dimension. A bicoded map with non-spatial (e.g., contextual) coding of the third dimension allows shortcuts within the surface(s) spanned by the two spatial dimensions, but the animal has to perform a random search along the third dimension until it happens to reach the desired context, because, by definition, contextual coding without spatial mapping provides no information about the direction or distance the animal should travel to reach the goal context from a novel context.
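To make the contrast concrete, here is a minimal sketch (hypothetical code, not taken from any of the models or studies discussed) of homing under a volumetric map versus a bicoded map with contextual coding of the vertical dimension. The route, the context labels, and the search_range parameter are illustrative assumptions; the point is simply that the volumetric integrator yields a full 3D shortcut vector, whereas the bicoded agent must sample elevations blindly until it happens upon the goal context.

```python
import random

def integrate_path(steps):
    """Accumulate the net displacement (x, y, z) over a sequence of movement vectors."""
    x = y = z = 0.0
    for dx, dy, dz in steps:
        x, y, z = x + dx, y + dy, z + dz
    return x, y, z

def volumetric_homing(steps):
    """Volumetric map: the full 3D displacement is encoded, so the homing vector
    is simply its reverse, giving a direct shortcut in all three dimensions."""
    x, y, z = integrate_path(steps)
    return (-x, -y, -z)

def bicoded_homing(steps, context_at, home_context, search_range=5.0):
    """Bicoded map: only the horizontal displacement is encoded spatially; elevation
    is a context label, so the vertical offset must be found by undirected search."""
    x, y, _ = integrate_path(steps)
    dz = 0.0
    while context_at(dz) != home_context:
        dz = random.uniform(-search_range, search_range)  # no direction or distance to the goal context
    return (-x, -y, dz)

# Hypothetical outbound route climbing 3 m; 'home' labels the context near the start elevation.
route = [(2.0, 0.0, 1.0), (1.0, 3.0, 2.0)]
print(volumetric_homing(route))                                          # (-3.0, -3.0, -3.0)
print(bicoded_homing(route, lambda dz: 'home' if abs(dz + 3.0) < 0.5 else 'aloft', 'home'))
```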
The ability to perform novel shortcuts does not necessarily imply high accuracy or efficiency. Performance in navigation tasks varies with the species, the perceptual information available, and the properties of the trajectory (Loomis et al. 1993; Wang & Spelke 2003). Differences in accuracy may reflect the quality of the spatial representation and of spatial processing. However, lower accuracy cannot be used as evidence against the presence of 3D spatial representations. Instead, the criterion for possessing a 3D representation should be above-chance performance in the third dimension. Similarly, biases occur in spatial judgments in lower dimensions (e.g., Huttenlocher et al. 1991; Sampaio & Wang 2009), and veridical information is not required for a spatial representation. Hence, systematic biases are not evidence against 3D representations.
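As an illustration of the above-chance criterion, the sketch below uses a one-sided binomial test to ask whether an animal's choice of level in a discrete vertical shortcut task exceeds chance; the two-level design and the trial counts are assumptions made for the example.

```python
from math import comb

def p_above_chance(successes, trials, chance=0.5):
    """One-sided binomial p-value: the probability of at least this many correct
    choices of level if the animal were choosing along the third dimension at chance."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical data: 40 trials of a two-level shortcut test, 28 correct elevation choices.
print(round(p_above_chance(28, 40), 4))   # 0.0083, i.e., unlikely if vertical choice were at chance
```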
Moreover, spatial representations do not require integration across dimensions, and none of the four models makes this assumption. Multiple dimensions can be encoded separately, for example, as coordinates along each dimension, as long as the information can be used together for spatial processing. Therefore, separation of one or more dimensions is not evidence against 3D spatial representations. Integration across space is also not required, and fragmentation has been shown in 2D spatial representations (Wang & Brockmole 2003). All four models are open about whether the entire space is represented as a whole or divided into pieces/segments, and all can accommodate both mosaic and integrated representations. Thus, segregation of space is not informative about the dimensionality of the spatial representation.
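The following sketch illustrates this point with an invented encoding in which the horizontal coordinates and the elevation are stored separately, and even at different resolutions, yet can still be combined into a single displacement vector for navigation; the class name, the vertical_step parameter, and the numbers are assumptions made for the example.

```python
class SeparatedPosition:
    """Hypothetical encoding: the horizontal and vertical dimensions are stored
    separately and at different resolutions, but can be combined when needed."""

    def __init__(self, x, y, elevation, vertical_step=0.5):
        self.horizontal = (x, y)                        # fine-grained planar coordinates
        self.vertical_step = vertical_step
        self.level = round(elevation / vertical_step)   # coarser, separately encoded elevation

    def vector_to(self, other):
        """Combine the two separate codes into a single 3D displacement for navigation."""
        dx = other.horizontal[0] - self.horizontal[0]
        dy = other.horizontal[1] - self.horizontal[1]
        dz = (other.level - self.level) * self.vertical_step
        return (dx, dy, dz)

nest = SeparatedPosition(0.0, 0.0, 0.0)
feeder = SeparatedPosition(3.0, 4.0, 1.2)
print(nest.vector_to(feeder))   # (3.0, 4.0, 1.0): a usable 3D vector despite separate coding
```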
Finally, the dimensionality of spatial representations for navigation needs to be considered in a broader theoretical framework, in terms of reference frames and sensory domains. Flexible, efficient navigation does not require an allocentric map; for example, an egocentric updating system can provide the same navigational capabilities as an allocentric cognitive map (Wang 2012; Wang & Spelke 2000). It is also important to consider visual, proprioceptive, and motor-command information for self-motion estimation, which may afford simpler and more efficient computation than vestibular signals (Lappe et al. 1999; Wang & Cutting 1999).
As Jeffery et al. discuss in the target article, the dimensionality question has been extended to four dimensions for humans. Although humans presumably did not evolve to navigate in four-dimensional space, there are no known inherent constraints on implementing 4D spatial representations with neurons and neural connections, and existing brain structures may be recruited to accommodate higher-dimensional spatial representations. Therefore, 4D spatial representations cannot be dismissed a priori on evolutionary grounds alone and should be treated as an empirical question.
A few studies have examined human 4D spatial representations using variations of the shortcutting task and of spatial judgment tasks. Aflalo and Graziano (2008) showed that humans can perform path integration in four-dimensional space with extended training and feedback. Because the movements were orthogonal (90° rotations) and determined by the observer, updating at each translation and rotation involved only one and two dimensions, respectively, and was relatively easy to compute algebraically within an egocentric reference frame. Nevertheless, representation of a 4D vector was required, so the study provides some evidence of 4D representations. Other studies used virtual reality techniques to create visual simulations of 4D geometric objects (hyper-tetrahedra) and showed that observers could judge the distance between two points and the angle between two lines embedded in the 4D virtual space (Ambinder et al. 2009). Observers could also estimate novel spatial properties unique to 4D space, such as the size (i.e., hyper-volume) of virtual 4D objects (Wang, in press), providing further evidence of 4D spatial representations.
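A minimal sketch of that updating scheme (an illustration of the logic only, not the authors' procedure or code, and with one arbitrary sign convention for the turn) shows why each step is computationally simple yet still demands a four-component vector: a forward translation changes only the coordinate along the movement axis, and a 90° rotation remaps only the two coordinates in the rotation plane.

```python
def translate(home, axis, distance):
    """Forward translation: in egocentric coordinates, only the component of the
    homing vector along the movement axis changes (the start point recedes behind)."""
    v = list(home)
    v[axis] -= distance
    return tuple(v)

def rotate90(home, a, b):
    """A 90-degree body turn in the plane of axes a and b: the stored homing vector
    is counter-rotated in that plane; the other two coordinates are untouched."""
    v = list(home)
    v[a], v[b] = v[b], -v[a]
    return tuple(v)

# Hypothetical 4D walk: all four components must be maintained, but each update
# touches at most two of them.
home = (0.0, 0.0, 0.0, 0.0)        # homing vector at the start
home = translate(home, 0, 2.0)     # move 2 units forward along axis 0
home = rotate90(home, 0, 3)        # turn 90 degrees into the fourth dimension
home = translate(home, 0, 1.0)     # move 1 unit along the new heading
print(home)                        # (-1.0, 0.0, 0.0, 2.0): a full 4D homing vector
```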
In summary, an animal possesses three-dimensional spatial representations if it exhibits above-chance performance along the third dimension in a novel shortcut or detour test in 3D space. By this criterion, humans (e.g., Wilson et al. 2004), rats (e.g., Jovalekic et al. 2011), and possibly ants (Grah et al. 2007) all encode spatial information about the vertical dimension that goes beyond a surface map, an extracted flat map, or a bicoded map with contextual coding of the third dimension. Thus, these species possess some form of 3D spatial representation for navigation, although its quality and format may vary across species, and the individual dimensions may be represented separately and differently. Empirical studies of four-dimensional path integration and spatial judgments suggest that human 4D spatial representations are possible, although the conditions and properties of such representations require further investigation.