The target article is a well-written and timely paper that summarizes much of the spatial cognition literature, which has only recently made headway into understanding how animals navigate in a three-dimensional world. Most of what is known about how the brains of different animals manage navigation has been constrained to the horizontal plane. Here Jeffery et al. review experiments that show how insects, fish, rodents, and other animals (including humans) navigate space that includes a vertical component. From these studies, including some elegant work of their own on the encoding of three-dimensional space by place and grid cells (Hayman et al. Reference Hayman, Verriotis, Jovalekic, Fenton and Jeffery2011), the authors propose that 3D space is not uniformly represented by the brain. Rather, they suggest that 3D space is represented in a quasi-planar fashion, where spaces are constrained to separate planes that are stitched into a non-Euclidean, but integrated, map. There is, we think, merit in this analysis, which presents a reasonable and testable hypothesis: one that addresses the need to reduce the computational load required to fully represent navigation through 3D space while also requiring a mechanism to stitch representational spaces together.
Given that the real world is three-dimensional, it is somewhat surprising that investigations into three-dimensional navigation have only recently emerged. One reason for this latency may be that representing movement in a volumetric space is not only computationally complicated for the brain but also places greater constraints on experimental design, equipment, and analysis than are required for horizontal-plane navigation. After reviewing the literature on nonhuman vertical navigation, we became interested in human navigation with a vertical component. In a recent paper (Barnett-Cowan et al. Reference Barnett-Cowan, Meilinger, Vidal, Teufel and Bülthoff2012) we showed for the first time that human path integration operates differently in all three dimensions. We handled experimental constraints by comparing performance in an angle completion pointing task after passive translational motion in the horizontal, sagittal, and frontal (coronal) planes. To move participants in three dimensions, we took advantage of the unique Max Planck Institute (MPI) CyberMotion Simulator (Teufel et al. Reference Teufel, Nusseck, Beykirch, Butler, Kerger and Bülthoff2007), which is based on an anthropomorphic robot arm and offers a motion range large enough to assess whether human path integration is similar between horizontal and vertical planes. We found that while humans tend to underestimate the angle through which they move in the horizontal plane (see also Loomis et al. Reference Loomis, Klatzky, Golledge, Cicinelli, Pellegrino and Fry1993), no bias and an overestimate of movement angle are found in the sagittal and frontal planes, respectively. Our results are generally in agreement with the theory proposed by Jeffery et al. that the representation of space is fixed to the plane of movement, and our approach lends itself well to testing predictions that follow from this theory.
For example, if the representation of space is fixed to the plane of movement and additional processing is required to stitch planes together, then one would expect response times on cognitive tasks to increase with the number of planes moved through.
As Jeffery et al. point out, the constant force of gravity provides an allocentric reference for navigation. However, we would like to clarify that gravity provides an allocentric reference direction, not a reference frame. A reference direction is fundamental to perception and action, and gravity is ideally suited to this role because it is universally available to all organisms on Earth for navigation and orientation. Knowing one's own orientation, and that of surrounding objects, in relation to gravity affects the ability to identify objects (Dyde et al. Reference Dyde, Jenkin and Harris2006), to predict their behaviour (Barnett-Cowan et al. Reference Barnett-Cowan, Fleming, Singh and Bülthoff2011), and to interact with them (McIntyre et al. Reference McIntyre, Zago, Berthoz and Lacquaniti2001), as well as to maintain postural stability (Kluzik et al. Reference Kluzik, Horak and Peterka2005; Wade & Jones Reference Wade and Jones1997). The tendency to maintain an upright head posture relative to gravity during self-motion and when interacting with objects has a functional advantage: it constrains the computational processing that would otherwise be required to maintain a coherent representation of self and object orientation when the eyes, head, and body are differently orientated relative to one another and to an allocentric reference direction such as gravity. Such righting behaviour is illustrated in Figure 1, where maintaining an upright head posture benefits gaze control for both fly and human.
*Note: The photo shown on the left in Figure 1 was taken at the Max Planck Institute for Biological Cybernetics, which retains the copyright. Hengstenberg (Reference Hengstenberg1991) published part of this photograph as well. In addition to holding the copyright for the original photo, we also have permission from the publisher of Hengstenberg's article.
In addition to studies of righting behaviour, there are now numerous studies indicating that the brain constructs an internal representation of the body with a prior assumption that the head is upright (Barnett-Cowan et al. Reference Barnett-Cowan, Dyde and Harris2005; Dyde et al. Reference Dyde, Jenkin and Harris2006; MacNeilage et al. Reference MacNeilage, Banks, Berger and Bülthoff2007; Mittelstaedt Reference Mittelstaedt1983; Schwabe & Blanke Reference Schwabe and Blanke2008). We speculate that the combination of righting reflexes – which maintain an upright head posture during self-motion and optimize object recognition – along with prior assumptions of the head being upright, may interfere with the brain's ability to represent three-dimensional navigation through volumetric space. Further, as vestibular and visual directional sensitivity are best with the head and body upright relative to gravity (MacNeilage et al. Reference MacNeilage, Banks, DeAngelis and Angelaki2010), future experiments aimed at disentangling how the brain represents three-dimensional navigation must consider how the senses are tuned to work best when upright.
Understanding how humans represent motion through volumetric space is particularly important for assessing human aviation, where loss of control in flight is currently the single largest cause of fatal accidents in commercial aviation (see the EASA Annual Safety Review; European Aviation Safety Agency 2006). We suggest that our understanding of three-dimensional navigation in humans can be improved using advanced motion simulators, such as our Max Planck Institute CyberMotion Simulator, which is capable of moving human observers through complex motion paths in a volume of space. (An open-access video can be found in the original article: Barnett-Cowan et al. Reference Barnett-Cowan, Meilinger, Vidal, Teufel and Bülthoff2012.) Future experiments with fewer constraints, including trajectories with additional degrees of freedom, longer paths, multiple planes, and the body differently orientated relative to gravity, will be able to more fully assess the quasi-planar representation of space proposed by Jeffery et al., as well as how vertical movement may be encoded differently.
Figure 1. Optimization of gaze control in the blowfly Calliphora (left; see Hengstenberg Reference Hengstenberg1991) and in humans (right; see Brodsky et al. Reference Brodsky, Donahue, Vaphiades and Brandt2006), where upright head posture is maintained during self-motion. Republished from Hengstenberg (Reference Hengstenberg1991) with original photo copyright to MPI for Biological Cybernetics (left), and Bike Magazine (August 2001, p. 71; right), with permission.*