
Human path navigation in a three-dimensional world

Published online by Cambridge University Press:  08 October 2013

Michael Barnett-Cowan
Affiliation:
The Brain and Mind Institute, The University of Western Ontario, London, Ontario, N6A 5B7, Canada. mbarnettcowan@gmail.com www.sites.google.com/site/mbarnettcowan/
Heinrich H. Bülthoff
Affiliation:
Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany. hhb@tuebingen.mpg.de www.kyb.mpg.de/~hhb
Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea

Abstract

Jeffery et al. propose a non-uniform representation of three-dimensional space during navigation. Fittingly, we recently revealed asymmetries between horizontal and vertical path integration in humans. We agree that representing navigation in more than two dimensions increases computational load and suggest that tendencies to maintain upright head posture may help constrain computational processing, while distorting neural representation of three-dimensional navigation.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2013 

The target article is a well-written and timely paper that summarizes much of the spatial cognition literature, which has only recently made headway into understanding how animals navigate in a three-dimensional world. Most of what is known about how the brains of different animals manage navigation has been constrained to the horizontal plane. Here Jeffery et al. review experiments that show how insects, fish, rodents, and other animals (including humans) navigate space that includes a vertical component. From these studies, including some elegant work of their own on the encoding of three-dimensional space by place and grid cells (Hayman et al. 2011), the authors propose that 3D space is not uniformly represented by the brain. Rather, they suggest that 3D space is represented in a quasi-planar fashion, where spaces are constrained to separate planes that are stitched into a non-Euclidean, but integrated, map. There is, we think, merit in this analysis, which presents a reasonable and testable hypothesis, one that addresses the need to reduce the computational load required to fully represent navigation through 3D space while also requiring a mechanism to stitch representational spaces together.

Given that the real world is three-dimensional, it is somewhat surprising that investigations into three-dimensional navigation have only recently emerged. One reason for this delay may be that representing movement in a volumetric space is not only computationally complicated for the brain; it also poses constraints on experimental design, equipment, and analysis beyond those required for horizontal-plane navigation. After reviewing the literature on nonhuman vertical navigation, we became interested in human navigation with a vertical component. In a recent paper (Barnett-Cowan et al. 2012) we showed for the first time that human path integration operates differently in all three dimensions. We handled experimental constraints by comparing performance in an angle-completion pointing task after passive translational motion in the horizontal, sagittal, and frontal (coronal) planes. To move participants in three dimensions, we took advantage of the unique Max Planck Institute (MPI) CyberMotion Simulator (Teufel et al. 2007), which is based on an anthropomorphic robot arm and offers a motion range large enough to assess whether human path integration is similar between horizontal and vertical planes. We found that while humans tend to underestimate the angle through which they move in the horizontal plane (see also Loomis et al. 1993), movement angle is estimated without bias in the sagittal plane and overestimated in the frontal plane. Our results are generally in agreement with the theory proposed by Jeffery et al. that the representation of space is fixed to the plane of movement, and our approach lends itself well to testing predictions that follow from this theory.
For example, if the representation of space is fixed to the plane of movement and additional processing is required to stitch planes together, then one would expect delays in response times to cognitive tasks as the number of planes moved through increases.

As Jeffery et al. point out, the constant force of gravity provides an allocentric reference for navigation. However, we would like to clarify that gravity provides an allocentric reference direction and not a reference frame. A reference direction is fundamental to perception and action, and gravity is ideally suited as a reference direction because it is universally available to all organisms on Earth for navigation and orientation. Knowing one's orientation, and the orientation of surrounding objects, in relation to gravity affects the ability to identify (Dyde et al. 2006), predict the behaviour of (Barnett-Cowan et al. 2011), and interact with surrounding objects (McIntyre et al. 2001), as well as to maintain postural stability (Kluzik et al. 2005; Wade & Jones 1997). Tendencies to maintain upright head posture relative to gravity during self-motion and when interacting with objects have a functional advantage: they constrain the computational processing that would otherwise be required to maintain a coherent representation of self and object orientation when the eyes, head, and body are differently orientated relative to one another and relative to an allocentric reference direction such as gravity. Such righting behaviour is illustrated in Figure 1, where maintaining an upright head posture benefits gaze control for both fly and human.

*Note: The photo shown on the left above was taken at the Max Planck Institute for Biological Cybernetics, which retains the copyright. Hengstenberg (1991) published part of this photograph as well. In addition to holding the copyright for the original photo, we also have permission from the publisher of Hengstenberg's article.

Figure 1. Optimization of gaze control in the blowfly Calliphora (left; see Hengstenberg 1991) and in humans (right; see Brodsky et al. 2006), where upright head posture is maintained during self-motion. Republished from Hengstenberg (1991) with original photo copyright to MPI for Biological Cybernetics (left), and Bike Magazine (August 2001, p. 71; right), with permission.*

In addition to studies of righting behaviour, there are now numerous studies which indicate that the brain constructs an internal representation of the body with a prior assumption that the head is upright (Barnett-Cowan et al. 2005; Dyde et al. 2006; MacNeilage et al. 2007; Mittelstaedt 1983; Schwabe & Blanke 2008). We speculate that the combination of righting reflexes – to maintain an upright head posture during self-motion and optimize object recognition – along with prior assumptions of the head being upright, may interfere with the brain's ability to represent three-dimensional navigation through volumetric space. Further, as vestibular and visual directional sensitivity are best with the head and body upright relative to gravity (MacNeilage et al. 2010), future experiments aimed at disentangling how the brain represents three-dimensional navigation must consider how the senses are tuned to work best when upright.

Understanding how humans represent motion through volumetric space is particularly important for aviation, where loss of control in flight is today the single largest cause of fatal accidents in commercial aviation (see the EASA Annual Safety Review; European Aviation Safety Agency 2006). We suggest that our understanding of three-dimensional navigation in humans can be improved using advanced motion simulators – such as our Max Planck Institute CyberMotion Simulator, which is capable of moving human observers through complex motion paths in a volume of space (an open-access video is available in the original article: Barnett-Cowan et al. 2012). Future experiments with fewer constraints – including trajectories using additional degrees of freedom, longer paths, multiple planes, and the body differently orientated relative to gravity – will be able to more fully assess the quasi-planar representation of space proposed by Jeffery et al., as well as how vertical movement may be encoded differently.

References

Barnett-Cowan, M., Dyde, R. T. & Harris, L. R. (2005) Is an internal model of head orientation necessary for oculomotor control? Annals of the New York Academy of Sciences 1039:314–24.
Barnett-Cowan, M., Fleming, R. W., Singh, M. & Bülthoff, H. H. (2011) Perceived object stability depends on multisensory estimates of gravity. PLoS ONE 6(4):e19289.
Barnett-Cowan, M., Meilinger, T., Vidal, M., Teufel, H. & Bülthoff, H. H. (2012) MPI CyberMotion Simulator: Implementation of a novel motion simulator to investigate multisensory path integration in three dimensions. Journal of Visualized Experiments (63):e3436.
Brodsky, M. C., Donahue, S. P., Vaphiades, M. & Brandt, T. (2006) Skew deviation revisited. Survey of Ophthalmology 51:105–28.
Dyde, R. T., Jenkin, M. R. & Harris, L. R. (2006) The subjective visual vertical and the perceptual upright. Experimental Brain Research 173:612–22.
European Aviation Safety Agency. (2006) EASA Annual Safety Review, 2006. EASA, Cologne, Germany.
Hayman, R., Verriotis, M. A., Jovalekic, A., Fenton, A. A. & Jeffery, K. J. (2011) Anisotropic encoding of three-dimensional space by place cells and grid cells. Nature Neuroscience 14(9):1182–88.
Hengstenberg, R. (1991) Gaze control in the blowfly Calliphora: A multisensory, two-stage integration process. Seminars in Neuroscience 3(1):19–29. Available at: http://www.sciencedirect.com/science/article/pii/104457659190063T.
Kluzik, J., Horak, F. B. & Peterka, R. J. (2005) Differences in preferred reference frames for postural orientation shown by after-effects of stance on an inclined surface. Experimental Brain Research 162:474–89. Available at: http://link.springer.com/article/10.1007%2Fs00221-004-2124-6.
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W. & Fry, P. A. (1993) Nonvisual navigation by blind and sighted: Assessment of path integration ability. Journal of Experimental Psychology: General 122:73–91.
MacNeilage, P. R., Banks, M. S., Berger, D. R. & Bülthoff, H. H. (2007) A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Experimental Brain Research 179:263–90.
MacNeilage, P. R., Banks, M. S., DeAngelis, G. C. & Angelaki, D. E. (2010) Vestibular heading discrimination and sensitivity to linear acceleration in head and world coordinates. The Journal of Neuroscience 30:9084–94.
McIntyre, J., Zago, M., Berthoz, A. & Lacquaniti, F. (2001) Does the brain model Newton's laws? Nature Neuroscience 4:693–94.
Mittelstaedt, H. (1983) A new solution to the problem of the subjective vertical. Naturwissenschaften 70:272–81.
Schwabe, L. & Blanke, O. (2008) The vestibular component in out-of-body experiences: A computational approach. Frontiers in Human Neuroscience 2:17. Available at: http://www.frontiersin.org/human_neuroscience/10.3389/neuro.09.017.2008/abstract.
Teufel, H. J., Nusseck, H.-G., Beykirch, K. A., Butler, J. S., Kerger, M. & Bülthoff, H. H. (2007) MPI motion simulator: Development and analysis of a novel motion simulator. In: Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Hilton Head, South Carolina. AIAA 2007-6476. Available at: http://kyb.mpg.de/fileadmin/user_upload/files/publications/attachments/Teufel2007_4512%5B0%5D.pdf.
Wade, M. G. & Jones, G. (1997) The role of vision and spatial orientation in the maintenance of posture. Physical Therapy 77:619–28.