1
Chen X, Chen Y, McNamara TP. Processing spatial cue conflict in navigation: Distance estimation. Cogn Psychol 2025; 158:101734. [PMID: 40347660] [DOI: 10.1016/j.cogpsych.2025.101734]
Abstract
Spatial navigation involves the use of various cues. This study examined how cue conflict influences navigation by contrasting landmarks and optic flow. Participants estimated spatial distances under different levels of cue conflict: minimal conflict, large conflict, and large conflict with explicit awareness of landmark instability. Whereas increased cue conflict alone had little behavioral impact, adding explicit awareness reduced reliance on landmarks and impaired the precision of spatial localization based on them. To understand the underlying mechanisms, we tested two cognitive models: a Bayesian causal inference (BCI) model and a non-Bayesian sensory disparity model. The BCI model provided a better fit to the data, revealing two independent mechanisms for reduced landmark reliance: increased sensory noise for unstable landmarks and lower weighting of unstable landmarks when landmarks and optic flow were judged to originate from different causes. Surprisingly, increased cue conflict did not decrease the prior belief in a common cause, even when explicit awareness of landmark instability was imposed. Additionally, cue weighting in the same-cause judgment was determined by bottom-up sensory reliability, while in the different-cause judgment, it correlated with participants' subjective evaluation of cue quality, suggesting a top-down metacognitive influence. The BCI model further identified key factors contributing to suboptimal cue combination in minimal cue conflicts, including the prior belief in a common cause and prior knowledge of the target location. Together, these findings provide critical insights into how navigators resolve conflicting spatial cues and highlight the utility of the BCI model in dissecting cue interaction mechanisms in navigation.
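The Bayesian causal inference (BCI) computation described in this abstract can be sketched in a few lines. The following is a generic BCI model in the style of Körding and colleagues, not the authors' fitted model; the function name, the Gaussian noise assumption, and all parameter values (cue variances, the location prior, `p_common`) are illustrative assumptions.

```python
import math

def gauss(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def bci_distance(x_lm, x_of, var_lm, var_of, p_common, mu_p=0.0, var_p=100.0):
    """Posterior probability of a common cause for a landmark cue (x_lm)
    and an optic-flow cue (x_of), plus the model-averaged distance
    estimate. Illustrative sketch; values are not fitted parameters."""
    # Likelihood of the cue pair if both arise from one latent location
    # (location integrated out against the N(mu_p, var_p) prior).
    denom = var_lm * var_of + var_lm * var_p + var_of * var_p
    like_c1 = math.exp(-0.5 * ((x_lm - x_of) ** 2 * var_p
                               + (x_lm - mu_p) ** 2 * var_of
                               + (x_of - mu_p) ** 2 * var_lm) / denom) \
        / (2 * math.pi * math.sqrt(denom))
    # Likelihood if the cues have independent causes.
    like_c2 = gauss(x_lm, mu_p, var_lm + var_p) * gauss(x_of, mu_p, var_of + var_p)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted estimate under each causal structure,
    # then model averaging by the causal posterior.
    s_c1 = ((x_lm / var_lm + x_of / var_of + mu_p / var_p)
            / (1 / var_lm + 1 / var_of + 1 / var_p))
    s_c2 = (x_lm / var_lm + mu_p / var_p) / (1 / var_lm + 1 / var_p)
    return post_c1, post_c1 * s_c1 + (1 - post_c1) * s_c2
```

Under this scheme a large landmark/optic-flow discrepancy lowers the posterior probability of a common cause, which down-weights the discrepant landmark cue: the "different-cause" mechanism the abstract describes.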
Affiliation(s)
- Xiaoli Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310058, PR China.
- Yingyan Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310058, PR China
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
2
Kessler F, Frankenstein J, Rothkopf CA. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nat Commun 2024; 15:5677. [PMID: 38971789] [PMCID: PMC11227593] [DOI: 10.1038/s41467-024-49722-y]
Abstract
Goal-directed navigation requires continuously integrating uncertain self-motion and landmark cues into an internal sense of location and direction, concurrently planning future paths, and sequentially executing motor actions. Here, we provide a unified account of these processes with a computational model of probabilistic path planning in the framework of optimal feedback control under uncertainty. This model gives rise to diverse human navigational strategies previously believed to be distinct behaviors and predicts quantitatively both the errors and the variability of navigation across numerous experiments. The model further explains how sequential egocentric landmark observations form an uncertain allocentric cognitive map and how this internal map is used both in route planning and during the execution of movements, and it reconciles seemingly contradictory results about cue-integration behavior in navigation. Taken together, the present work provides a parsimonious explanation of how patterns of human goal-directed navigation behavior arise from the continuous and dynamic interactions of spatial uncertainties in perception, cognition, and action.
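The continuous integration of uncertain self-motion and landmark information that this account rests on can be illustrated with a minimal one-dimensional Kalman-style update. This is a generic sketch under assumed Gaussian noise, not the paper's full optimal-feedback-control model; the function name and all parameters are hypothetical.

```python
def kalman_step(mu, var, step, var_motion, landmark_obs=None, var_obs=None):
    """One cycle of 1-D position tracking: a noisy self-motion
    prediction, optionally corrected by a landmark observation."""
    # Predict: dead reckoning grows the positional uncertainty.
    mu, var = mu + step, var + var_motion
    if landmark_obs is not None:
        # Correct: weight the landmark by its relative reliability.
        k = var / (var + var_obs)
        mu, var = mu + k * (landmark_obs - mu), (1 - k) * var
    return mu, var
```

The key property, shared with the model above, is that each landmark observation shrinks the positional uncertainty accumulated by path integration in proportion to the cues' relative reliabilities.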
Affiliation(s)
- Fabian Kessler
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany.
- Julia Frankenstein
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
3
Qi Y, Mou W. Relative cue precision and prior knowledge contribute to the preference of proximal and distal landmarks in human orientation. Cognition 2024; 247:105772. [PMID: 38520794] [DOI: 10.1016/j.cognition.2024.105772]
Abstract
A prevailing argument posits that distal landmarks dominate over proximal landmarks as orientation cues. However, no studies have tested this argument or examined the underlying mechanisms. This project aimed to close this gap by examining the roles of relative cue precision and prior knowledge in cue preference. Participants learned object locations with proximal and distal landmarks in an immersive virtual environment. After walking a path without seeing the objects or landmarks, participants disoriented themselves by spinning in place and then pointed to the objects when a proximal landmark reappeared rotated -50°, a distal landmark reappeared rotated 50°, or both did (the Conflict condition). Heading errors were examined. Experiment 1 manipulated the relative cue precision. Results showed that in the Conflict condition, the observed weight on the distal cue (exceeding 0.5) changed with, but remained higher than, the weight predicted by the relative cue precision. This indicates that besides the relative cue precision, prior knowledge of distal cue dominance also influences orientation cue usage. In Experiments 2 and 3, participants walked a path that stopped at one object's location; they were informed of this explicitly in Experiment 2 but not in Experiment 3. Results showed that distal cue dominance still occurred in Experiment 3. However, in Experiment 2, proximal cue dominance appeared, and it was not predicted by the relative cue precision. These results suggest that prior knowledge of proximal cue dominance might have been invoked by the instruction about locations. Consistent with the Bayesian inference model, human cue usage in orientation is determined by relative cue precision and prior knowledge, and the choice of prior knowledge can be influenced by instructions.
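The "weight predicted by the relative cue precision" referred to above is the standard reliability ratio from maximum-likelihood cue integration, and the observed weight is recovered from the pointing response on conflict trials. A minimal sketch, with illustrative function names and cue-noise values rather than the experimental estimates:

```python
def precision_weight(sigma_distal, sigma_proximal):
    """Weight on the distal cue predicted by relative cue precision
    alone (inverse-variance weighting); 0.5 means equal reliance."""
    prec_distal = 1.0 / sigma_distal ** 2
    prec_proximal = 1.0 / sigma_proximal ** 2
    return prec_distal / (prec_distal + prec_proximal)

def observed_weight(response, heading_proximal, heading_distal):
    """Empirical weight on the distal cue recovered from a conflict
    trial, given the heading each cue alone would predict."""
    return (response - heading_proximal) / (heading_distal - heading_proximal)
```

An observed distal weight above the precision-predicted weight would then indicate a contribution of prior knowledge beyond bottom-up reliability, which is the pattern the abstract reports.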
Affiliation(s)
- Yafei Qi
- Department of Psychology, University of Alberta; Vanderbilt University, 415 Wilson Hall, Nashville, TN 37203, USA.
- Weimin Mou
- Department of Psychology, University of Alberta.
4
Scherer J, Müller MM, Unterbrink P, Meier S, Egelhaaf M, Bertrand OJN, Boeddeker N. Not seeing the forest for the trees: combination of path integration and landmark cues in human virtual navigation. Front Behav Neurosci 2024; 18:1399716. [PMID: 38835838] [PMCID: PMC11148297] [DOI: 10.3389/fnbeh.2024.1399716]
Abstract
Introduction: In order to successfully move from place to place, our brain often combines sensory inputs from various sources by dynamically weighting spatial cues according to their reliability and relevance for a given task. Two of the most important cues in navigation are the spatial arrangement of landmarks in the environment, and the continuous path integration of travelled distances and changes in direction. Several studies have shown that Bayesian integration of cues provides a good explanation for navigation in environments dominated by small numbers of easily identifiable landmarks. However, it remains largely unclear how cues are combined in more complex environments.
Methods: To investigate how humans process and combine landmarks and path integration in complex environments, we conducted a series of triangle completion experiments in virtual reality, in which we varied the number of landmarks from an open steppe to a dense forest, thus going beyond the spatially simple environments that have been studied in the past. We analysed spatial behaviour at both the population and individual level with linear regression models and developed a computational model, based on maximum likelihood estimation (MLE), to infer the underlying combination of cues.
Results: Overall homing performance was optimal in an environment containing three landmarks arranged around the goal location. With more than three landmarks, individual differences between participants in the use of cues were striking. For some, the addition of landmarks did not worsen their performance, whereas for others it seemed to impair their use of landmark information.
Discussion: It appears that navigation success in complex environments depends on the ability to identify the correct clearing around the goal location, suggesting that some participants may not be able to see the forest for the trees.
Affiliation(s)
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Martin M Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Patrick Unterbrink
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Sina Meier
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
5
Chen Y, Mou W. Path integration, rather than being suppressed, is used to update spatial views in familiar environments with constantly available landmarks. Cognition 2024; 242:105662. [PMID: 37952370] [DOI: 10.1016/j.cognition.2023.105662]
Abstract
This project tested three hypotheses conceptualizing the interaction between path integration based on self-motion and piloting based on landmarks in a familiar environment with persistent landmarks. The first hypothesis posits that path integration functions automatically, as in environments lacking persistent landmarks (environment-independent hypothesis). The second hypothesis suggests that persistent landmarks suppress path integration (suppression hypothesis). The third hypothesis proposes that path integration updates the spatial views of the environment (updating-spatial-views hypothesis). Participants learned a specific object's location. Subsequently, they undertook an outbound path originating from the object and then indicated the object's location (homing). In Experiments 1 and 1b, there were landmarks throughout the first 9 trials. On some later trials, the landmarks were presented during the outbound path but unexpectedly removed during homing (catch trials). On the last trials, there were no landmarks throughout (baseline trials). Experiments 2-3 were similar but added two identical objects (the original one and a rotated distractor) during homing on the catch and baseline trials. Experiment 4 replaced the two identical objects with two groups of landmarks. The results showed that in Experiments 1 and 1b, homing angular error on the first catch trial was significantly larger than on the matched baseline trial, undermining the environment-independent hypothesis. Conversely, in Experiments 2-4, the proportion of participants who recognized the original object or landmarks was similar between the first catch and the matched baseline trial, favoring the updating-spatial-views hypothesis over the suppression hypothesis. Therefore, while mismatches between updated spatial views and the actual views upon the unexpected removal of landmarks impair homing performance, the updated spatial views help eliminate ambiguous targets or landmarks within the familiar environment.
Affiliation(s)
- Yue Chen
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
- Weimin Mou
- Department of Psychology, University of Alberta, P217 Biological Sciences Bldg., Edmonton, Alberta T6G 2E9, Canada.
6
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. [PMID: 37038031] [DOI: 10.3758/s13423-023-02254-w]
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrates optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
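The optimality benchmark running through this literature is that, for two independent Gaussian cues, maximum-likelihood integration prescribes inverse-variance weights and a combined variance below that of either single cue. A minimal sketch of the standard prediction (function name and values illustrative):

```python
def optimal_combination(var_landmark, var_selfmotion):
    """Maximum-likelihood prediction for combining two independent
    Gaussian cues: the weight on the landmark cue and the variance
    of the combined estimate."""
    w_landmark = var_selfmotion / (var_landmark + var_selfmotion)
    var_combined = (var_landmark * var_selfmotion
                    / (var_landmark + var_selfmotion))
    return w_landmark, var_combined
```

Empirical weights (from cue-conflict trials) and dual-cue response variability are compared against these predictions to classify behavior as optimal or suboptimal.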
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
7
Harrison SJ, Reynolds N, Bishoff B, Stergiou N, White E. Homing tasks and distance matching tasks reveal different types of perceptual variables associated with perceiving self-motion during over-ground locomotion. Exp Brain Res 2022; 240:1257-1266. [PMID: 35199188] [DOI: 10.1007/s00221-022-06337-3]
Abstract
Self-motion perception refers to the ability to perceive how the body is moving through the environment. Perception of self-motion has been shown to depend upon the locomotor action patterns used to move the body through the environment. Two separate lines of enquiry have led to two distinct theories of this effect. One theory proposes that distances travelled during locomotion are perceived via higher-order perceptual variables detected by the haptic perceptual system; it posits two such variables, with the one implicated depending on the type of gait pattern used. A second theory proposes that self-motion is perceived via a higher-order perceptual variable termed multimodally specified energy expenditure (MSEE), and that the effect of locomotor action patterns on self-motion perception reflects changes in the metabolic cost of locomotion per unit of perceptually specified traversed distance. Here, we test the hypothesis that these distinct theories result from different choices in methodology: the gait-type account has been developed largely from the results of homing tasks, whereas the MSEE account has been developed from the results of distance matching tasks. If so, the seemingly innocuous change in experimental design from a homing task to a distance matching task should change the type of perceptual variables implicated in self-motion perception. To test this hypothesis, we closely replicated a recent study of the effect of gait type in all details bar one: we investigated a distance matching task rather than a homing task. As hypothesized, this change yielded results consistent with the predictions of MSEE and distinct from those of gait type. We further show that, unlike the effect of gait type, the effect of MSEE is unaffected by the availability of vision.
In sum, our findings support the existence of two distinct types of higher-order perceptual variables in self-motion perception. We discuss the roles of these two types of perceptual variables in supporting effective human wayfinding.
Affiliation(s)
- Steven J Harrison
- Department of Kinesiology, University of Connecticut, Storrs, CT, 06269, USA
- Center for Ecological Study of Perception and Action, University of Connecticut, Storrs, USA
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, USA
- Nicholas Reynolds
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, USA
- Brandon Bishoff
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, USA
- Nicholas Stergiou
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, USA
- Eliah White
- Department of Psychological Science, Northern Kentucky University, Highland Heights, USA
8
Combination and competition between path integration and landmark navigation in the estimation of heading direction. PLoS Comput Biol 2022; 18:e1009222. [PMID: 35143474] [PMCID: PMC8865642] [DOI: 10.1371/journal.pcbi.1009222]
Abstract
Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues. 
Successful navigation requires us to combine visual information about our environment with body-based cues about our own rotations and translations. In this work we investigated how these disparate sources of information work together to compute an estimate of heading. Using a novel virtual reality task we measured how humans integrate visual and body-based cues when there is mismatch between them—that is, when the estimate of heading from visual information is different from body-based cues. By building computational models of different strategies, we reveal that humans use a hybrid strategy for integrating visual and body-based cues—combining them when the mismatch between them is small and picking one or the other when the mismatch is large.
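The hybrid combine-versus-compete strategy described above can be caricatured with a simple threshold rule; the 90° threshold and the fixed visual weight below are illustrative stand-ins for the per-participant parameters the authors fit, and the function name is hypothetical.

```python
def hybrid_heading(idiothetic, allothetic, w_visual=0.5, threshold=90.0):
    """Toy combine-vs-compete rule for heading estimates (degrees):
    average the two cues when their mismatch is small, otherwise
    fall back on the body-based (idiothetic) estimate alone."""
    # Signed angular mismatch wrapped into (-180, 180].
    mismatch = (allothetic - idiothetic + 180.0) % 360.0 - 180.0
    if abs(mismatch) <= threshold:
        # Combine: pull the estimate toward the visual feedback.
        return (idiothetic + w_visual * mismatch) % 360.0
    # Compete: the discrepant visual cue is discarded.
    return idiothetic % 360.0
```

Small mismatches pull the heading estimate toward the visual feedback; large ones leave the body-based estimate untouched, reproducing the qualitative signature of the hybrid strategy.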
9
Abstract
Spatial navigation is a complex cognitive activity that depends on perception, action, memory, reasoning, and problem-solving. Effective navigation depends on the ability to combine information from multiple spatial cues to estimate one's position and the locations of goals. Spatial cues include landmarks and other visible features of the environment, and body-based cues generated by self-motion (vestibular, proprioceptive, and efferent information). A number of projects have investigated the extent to which visual cues and body-based cues are combined optimally according to statistical principles. Possible limitations of these investigations are that they have not accounted for navigators' prior experiences with or assumptions about the task environment and have not tested complete decision models. We examine cue combination in spatial navigation from a Bayesian perspective and present the fundamental principles of Bayesian decision theory. We show that a complete Bayesian decision model with an explicit loss function can explain a discrepancy between optimal cue weights and empirical cue weights observed by Chen et al. (Cognitive Psychology, 95, 105-144, 2017), and that the use of informative priors to represent cue bias can explain the incongruity between heading variability and heading direction observed by Zhao and Warren (Psychological Science, 26(6), 915-924, 2015b). We also discuss Petzschner and Glasauer's (Journal of Neuroscience, 31(47), 17220-17229, 2011) use of priors to explain biases in estimates of linear displacements during visual path integration. We conclude that Bayesian decision theory offers a productive theoretical framework for investigating human spatial navigation and believe that it will lead to a deeper understanding of navigational behaviors.
10
Newman PM, McNamara TP. Integration of visual landmark cues in spatial memory. Psychol Res 2021; 86:1636-1654. [PMID: 34420070] [PMCID: PMC8380114] [DOI: 10.1007/s00426-021-01581-8]
Abstract
Over the past two decades, much research has been conducted to investigate whether humans are optimal when integrating sensory cues during spatial memory and navigational tasks. Although this work has consistently demonstrated optimal integration of visual cues (e.g., landmarks) with body-based cues (e.g., path integration) during human navigation, little work has investigated how cues of the same sensory type are integrated in spatial memory. A few recent studies have reported mixed results, with some showing very little benefit to having access to more than one landmark, and others showing that multiple landmarks can be optimally integrated in spatial memory. In the current study, we employed a combination of immersive and non-immersive virtual reality spatial memory tasks to test adult humans' ability to integrate multiple landmark cues across six experiments. Our results showed that optimal integration of multiple landmark cues depends on the difficulty of the task, and that the presence of multiple landmarks can elicit an additional latent cue when estimating locations from a ground-level perspective, but not an aerial perspective.
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA.
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA
11
Assessing the relative contribution of vision to odometry via manipulations of gait in an over-ground homing task. Exp Brain Res 2021; 239:1305-1316. [PMID: 33630131] [DOI: 10.1007/s00221-021-06066-z]
Abstract
The visual, vestibular, and haptic perceptual systems are each able to detect self-motion. Such information can be integrated during locomotion to perceive traversed distances. The process of distance integration is referred to as odometry. Visual odometry relies on information in optic flow patterns. For haptic odometry, such information is associated with leg movement patterns. Recently, it has been shown that haptic odometry is differently calibrated for different types of gaits. Here, we use this fact to examine the relative contributions of the perceptual systems to odometry. We studied a simple homing task in which participants travelled set distances away from an initial starting location (outbound phase), before turning and attempting to walk back to that location (inbound phase). We manipulated whether outbound gait was a walk or a gallop-walk. We also manipulated the outbound availability of optic flow. Inbound reports were performed via walking with eyes closed. Consistent with previous studies of haptic odometry, inbound reports were shorter when the outbound gait was a gallop-walk. We showed that the availability of optic flow decreased this effect. In contrast, the availability of optic flow did not have an observable effect when the outbound gait was walking. We interpreted this to suggest that visual odometry and haptic odometry via walking are similarly calibrated. By measuring the decrease in shortening in the gallop-walk condition, and scaling it relative to the walk condition, we estimated a relative contribution of optic flow to odometry of 41%. Our results present a proof of concept for a new, potentially more generalizable, method for examining the contributions of different perceptual systems to odometry, and by extension, path integration. We discuss implications for understanding human wayfinding.
12
Abstract
Mobile organisms make use of spatial cues to navigate effectively in the world, such as visual and self-motion cues. Over the past decade, researchers have investigated how human navigators combine spatial cues, and whether cue combination is optimal according to statistical principles, by varying the number of cues available in homing tasks. The methodological approaches employed by researchers have varied, however. One important methodological difference exists in the number of cues available to the navigator during the outbound path for single-cue trials. In some studies, navigators have access to all spatial cues on the outbound path and all but one cue is eliminated prior to execution of the return path in the single-cue conditions; in other studies, navigators only have access to one spatial cue on the outbound and return paths in the single-cue conditions. If navigators can integrate cues along the outbound path, single-cue estimates may be contaminated by the undesired cue, which will in turn affect the predictions of models of optimal cue integration. In the current experiment, we manipulated the number of cues available during the outbound path for single-cue trials, while keeping dual-cue trials constant. This variable did not affect performance in the homing task; in particular, homing performance was better in dual-cue conditions than in single-cue conditions and was statistically optimal. Both methodological approaches to measuring spatial cue integration during navigation are appropriate.
13
Barhorst-Cates EM, Rand KM, Creem-Regehr SH. Navigating with peripheral field loss in a museum: learning impairments due to environmental complexity. Cogn Res Princ Implic 2019; 4:41. [PMID: 31641893] [PMCID: PMC6805832] [DOI: 10.1186/s41235-019-0189-9]
Abstract
Background: Previous research has found that spatial learning while navigating in novel spaces is impaired with an extremely restricted peripheral field of view (FOV) (a remaining FOV of 4°, but not of 10°) in an indoor environment with long hallways and mostly orthogonal turns. Here we tested the effects of restricted peripheral field on a similar real-world spatial learning task in an art museum, a more challenging environment for navigation because of valuable obstacles and unpredictable paths. Participants were guided along paths through the museum and learned the locations of pieces of art; at the end of each path, they pointed to the remembered landmarks. Throughout the spatial learning task, participants completed a concurrent auditory reaction time task to measure cognitive load.
Results: Unlike the previous study in a typical hallway environment, spatial learning was impaired with a simulated 10° FOV compared to a wider 60° FOV, as indicated by greater average pointing error with restricted FOV. Responses on the secondary reaction time task were also slower, suggesting increased attentional demands.
Conclusions: We suggest that the spatial learning deficit observed at this level of FOV restriction is due to the complex and unpredictable paths traveled in the museum environment. Our results also convey the importance of studying low-vision spatial cognition in irregularly structured environments representative of many real-world settings, which may increase the difficulty of spatial learning while navigating.
14
|
Howett D, Castegnaro A, Krzywicka K, Hagman J, Marchment D, Henson R, Rio M, King JA, Burgess N, Chan D. Differentiation of mild cognitive impairment using an entorhinal cortex-based test of virtual reality navigation. Brain 2019; 142:1751-1766. [PMID: 31121601] [PMCID: PMC6536917] [DOI: 10.1093/brain/awz116]
Abstract
The entorhinal cortex is one of the first regions to exhibit neurodegeneration in Alzheimer's disease, and as such identification of entorhinal cortex dysfunction may aid detection of the disease in its earliest stages. Extensive evidence demonstrates that the entorhinal cortex is critically implicated in navigation underpinned by the firing of spatially modulated neurons. This study tested the hypothesis that entorhinal-based navigation is impaired in pre-dementia Alzheimer's disease. Forty-five patients with mild cognitive impairment (26 with CSF Alzheimer's disease biomarker data: 12 biomarker-positive and 14 biomarker-negative) and 41 healthy control participants undertook an immersive virtual reality path integration test, as a measure of entorhinal-based navigation. Behavioural performance was correlated with MRI measures of entorhinal cortex volume, and the classification accuracy of the path integration task was compared with a battery of cognitive tests considered sensitive and specific for early Alzheimer's disease. Biomarker-positive patients exhibited larger errors in the navigation task than biomarker-negative patients, whose performance did not significantly differ from that of control participants. Path-integration performance correlated with Alzheimer's disease molecular pathology, with levels of CSF amyloid-β and total tau contributing independently to distance error. Path integration errors were negatively correlated with the volumes of the total entorhinal cortex and of its posteromedial subdivision. The path integration task demonstrated higher diagnostic sensitivity and specificity for differentiating biomarker positive versus negative patients (area under the curve = 0.90) than was achieved by the best of the cognitive tests (area under the curve = 0.57).
This study demonstrates that an entorhinal cortex-based virtual reality navigation task can differentiate patients with mild cognitive impairment at low and high risk of developing dementia, with classification accuracy superior to reference cognitive tests considered to be highly sensitive to early Alzheimer's disease. This study provides evidence that navigation tasks may aid early diagnosis of Alzheimer's disease, and the basis of this in animal cellular and behavioural studies provides the opportunity to answer the unmet need for translatable outcome measures for comparing treatment effect across preclinical and clinical trial phases of future anti-Alzheimer's drugs.
Affiliation(s)
- David Howett, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Andrea Castegnaro, Institute of Cognitive Neuroscience, University College London, London, UK; Department of Electrical Engineering, University College London, London, UK
- Katarzyna Krzywicka, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Johanna Hagman, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Deepti Marchment, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Richard Henson, MRC Cognition and Brain Sciences Unit, and Department of Psychiatry, University of Cambridge, UK
- Miguel Rio, Department of Electrical Engineering, University College London, London, UK
- John A King, Department of Clinical, Educational and Health Psychology, University College London, London, UK
- Neil Burgess, Institute of Cognitive Neuroscience, University College London, London, UK
- Dennis Chan, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
|
15
|
Abstract
We highlight that optimal cue combination does not represent a general principle of cue interaction during navigation, extending Rahnev & Denison's (R&D) summary of nonoptimal perceptual decisions to the navigation domain. However, we argue that the term "suboptimality" does not capture the way visual and nonvisual cues interact in navigational decisions.
|
16
|
Abstract
A basic set of navigation strategies supports navigational tasks ranging from homing to novel detours and shortcuts. To perform these last two tasks, it is generally thought that humans, mammals and perhaps some insects possess Euclidean cognitive maps, constructed on the basis of input from the path integration system. In this article, I review the rationale and behavioral evidence for this metric cognitive map hypothesis, and find it unpersuasive: in practice, there is little evidence for truly novel shortcuts in animals, and human performance is highly unreliable and biased by environmental features. I develop the alternative hypothesis that spatial knowledge is better characterized as a labeled graph: a network of paths between places augmented with local metric information. What distinguishes such a cognitive graph from a metric cognitive map is that this local information is not embedded in a global coordinate system, so spatial knowledge is often geometrically inconsistent. Human path integration appears to be better suited to piecewise measurements of path lengths and turn angles than to building a consistent map. In a series of experiments in immersive virtual reality, we tested human navigation in non-Euclidean environments and found that shortcuts manifest large violations of the metric postulates. The results are contrary to the Euclidean map hypothesis and support the cognitive graph hypothesis. Apparently Euclidean behavior, such as taking novel detours and approximate shortcuts, can be explained by the adaptive use of non-Euclidean strategies.
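The labelled-graph hypothesis described above can be made concrete: places are nodes, and each edge carries only local metric labels (path length, turn angle) that are not embedded in any global coordinate system, so the stored labels may be mutually inconsistent. A minimal sketch in Python, with all place names and distances invented for illustration:

```python
import heapq

# A "cognitive graph": nodes are places; each edge stores only local
# metric labels (path length, egocentric turn angle). Note there are no
# global (x, y) coordinates anywhere in the representation.
class CognitiveGraph:
    def __init__(self):
        self.edges = {}  # place -> list of (neighbor, length, turn_deg)

    def add_path(self, a, b, length, turn_deg=0.0):
        self.edges.setdefault(a, []).append((b, length, turn_deg))
        self.edges.setdefault(b, []).append((a, length, -turn_deg))

    def shortest_route(self, start, goal):
        """Dijkstra over locally labelled path lengths (route knowledge)."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            dist, place, route = heapq.heappop(frontier)
            if place == goal:
                return dist, route
            if place in visited:
                continue
            visited.add(place)
            for nbr, length, _ in self.edges.get(place, []):
                if nbr not in visited:
                    heapq.heappush(frontier, (dist + length, nbr, route + [nbr]))
        return float("inf"), []

# Invented example: local labels need not admit a Euclidean embedding,
# yet route finding and detours still work.
g = CognitiveGraph()
g.add_path("home", "fountain", 10.0, turn_deg=90.0)
g.add_path("fountain", "cafe", 5.0, turn_deg=-45.0)
g.add_path("home", "cafe", 30.0)  # a long way round
dist, route = g.shortest_route("home", "cafe")
print(dist, route)  # 15.0 ['home', 'fountain', 'cafe']
```

This captures why such a representation supports route finding and novel detours while tolerating the geometric inconsistencies the review reports: consistency is never checked, only local labels are consulted.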
Affiliation(s)
- William H Warren, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
|
17
|
Guo J, Huang J, Wan X. Influence of route decision-making and experience on human path integration. Acta Psychol (Amst) 2019; 193:66-72. [PMID: 30594863] [DOI: 10.1016/j.actpsy.2018.12.005]
Abstract
Path integration refers to a process of integrating information regarding self-motion to estimate one's current position and orientation. Here we reported two experiments designed to investigate whether, and if so, how human path integration could be influenced by route decision-making and previous experience. Using head-mounted display virtual reality and hallway mazes, we asked participants to travel along several hallways and then to directly return to the starting point, namely a path completion task. We created an active condition in which the participants had the opportunity to voluntarily select the structure of outbound paths, and a passive condition in which they followed the outbound paths chosen by others. Each participant was required to take part in the study on two consecutive days, and they performed the task under different (in Experiment 1) or the same conditions (in Experiment 2) on these two days. The results of both experiments revealed a facilitation effect of route decision-making on the participants' performance on the first day. The results also revealed that both their performance and path selection strategies on the second day were subject to their experience obtained from the first day. Collectively, these findings suggest that human path integration may be improved by having the opportunity to make decisions on the structure of outbound paths and/or more experience with the task.
|
18
|
Abstract
Navigation is influenced by body-based self-motion cues that are integrated over time, in a process known as path integration, as well as by environmental cues such as landmarks and room shape. In two experiments we explored whether humans combine path integration and environmental cues (Exp. 1: room shape; Exp. 2: room shape, single landmark, and multiple landmarks) to reduce response variability when returning to a previously visited location. Participants walked an outbound path in an immersive virtual environment before attempting to return to the path origin. Path integration and an environmental cue were both available during the outbound path, but experimental manipulations created single- and dual-cue conditions during the return path. The response variance when returning to the path origin was reduced when both cues were available, consistent with optimal integration predicted on the basis of Bayesian principles. The findings indicate that humans optimally integrate multiple spatial cues during navigation. Additionally, a large (but not a small) cue conflict caused participants to assign a higher weight to path integration than to environmental cues, despite the relatively greater precision afforded by the environmental cues.
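The "optimal integration predicted on the basis of Bayesian principles" mentioned above is inverse-variance weighting: each cue is weighted by its reliability (1/variance), and the combined estimate always has lower variance than either cue alone. A minimal numerical sketch (all values invented for illustration):

```python
def combine_cues(mu_env, var_env, mu_pi, var_pi):
    """Reliability-weighted (Bayesian-optimal) combination of two
    Gaussian cues: an environmental cue and path integration."""
    r_env, r_pi = 1.0 / var_env, 1.0 / var_pi   # reliabilities
    w_env = r_env / (r_env + r_pi)              # weight on environmental cue
    mu = w_env * mu_env + (1.0 - w_env) * mu_pi # combined estimate
    var = 1.0 / (r_env + r_pi)                  # below min(var_env, var_pi)
    return mu, var, w_env

# Invented example: a precise environmental cue vs. noisier path
# integration. The combined response is pulled toward the environmental
# cue, and its variance drops below that of either single cue.
mu, var, w_env = combine_cues(mu_env=0.0, var_env=1.0, mu_pi=2.0, var_pi=4.0)
print(round(mu, 2), round(var, 2), round(w_env, 2))  # 0.4 0.8 0.8
```

The variance-reduction prediction is exactly what the dual-cue conditions in this study test: if response variance with both cues is no lower than with the better single cue, integration is not optimal.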
|
19
|
Yeap WK, Hossain M. What is a cognitive map? Unravelling its mystery using robots. Cogn Process 2018; 20:203-225. [PMID: 30539324] [DOI: 10.1007/s10339-018-0895-0]
Abstract
Despite years of research into cognitive mapping, the process remains controversial and little understood. A computational theory of cognitive mapping is needed, but developing it is difficult due to the lack of a clear interpretation of the empirical findings. For example, without knowing what a cognitive map is or how landmarks are defined, how does one develop a computational theory for it? We thus face the conundrum of trying to develop a theory without knowing what is computed. In this paper, we overcome the conundrum by abandoning the idea that the process begins by integrating successive views to form a global map of the environment experienced. Instead, we argue that cognitive mapping begins by remembering views as local maps and we empower a mobile robot with the process and study its behaviour as it acquires its "cognitive map". Our results show that what is computed initially could be described as a "route" map and from it, some form of a "survey map" can be computed. The latter, as it turns out, bears much of the characteristics of a cognitive map. Based on our findings, we discuss what a cognitive map is, how cognitive mapping evolves and why such a process also supports the perception of a stable world.
Affiliation(s)
- Wai K Yeap, Centre for Artificial Intelligence Research, Auckland University of Technology, Auckland, New Zealand
- Md Hossain, Centre for Artificial Intelligence Research, Auckland University of Technology, Auckland, New Zealand
|
20
|
Zhao M. Human spatial representation: what we cannot learn from the studies of rodent navigation. J Neurophysiol 2018; 120:2453-2465. [PMID: 30133384] [DOI: 10.1152/jn.00781.2017]
Abstract
Studies of human and rodent navigation often reveal a remarkable cross-species similarity between the cognitive and neural mechanisms of navigation. Such cross-species resemblance often overshadows some critical differences between how humans and nonhuman animals navigate. In this review, I propose that a navigation system requires both a storage system (i.e., representing spatial information) and a positioning system (i.e., sensing spatial information) to operate. I then argue that the way humans represent spatial information is different from that inferred from the cellular activity observed during rodent navigation. Such difference spans the whole hierarchy of spatial representation, from representing the structure of an environment to the representation of subregions of an environment, routes and paths, and the distance and direction relative to a goal location. These cross-species inconsistencies suggest that what we learn from rodent navigation does not always transfer to human navigation. Finally, I argue for closing the loop for the dominant, unidirectional animal-to-human approach in navigation research so that insights from behavioral studies of human navigation may also flow back to shed light on the cellular mechanisms of navigation for both humans and other mammals (i.e., a human-to-animal approach).
Affiliation(s)
- Mintao Zhao, School of Psychology, University of East Anglia, Norwich, United Kingdom; Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|
21
|
Twyman AD, Holden MP, Newcombe NS. First Direct Evidence of Cue Integration in Reorientation: A New Paradigm. Cogn Sci 2017; 42 Suppl 3:923-936. [PMID: 29178140] [DOI: 10.1111/cogs.12575]
Abstract
There are several models of the use of geometric and feature cues in reorientation (Cheng, Huttenlocher, & Newcombe). The adaptive combination approach posits that people integrate cues with weights that depend on cue salience and learning, or, when discrepancies are large, they choose between cues based on these variables (Cheng, Shettleworth, Huttenlocher, & Rieser; Newcombe & Huttenlocher). In a new paradigm designed to evaluate integration and choice, disoriented participants attempted to return to a heading direction in a trapezoidal enclosure in which feature and geometric cues both unambiguously specified a heading, but later the feature was moved. With discrepancies greater than 90 degrees, participants chose geometry. With smaller discrepancies, integration appeared in three of five situations; otherwise, participants used geometry alone. Variation depended on the direction of feature movement and on whether the nearest corner was acute or obtuse. The results have implications for contrasting adaptive combination and modularity theory, and for future research, offering a new paradigm for reorientation research and for testing cue integration more broadly.
|
22
|
Warren WH, Rothman DB, Schnapp BH, Ericson JD. Wormholes in virtual space: From cognitive maps to cognitive graphs. Cognition 2017; 166:152-163. [PMID: 28577445] [DOI: 10.1016/j.cognition.2017.05.020]
Abstract
Humans and other animals build up spatial knowledge of the environment on the basis of visual information and path integration. We compare three hypotheses about the geometry of this knowledge of navigation space: (a) 'cognitive map' with metric Euclidean structure and a consistent coordinate system, (b) 'topological graph' or network of paths between places, and (c) 'labelled graph' incorporating local metric information about path lengths and junction angles. In two experiments, participants walked in a non-Euclidean environment, a virtual hedge maze containing two 'wormholes' that visually rotated and teleported them between locations. During training, they learned the metric locations of eight target objects from a 'home' location, which were visible individually. During testing, shorter wormhole routes to a target were preferred, and novel shortcuts were directional, contrary to the topological hypothesis. Shortcuts were strongly biased by the wormholes, with mean constant errors of 37° and 41° (45° expected), revealing violations of the metric postulates in spatial knowledge. In addition, shortcuts to targets near wormholes shifted relative to flanking targets, revealing 'rips' (86% of cases), 'folds' (91%), and ordinal reversals (66%) in spatial knowledge. Moreover, participants were completely unaware of these geometric inconsistencies, reflecting a surprising insensitivity to Euclidean structure. The probability of the shortcut data under the Euclidean map model and labelled graph model indicated decisive support for the latter (BF_GM > 100). We conclude that knowledge of navigation space is best characterized by a labelled graph, in which local metric information is approximate, geometrically inconsistent, and not embedded in a common coordinate system. This class of 'cognitive graph' models supports route finding, novel detours, and rough shortcuts, and has the potential to unify a range of data on spatial navigation.
Affiliation(s)
- William H Warren, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
- Daniel B Rothman, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
- Benjamin H Schnapp, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
- Jonathan D Ericson, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Box 1821, 190 Thayer St., Providence, RI 02912, USA
|
23
|
Chen X, McNamara TP, Kelly JW, Wolbers T. Cue combination in human spatial navigation. Cogn Psychol 2017; 95:105-144. [PMID: 28478330] [DOI: 10.1016/j.cogpsych.2017.04.003]
Abstract
This project investigated the ways in which visual cues and bodily cues from self-motion are combined in spatial navigation. Participants completed a homing task in an immersive virtual environment. In Experiments 1A and 1B, the reliability of visual cues and self-motion cues was manipulated independently and within-participants. Results showed that participants weighted visual cues and self-motion cues based on their relative reliability and integrated these two cue types optimally or near-optimally according to Bayesian principles under most conditions. In Experiment 2, the stability of visual cues was manipulated across trials. Results indicated that cue instability affected cue weights indirectly by influencing cue reliability. Experiment 3 was designed to mislead participants about cue reliability by providing distorted feedback on the accuracy of their performance. Participants received feedback that their performance with visual cues was better and that their performance with self-motion cues was worse than it actually was or received the inverse feedback. Positive feedback on the accuracy of performance with a given cue improved the relative precision of performance with that cue. Bayesian principles still held for the most part. Experiment 4 examined the relations among the variability of performance, rated confidence in performance, cue weights, and spatial abilities. Participants took part in the homing task over two days and rated confidence in their performance after every trial. Cue relative confidence and cue relative reliability had unique contributions to observed cue weights. The variability of performance was less stable than rated confidence over time. Participants with higher mental rotation scores performed relatively better with self-motion cues than visual cues. 
Across all four experiments, consistent correlations were found between observed weights assigned to cues and relative reliability of cues, demonstrating that the cue-weighting process followed Bayesian principles. Results also pointed to the important role of subjective evaluation of performance in the cue-weighting process and led to a new conceptualization of cue reliability in human spatial navigation.
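The "observed weights assigned to cues" in studies of this kind are commonly estimated from small cue-conflict trials: the visual cue and the self-motion cue indicate slightly different home locations, and the visual-cue weight is read off from where the mean response falls between them. A hedged one-dimensional sketch of that estimator, with invented numbers (not the study's actual data or necessarily its exact analysis):

```python
def observed_visual_weight(responses, loc_visual, loc_selfmotion):
    """Estimate the weight on the visual cue from conflict-trial
    responses: w = (mean response - self-motion location) /
    (visual location - self-motion location)."""
    mean_resp = sum(responses) / len(responses)
    return (mean_resp - loc_selfmotion) / (loc_visual - loc_selfmotion)

# Invented conflict trial: the two cues disagree by 1 m; responses
# cluster nearer the visual cue, implying a visual weight above 0.5.
responses = [0.7, 0.8, 0.75, 0.85, 0.7]
w_vis = observed_visual_weight(responses, loc_visual=1.0, loc_selfmotion=0.0)
print(round(w_vis, 2))  # 0.76
```

Correlating weights obtained this way with the relative reliability of each cue (estimated from single-cue response variance) is what supports the conclusion that weighting followed Bayesian principles.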
Affiliation(s)
- Xiaoli Chen, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Thomas Wolbers, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
|
24
|
Chrastil ER, Warren WH. Rotational error in path integration: encoding and execution errors in angle reproduction. Exp Brain Res 2017; 235:1885-1897. [PMID: 28303327] [DOI: 10.1007/s00221-017-4910-y]
Abstract
Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.
Affiliation(s)
- Elizabeth R Chrastil, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, USA; Department of Geography, University of California Santa Barbara, 1832 Ellison Hall, Santa Barbara, CA 93106-4060, USA
- William H Warren, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, USA
|