1. Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. [PMID: 37780704] [PMCID: PMC10534010] [DOI: 10.3389/fneur.2023.1266513]
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models based on single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with other modality signals. For example, vestibular and optic flow could provide congruent and incongruent signals regarding spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
2. Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337] [PMCID: PMC9849545] [DOI: 10.1007/s12264-022-00916-8]
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
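The statistically Bayesian-optimal cue combination mentioned in this abstract is commonly modeled as inverse-variance-weighted averaging of the visual and vestibular estimates. A minimal sketch for orientation (the function name and the example numbers are illustrative assumptions, not values from the study):

```python
import numpy as np

def integrate_headings(x_vis, sigma_vis, x_vest, sigma_vest):
    """Reliability-weighted (Bayesian-optimal) fusion of two heading cues.

    Each cue is weighted by its inverse variance, so the more reliable
    cue dominates, and the fused estimate is more precise than either
    cue alone.
    """
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    w_vest = 1.0 - w_vis
    x_hat = w_vis * x_vis + w_vest * x_vest
    sigma_hat = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                        (sigma_vis**2 + sigma_vest**2))
    return x_hat, sigma_hat

# Hypothetical example: visual heading 10 deg (sd 2), vestibular 4 deg (sd 4).
# The fused estimate lies closer to the more reliable visual cue (8.8 deg),
# with a smaller sd than either single cue.
x_hat, sigma_hat = integrate_headings(10.0, 2.0, 4.0, 4.0)
```

Psychophysical tests of this model typically compare the measured bimodal discrimination threshold against the prediction from the two unimodal thresholds.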
3. Chaudhary S, Saywell N, Taylor D. The Differentiation of Self-Motion From External Motion Is a Prerequisite for Postural Control: A Narrative Review of Visual-Vestibular Interaction. Front Hum Neurosci 2022; 16:697739. [PMID: 35210998] [PMCID: PMC8860980] [DOI: 10.3389/fnhum.2022.697739]
Abstract
The visual system perceives environmental stimuli and interacts with other sensory systems to generate the visual and postural responses that maintain postural stability. Although the three sensory systems (visual, vestibular, and somatosensory) work concurrently to maintain postural control, the interaction between the visual and vestibular systems is vital for differentiating self-motion from external motion. The visual system influences postural control, playing a key role in perceiving the information required for this differentiation. Its main afferent information consists of optic flow and retinal slip, which lead to the generation of visual and postural responses. Visual fixations generated by the visual system interact with this afferent information and with the vestibular system to maintain visual and postural stability. This review synthesizes the role of the visual system and its interaction with the vestibular system in maintaining postural stability.
4. Zhao B, Zhang Y, Chen A. Encoding of vestibular and optic flow cues to self-motion in the posterior superior temporal polysensory area. J Physiol 2021; 599:3937-3954. [PMID: 34192812] [DOI: 10.1113/jp281913]
Abstract
KEY POINTS
- Neurons in the posterior superior temporal polysensory area (STPp) showed significant directional selectivity in response to vestibular, optic flow, and combined visual-vestibular stimuli.
- Compared with the dorsal medial superior temporal area (MSTd), visual latency was longer in STPp but vestibular latency was shorter.
- Heading preferences under combined stimulation in STPp were usually dominated by visual signals.
- Cross-modal enhancement was observed in STPp when vestibular and visual cues were presented together at their preferred headings.

ABSTRACT
Human neuroimaging data suggest that the superior temporal polysensory area (STP) may be involved in vestibular-visual interaction during heading computation, but heading selectivity had not been examined in the macaque. Here, we investigated the convergence of optic flow and vestibular signals in macaque STP using a virtual-reality system and found that 6.3% of STP neurons showed multisensory responses, with visual and vestibular direction preferences either congruent or opposite in roughly equal proportion. The percentage of vestibular-tuned cells (18.3%) was much smaller than that of visual-tuned cells (30.4%) in STP, and vestibular tuning strength was usually weaker than visual tuning strength. Visual latency was significantly longer in STPp than in MSTd, whereas vestibular latency was significantly shorter. During the bimodal condition, STP cells' responses were dominated by visual signals: the visual heading preference was not affected by the vestibular signals, but response amplitudes were modulated by vestibular signals in a subadditive way.
Affiliation(s)
- Bin Zhao: Ministry of Education Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
- Yi Zhang: Ministry of Education Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
- Aihua Chen: Ministry of Education Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
5.
Abstract
Spatial navigation is a complex cognitive process based on multiple senses that are integrated and processed by a wide network of brain areas. Previous studies have revealed the retrosplenial complex (RSC) to be modulated in a task-related manner during navigation. However, these studies restricted participants' movement to stationary setups, which might have affected heading computations due to the absence of vestibular and proprioceptive inputs. Here, we present evidence of human RSC theta oscillations (4-8 Hz) in an active spatial navigation task in which participants ambulated from one location to several other points while the position of a landmark and the starting location were updated. The results revealed theta power in the RSC to be pronounced during heading changes but not during translational movements, indicating that physical rotations induce human RSC theta activity. This finding provides potential evidence of head-direction computation in the RSC of healthy humans during active spatial navigation.
6. Dynamics of Heading and Choice-Related Signals in the Parieto-Insular Vestibular Cortex of Macaque Monkeys. J Neurosci 2021; 41:3254-3265. [PMID: 33622780] [DOI: 10.1523/jneurosci.2275-20.2021]
Abstract
Perceptual decision-making is increasingly being understood to involve an interaction between bottom-up sensory-driven signals and top-down choice-driven signals, but how these signals interact to mediate perception is not well understood. The parieto-insular vestibular cortex (PIVC) is an area with prominent vestibular responsiveness, and previous work has shown that inactivating PIVC impairs vestibular heading judgments. To investigate the nature of PIVC's contribution to heading perception, we recorded extracellularly from PIVC neurons in two male rhesus macaques during a heading discrimination task, and compared findings with data from previous studies of dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas using identical stimuli. By computing partial correlations between neural responses, heading, and choice, we find that PIVC activity reflects a dynamically changing combination of sensory and choice signals. In addition, the sensory and choice signals are more balanced in PIVC, in contrast to the sensory dominance in MSTd and choice dominance in VIP. Interestingly, heading and choice signals in PIVC are negatively correlated during the middle portion of the stimulus epoch, reflecting a mismatch in the polarity of heading and choice signals. We anticipate that these results will help unravel the mechanisms of interaction between bottom-up sensory signals and top-down choice signals in perceptual decision-making, leading to more comprehensive models of self-motion perception.

SIGNIFICANCE STATEMENT
Vestibular information is important for our perception of self-motion, and various cortical regions in primates show vestibular heading selectivity. Inactivation of the macaque vestibular cortex substantially impairs the precision of vestibular heading discrimination, more so than inactivation of other multisensory areas. Here, we record for the first time from the vestibular cortex while monkeys perform a forced-choice heading discrimination task, and we compare results with data collected previously from other multisensory cortical areas. We find that vestibular cortex activity reflects a dynamically changing combination of sensory and choice signals, with both similarities and notable differences with other multisensory areas.
7. Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. Front Virtual Real 2021; 2:641650. [PMID: 33860281] [PMCID: PMC8046008] [DOI: 10.3389/frvir.2021.641650]
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner (Correspondence): Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States
- Anouk Lamontagne: School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada
8. Nguyen-Vo T, Riecke BE, Stuerzlinger W, Pham DM, Kruijff E. NaviBoard and NaviChair: Limited Translation Combined with Full Rotation for Efficient Virtual Locomotion. IEEE Trans Vis Comput Graph 2021; 27:165-177. [PMID: 31443029] [DOI: 10.1109/tvcg.2019.2935730]
Abstract
Walking has always been considered the gold standard for navigation in Virtual Reality research. Though full rotation is no longer a technical challenge, physical translation is still restricted by limited tracked areas. While rotational information has been shown to be important, the benefit of the translational component is still unclear, with mixed results in previous work. To address this gap, we conducted a mixed-method experiment to compare four levels of translational cues and control: none (using the trackpad of the HTC Vive controller to translate), upper-body leaning (sitting on a "NaviChair," leaning the upper body to locomote), whole-body leaning/stepping (standing on a platform called NaviBoard, leaning the whole body or stepping one foot off center to navigate), and full translation (physically walking). Results showed that translational cues and control had significant effects on various measures, including task performance, task load, and simulator sickness. While participants performed significantly worse when they used a controller with no embodied translational cues, there was no significant difference between the NaviChair, NaviBoard, and actual walking. These results suggest that translational body-based motion cues and control from a low-cost leaning/stepping interface might provide enough sensory information to support spatial updating, spatial awareness, and efficient locomotion in VR, although future work will need to investigate how these results might or might not generalize to other tasks and scenarios.
9. Sato H, Morimoto Y, Remijn GB, Seno T. Differences in Three Vection Indices (Latency, Duration, and Magnitude) Induced by "Camera-Moving" and "Object-Moving" in a Virtual Computer Graphics World, Despite Similarity in the Retinal Images. Iperception 2020; 11:2041669520958430. [PMID: 33149877] [PMCID: PMC7580144] [DOI: 10.1177/2041669520958430]
Abstract
There are two main ways to create a self-motion (vection) situation in three-dimensional computer graphics (CG): moving a camera toward an object ("camera moving") or moving the object and its surrounding environment toward the camera ("object moving"). Because the two methods differ considerably in the amount of computation involved in generating CG, knowing how each affects self-motion perception is important to CG creators and psychologists. Here, we simulated self-motion in a virtual three-dimensional CG world, without stereoscopic disparity, that correctly reflected lighting and glare. Self-motion was induced by "camera moving" or by "object moving," which in the present experiments meant moving a tunnel surrounding the camera toward the camera. This produced two retinal images that were virtually identical in Experiment 1 and very similar in Experiments 2 and 3. The stimuli were presented on a large plasma display to 15 naive participants and induced substantial vection. Three experiments comparing vection strength between the two methods found weak but significant differences. The results suggest that, when creating CG visual experiences, "camera moving" induces stronger vection.
Affiliation(s)
- Hirotaro Sato: Faculty of Design, Kyushu University, Fukuoka, Japan
- Yuki Morimoto: Faculty of Design, Kyushu University, Fukuoka, Japan
- Takeharu Seno: Faculty of Design, Kyushu University, Fukuoka, Japan
10. Velocity influences the relative contributions of visual and vestibular cues to self-acceleration. Exp Brain Res 2020; 238:1423-1432. [DOI: 10.1007/s00221-020-05824-9]
11. Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. [PMID: 32130052] [DOI: 10.1152/jn.00019.2020]
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or whether they should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, divided into 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, including every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of common causation was higher during alignment in cardinal than noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. When trials with common causation perception were compared with those without, inertial heading perception had a stronger bias toward visual stimulus direction.

NEW & NOTEWORTHY
Optic flow ambiguously represents self-motion or environmental motion. When these are in different directions, it is uncertain whether they are integrated into a common percept. This study addresses that issue by determining whether the two modalities are perceived as consistent and by measuring their perceived directions to quantify the degree of influence. The visual stimulus can significantly influence the inertial stimulus even when the two are perceived as inconsistent.
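Common-causation judgments of the kind reported above are often modeled with Bayesian causal inference, which compares the evidence that both cues arose from one heading against the evidence for two independent headings. A minimal sketch under zero-mean Gaussian assumptions; the prior width, prior probability of a common cause, and all example values are illustrative, not fits to this study's data:

```python
import numpy as np

def prob_common_cause(x_vis, x_inert, sigma_vis, sigma_inert,
                      sigma_prior=45.0, p_common=0.5):
    """Posterior probability that visual and inertial heading cues
    share a common cause, under a simple Gaussian causal-inference
    model (zero-mean heading prior).
    """
    var_v, var_i, var_p = sigma_vis**2, sigma_inert**2, sigma_prior**2
    # Likelihood of both cues given ONE shared heading (integrated out)
    var_c1 = var_v * var_i + var_v * var_p + var_i * var_p
    like_c1 = np.exp(-0.5 * ((x_vis - x_inert)**2 * var_p +
                             x_vis**2 * var_i + x_inert**2 * var_v) / var_c1)
    like_c1 /= 2 * np.pi * np.sqrt(var_c1)
    # Likelihood given TWO independent headings
    like_c2 = (np.exp(-0.5 * x_vis**2 / (var_v + var_p)) /
               np.sqrt(2 * np.pi * (var_v + var_p)) *
               np.exp(-0.5 * x_inert**2 / (var_i + var_p)) /
               np.sqrt(2 * np.pi * (var_i + var_p)))
    # Bayes' rule over the two causal structures
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

# Small cue separations favor a common cause; large separations do not.
p_near = prob_common_cause(15.0, 0.0, 10.0, 10.0)
p_far = prob_common_cause(120.0, 0.0, 10.0, 10.0)
```

Such a model reproduces the qualitative pattern reported above: common causation is likely at 30° separations and unlikely at ≥90°, while partial integration can persist even when the cues are judged independent.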
Affiliation(s)
- Raul Rodriguez: Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane: Departments of Biomedical Engineering, Otolaryngology, and Neuroscience, University of Rochester, Rochester, New York
12. Zhao H, Straub D, Rothkopf CA. The visual control of interceptive steering: How do people steer a car to intercept a moving target? J Vis 2019; 19:11. [PMID: 31830240] [DOI: 10.1167/19.14.11]
Abstract
The visually guided interception of a moving target is a fundamental visuomotor task that humans can do with ease. But how humans carry out this task is still unclear despite numerous empirical investigations. Measurements of angular variables during human interception have suggested three possible strategies: the pursuit strategy, the constant bearing angle strategy, and the constant target-heading strategy. Here, we review previous experimental paradigms and show that some of them do not allow one to distinguish among the three strategies. Based on this analysis, we devised a virtual driving task that allows investigating which of the three strategies best describes human interception. Crucially, we measured participants' steering, head, and gaze directions over time for three different target velocities. Subjects initially aligned head and gaze in the direction of the car's heading. When the target appeared, subjects centered their gaze on the target, pointed their head slightly off the heading direction toward the target, and maintained an approximately constant target-heading angle, whose magnitude varied across participants, while the target's bearing angle continuously changed. With a second condition, in which the target was partially occluded, we investigated several alternative hypotheses about participants' visual strategies. Overall, the results suggest that interceptive steering is best described by the constant target-heading strategy and that gaze and head are coordinated to continuously acquire visual information to achieve successful interception.
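The three candidate strategies above are distinguished by which angular quantity the steerer holds constant: gaze direction relative to the target (pursuit), the world-frame bearing of the target (constant bearing angle), or the target's direction relative to the current heading (constant target-heading). A minimal sketch of the two key angles; the function names and the coordinate convention (x-east, y-north, angles in degrees) are assumptions for illustration:

```python
import math

def bearing_angle(obs_pos, target_pos):
    """World-frame bearing: direction from observer to target, in degrees.
    A constant-bearing-angle interceptor steers so this value stays fixed."""
    return math.degrees(math.atan2(target_pos[1] - obs_pos[1],
                                   target_pos[0] - obs_pos[0]))

def target_heading_angle(obs_pos, heading_deg, target_pos):
    """Target-heading angle: bearing expressed relative to the observer's
    current heading, wrapped to (-180, 180]. A constant-target-heading
    interceptor steers so this value stays fixed while the world-frame
    bearing is free to drift."""
    rel = bearing_angle(obs_pos, target_pos) - heading_deg
    return (rel + 180.0) % 360.0 - 180.0

# Observer at the origin, target ahead-right at (10, 10):
# the bearing is 45 deg; relative to a 0-deg heading the target-heading
# angle is also 45 deg, but relative to a 90-deg heading it is -45 deg.
b = bearing_angle((0.0, 0.0), (10.0, 10.0))
th0 = target_heading_angle((0.0, 0.0), 0.0, (10.0, 10.0))
th90 = target_heading_angle((0.0, 0.0), 90.0, (10.0, 10.0))
```

Logging both angles over time, as the study did with steering, head, and gaze data, is what lets one test which quantity participants actually hold constant.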
Affiliation(s)
- Huaiyong Zhao: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Dominik Straub: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany; Center for Cognitive Science, Technical University Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Germany
13. Material surface properties modulate vection strength. Exp Brain Res 2019; 237:2675-2690. [DOI: 10.1007/s00221-019-05620-0]
14. de Winkel KN, Kurtz M, Bülthoff HH. Effects of visual stimulus characteristics and individual differences in heading estimation. J Vis 2019; 18:9. [PMID: 30347100] [DOI: 10.1167/18.11.9]
Abstract
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground-plane scene rather than a dot-cloud scene. Although the factors affected error magnitude, differences in error direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.
Affiliation(s)
- Ksander N de Winkel: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Kurtz: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands
- Heinrich H Bülthoff: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
15. Cullen KE, Taube JS. Our sense of direction: progress, controversies and challenges. Nat Neurosci 2017; 20:1465-1473. [PMID: 29073639] [DOI: 10.1038/nn.4658]
Abstract
In this Perspective, we evaluate current progress in understanding how the brain encodes our sense of direction, within the context of parallel work focused on how early vestibular pathways encode self-motion. In particular, we discuss how these systems work together and provide evidence that they involve common mechanisms. We first consider the classic view of the head direction cell and results of recent experiments in rodents and primates indicating that inputs to these neurons encode multimodal information during self-motion, such as proprioceptive and motor efference copy signals, including gaze-related information. We also consider the paradox that, while the head-direction network is generally assumed to generate a fixed representation of perceived directional heading, this computation would need to be dynamically updated when the relationship between voluntary motor command and its sensory consequences changes. Such situations include navigation in virtual reality and head-restricted conditions, since the natural relationship between visual and extravisual cues is altered.
Affiliation(s)
- Kathleen E Cullen: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey S Taube: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
16. Rodriguez R, Crane BT. Effect of range of heading differences on human visual-inertial heading estimation. Exp Brain Res 2019; 237:1227-1237. [PMID: 30847539] [DOI: 10.1007/s00221-019-05506-1]
Abstract
Both visual and inertial cues are salient in heading determination. However, optic flow can ambiguously represent self-motion or environmental motion, and it is unclear when visual and inertial heading cues are judged to share a common cause and are integrated, rather than being perceived independently. In four experiments, visual and inertial headings were presented simultaneously, with ten subjects reporting visual or inertial headings in separate trial blocks. Experiment 1 examined inertial headings within 30° of straight ahead and visual headings offset by up to 60°. Perception of the inertial heading was shifted in the direction of the visual stimulus by as much as 35° at the 60° offset, while perception of the visual stimulus remained largely uninfluenced. Experiment 2 used a ±140° range of inertial headings with up to 120° of visual offset. This experiment found variable behavior between subjects, with most perceiving the sensory stimuli as shifted toward an intermediate heading but a few perceiving the headings independently. The visual and inertial headings influenced each other even at the largest offsets. Experiments 3 and 4 used inertial headings similar to Experiments 1 and 2, respectively, except that subjects reported the direction of environmental motion. Experiment 4 displayed perceptual influences similar to Experiment 2, but in Experiment 3 percepts were independent. The results suggest that visual and inertial stimuli tend to be perceived as having common causation in most subjects at offsets up to 90°, although with significant variation between individuals. Limiting the range of inertial headings caused the visual heading to dominate perception.
Affiliation(s)
- Raul Rodriguez: Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
- Benjamin T Crane: Departments of Bioengineering, Otolaryngology, and Neuroscience, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
17. Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238] [PMCID: PMC6416181] [DOI: 10.3389/fneur.2019.00063]
Abstract
In this manuscript, we comprehensively review both the human and animal literature on vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis of how and where vestibular signals are processed, from the peripheral vestibular system through the brainstem and cerebellum to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movements, postural control, and spatial awareness, as well as new discoveries that such perception can impact numerical cognition, human affect, and bodily self-consciousness.
Affiliation(s)
- Zelie Britton
- Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
- Qadeer Arshad
- Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
18
Macuga KL. Multisensory Influences on Driver Steering During Curve Navigation. Hum Factors 2019; 61:337-347. PMID: 30320509; DOI: 10.1177/0018720818805898.
Abstract
OBJECTIVE: The effects of inertial (vestibular and somatosensory) information on driver steering during curve navigation were investigated, using an electric four-wheel mobility vehicle outfitted with a steering wheel and a portable virtual reality system.
BACKGROUND: When driving, multiple sources of perceptual information are available. Researchers have focused on visual information, which plays a critical role in steering control. However, it is not yet well established how inertial information might contribute.
METHODS: I biased inertial cues by varying visual/inertial gains (doubled, halved, reversed) as drivers negotiated curving paths, and measured steering accuracy and efficiency. I also assessed whether exposure to inertial biases had an impact on postbias steering by comparing pre- and posttest session performance measures.
RESULTS: Doubling or halving inertial cues had little effect on steering performance. Inertial information disrupted steering only when it was reversed with respect to visual information. Over time, the influence of this extreme inertial bias was reduced though not eliminated. Postbias curve navigation performance was not impacted, likely because participants had learned to disregard, rather than integrate, biased inertial cues.
CONCLUSION: Results suggest that biased inertial information has little influence on curve navigation performance when visual information is available.
APPLICATION: Though inertial cues may be important for open-loop steering when visual cues are unavailable, their role in closed-loop steering seems less influential. This has implications for driving simulation and suggests that inertial discrepancies due to limitations in motion-cuing capabilities may not be all that problematic for the simulation of closed-loop curve steering tasks.
19
Zhang Y, Li S, Jiang D, Chen A. Response Properties of Interneurons and Pyramidal Neurons in Macaque MSTd and VPS Areas During Self-Motion. Front Neural Circuits 2018; 12:105. PMID: 30532695; PMCID: PMC6265351; DOI: 10.3389/fncir.2018.00105.
Abstract
To perceive self-motion, the brain needs to integrate multi-modal sensory signals such as visual, vestibular, and proprioceptive cues. Self-motion perception is very complex and involves multiple candidate areas. Previous studies of self-motion perception during passive motion have revealed that some of these areas show selective responses to different directions for both visual (optic flow) and vestibular stimuli, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the visual posterior sylvian (VPS) area, although MSTd is dominated by visual signals and VPS by vestibular signals. However, no study of self-motion perception has distinguished the different neuron types with distinct neuronal properties in cortical microcircuitry, which has limited our understanding of the local circuits for self-motion perception. In the current study, we classified the recorded MSTd and VPS neurons into putative pyramidal neurons and putative interneurons based on extracellular action potential waveforms and spontaneous firing rates. We found that: (1) the putative interneurons exhibited markedly broader direction tuning than the putative pyramidal neurons in response to their dominant (visual for MSTd; vestibular for VPS) stimulation type; (2) in both the visual and vestibular conditions, the putative interneurons were more responsive, but with larger variability, than the putative pyramidal neurons in both MSTd and VPS; and (3) the timing of vestibular and visual peak directional tuning was earlier in the putative interneurons than in the putative pyramidal neurons in both MSTd and VPS. Based on these findings, we speculate that, within the microcircuitry, several adjacent putative interneurons with broad direction tuning receive early, strong but variable signals, which might act as feedforward input shaping the direction tuning of a target putative pyramidal neuron, while each interneuron may participate in several microcircuits targeting different output neurons.
Affiliation(s)
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
20
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory, and the traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otoliths and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals such as visual optic flow to facilitate natural navigation. These behavioral results are consistent with recent findings from neurophysiological studies. In particular, vestibular activity in response to translation or rotation of the head/body in darkness has been revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Different temporal dynamic signals may be decoded by higher-level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, the otoliths and canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. These convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions, including the hippocampal-entorhinal system, posterior parietal cortex, and frontal cortex.
Affiliation(s)
- Zhixian Cheng
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
- Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
21
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. DOI: 10.1016/j.conb.2018.04.004.
22
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. PMID: 29902253; PMCID: PMC6002115; DOI: 10.1371/journal.pone.0199097.
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates; thus during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of the stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments varied only the visual stimulus reliability, and it is unclear what occurs when inertial reliability is varied. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite the gaze direction. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In two subjects the empirical weights were near optimal, while in the remaining three the inertial stimuli were weighted more than optimal predictions predicted. On average the inertial stimulus was weighted more than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
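The comparison of empirical and optimal weights in this abstract follows the standard maximum-likelihood (inverse-variance) cue-combination rule. A minimal sketch of that calculation, using the inertial thresholds quoted above plus an assumed visual threshold (the 6.0° value is illustrative, not from the paper):

```python
def optimal_weights(sigma_vis, sigma_inertial):
    """Inverse-variance (maximum-likelihood) cue weights.

    The more reliable cue (smaller discrimination threshold sigma)
    receives the larger weight; the two weights sum to 1.
    """
    w_vis = sigma_inertial ** 2 / (sigma_vis ** 2 + sigma_inertial ** 2)
    return w_vis, 1.0 - w_vis

def empirical_inertial_weight(pse_combined, bias_vis, bias_inertial):
    """Recover the inertial weight implied by a measured combined bias,
    assuming the combined percept is a weighted average of the two
    single-cue biases: pse = w_i * bias_i + (1 - w_i) * bias_v."""
    return (pse_combined - bias_vis) / (bias_inertial - bias_vis)

# Illustrative values: vibration raised the inertial threshold from
# 4.8 to 8.8 deg (from the abstract); sigma_vis = 6.0 deg is assumed.
w_vis, w_in = optimal_weights(sigma_vis=6.0, sigma_inertial=8.8)
```

Weighting the inertial cue "greater than predicted", as the abstract reports, corresponds to the empirical weight from the second function exceeding `w_in` from the first.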
23
Standing Postural Control in Individuals with Autism Spectrum Disorder: Systematic Review and Meta-analysis. J Autism Dev Disord 2018; 47:2238-2253. PMID: 28508177; DOI: 10.1007/s10803-017-3144-y.
Abstract
Impairments in postural control affect the development of motor and social skills in individuals with autism spectrum disorder (ASD). This review compared the effect of different sensory conditions on static standing postural control between ASD and neurotypical individuals. Results from 19 studies indicated a large difference in postural control between groups across all sensory conditions. This review revealed sensorimotor and multiple sensory processing deficits in ASD. The tendency for individuals with ASD to be more susceptible to postural instability with use of visual information compared with somatosensory information suggests perinatal alterations in sensory development. There is further scope for studies on the use of sensory information and postural control to provide additional evidence about sensorimotor processing in ASD.
24
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. PMID: 29134944; PMCID: PMC5685470; DOI: 10.7554/eLife.29809.
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted more towards head coordinates than in MSTd. These results are robust, being largely independent of: (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) the behavioral context of active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
25
The vestibulocochlear bases for wartime posttraumatic stress disorder manifestations. Med Hypotheses 2017; 106:44-56. DOI: 10.1016/j.mehy.2017.06.027.
26
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. PMID: 28615328; DOI: 10.1152/jn.00037.2017.
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye-centered while that of inertial stimuli is head-centered, and it remains unclear how these are reconciled for combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus, eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shifted 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of the stimuli, such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to that of the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems.
NEW & NOTEWORTHY: In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. These experiments address the question using a multisensory integration task with eccentric gaze positions, which makes the effect of the coordinate systems explicit. The results indicate that the coordinate systems remain separate up to the perceptual level and that, during the multisensory task, perception depends on relative stimulus reliability.
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York
27
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. DOI: 10.1163/22134808-00002568.
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
28
Evidence for a Causal Contribution of Macaque Vestibular, But Not Intraparietal, Cortex to Heading Perception. J Neurosci 2016; 36:3789-98. PMID: 27030763; DOI: 10.1523/JNEUROSCI.2485-15.2016.
Abstract
Multisensory convergence of visual and vestibular signals has been observed within a network of cortical areas involved in representing heading. Vestibular-dominant heading tuning has been found in the macaque parietoinsular vestibular cortex (PIVC) and the adjacent visual posterior sylvian (VPS) area, whereas relatively balanced visual/vestibular tuning was encountered in the ventral intraparietal (VIP) area and visual-dominant tuning was found in the dorsal medial superior temporal (MSTd) area. Although the respective functional roles of these areas remain unclear, perceptual deficits in heading discrimination following reversible chemical inactivation of area MSTd suggested that areas with vestibular-dominant heading tuning might also contribute to behavior. To explore the roles of other areas in heading perception, muscimol injections were used to reversibly inactivate either the PIVC or the VIP area bilaterally in macaques. Inactivation of the anterior PIVC increased psychophysical thresholds when heading judgments were based on either optic flow or vestibular cues, although effects were stronger for vestibular stimuli. All behavioral deficits recovered within 36 h. Visual deficits were larger following inactivation of the posterior portion of the PIVC, likely because these injections encroached upon the VPS area, which (unlike the PIVC) contains neurons with optic flow tuning. In contrast, VIP inactivation led to no behavioral deficits, despite the fact that VIP neurons show much stronger choice-related activity than MSTd neurons. These results suggest that the VIP area either provides a parallel and partially redundant pathway for this task or does not participate in heading discrimination. In contrast, the PIVC/VPS, along with the MSTd area, make causal contributions to heading perception based on either vestibular or visual signals.
SIGNIFICANCE STATEMENT: Multisensory vestibular and visual signals are found in multiple cortical areas, but their causal contribution to self-motion perception had previously been tested only in the dorsal medial superior temporal (MSTd) area. In these experiments, we show that inactivation of the parietoinsular vestibular cortex (PIVC) also results in deficits during heading discrimination for both visual and vestibular cues. In contrast, ventral intraparietal (VIP) area inactivation led to no behavioral deficits, despite the fact that VIP neurons show much stronger choice-related activity than MSTd or PIVC neurons. These results demonstrate that choice-related activity does not always imply a causal role in sensory perception.
29
Nooij SAE, Nesti A, Bülthoff HH, Pretto P. Perception of rotation, path, and heading in circular trajectories. Exp Brain Res 2016; 234:2323-37. PMID: 27056085; PMCID: PMC4923114; DOI: 10.1007/s00221-016-4638-0.
Abstract
When in darkness, humans can perceive the direction and magnitude of rotations and of linear translations in the horizontal plane. The current paper addresses the integrated perception of combined translational and rotational motion, as occurs when moving along a curved trajectory. We asked whether the perceived motion through the environment follows the predictions of self-motion perception models (e.g., Merfeld et al. in J Vestib Res 3:141-161, 1993; Newman in A multisensory observer model for human spatial orientation perception, 2009), which assume linear addition of rotational and translational components. For curved motion in darkness, such models predict a non-veridical motion percept, consisting of an underestimation of the perceived rotation, a distortion of the perceived travelled path, and a bias in the perceived heading (i.e., the perceived instantaneous direction of motion with respect to the body). These model predictions were evaluated in two experiments. In Experiment 1, seven participants were moved along a circular trajectory in darkness while facing the motion direction. They indicated perceived yaw rotation with an online tracking task and perceived travelled path by drawings. In Experiment 2, the heading was systematically varied, and six participants indicated, in a two-alternative forced-choice task, whether they perceived themselves as facing inward or outward of the circular path. Overall, we found no evidence for the heading bias predicted by the models. This suggests that the sum of the perceived rotational and translational components alone cannot adequately explain the overall perceived motion through the environment. Possibly, knowledge about motion dynamics and familiar stimulus combinations plays an important additional role in shaping the percept.
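The path distortion that linear-addition models predict can be illustrated with a toy integration: if perceived linear velocity is veridical but perceived yaw velocity is the true yaw rate scaled by a gain below one (underestimated rotation), the integrated percept is a circle of larger radius than the actual path. This is only a sketch of the model class discussed above, not the authors' implementation; the 0.7 gain is an arbitrary assumption:

```python
import math

def perceived_path(radius, speed, yaw_gain, dt=0.001, n_steps=10000):
    """Integrate the percept for travel along a circle while facing the
    direction of motion. Linear velocity is taken as veridical; perceived
    yaw velocity is the true yaw rate times yaw_gain (< 1 models the
    underestimated rotation predicted for curved motion in darkness)."""
    true_yaw_rate = speed / radius          # rad/s for circular motion
    heading = 0.0
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        heading += yaw_gain * true_yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path

# With yaw_gain < 1 the perceived trajectory traces a circle of radius
# radius / yaw_gain: a flattened, less curved version of the true path.
path = perceived_path(radius=2.0, speed=1.0, yaw_gain=0.7)
```

Because the participant always faces the motion direction, the body-frame heading stays straight ahead in this sketch; the distortion shows up entirely in the drawn path, matching the "distorted travelled path" prediction described above.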
Affiliation(s)
- Suzanne A E Nooij
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Alessandro Nesti
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Paolo Pretto
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
30
Abstract
Cortical areas such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning at different eye positions and found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception.
SIGNIFICANCE STATEMENT: To understand how we successfully navigate our world, it is important to understand which parts of the brain process the cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that, unlike other cortical areas, V6 is unlikely to contribute to multisensory integration of heading signals. These findings provide important constraints on the roles of V6 in self-motion perception.
31
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. PMID: 26446214; DOI: 10.1523/JNEUROSCI.2267-15.2015.
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion.
SIGNIFICANCE STATEMENT: Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
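"Optimal cue integration" in this context refers to the maximum-likelihood prediction that combining cues lowers the discrimination threshold below either single-cue threshold. A short sketch of the predicted combined threshold (the numeric thresholds here are made up for illustration, not taken from the study):

```python
import math

def combined_threshold(sigma_vis, sigma_vest):
    """Maximum-likelihood prediction for the visual-vestibular heading
    threshold: the single-cue variances combine 'in parallel', so the
    combined threshold never exceeds the smaller single-cue threshold."""
    return math.sqrt((sigma_vis ** 2 * sigma_vest ** 2) /
                     (sigma_vis ** 2 + sigma_vest ** 2))

# Illustrative single-cue thresholds (deg): visual 3.0, vestibular 4.0.
predicted = combined_threshold(3.0, 4.0)  # -> 2.4 deg
```

Measured combined thresholds at or near this prediction are the usual evidence that the two cues are being integrated rather than one being ignored.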
32
Human discrimination of head-centred visual-inertial yaw rotations. Exp Brain Res 2015; 233:3553-64. PMID: 26319547; PMCID: PMC4646930; DOI: 10.1007/s00221-015-4426-2.
Abstract
To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only or visual-only motion cues is not constant but decreases with motion intensity. However, these results do not yet allow a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual-inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensities. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optic flow with limited-lifetime dots). Participants discriminated a reference motion, repeated unchanged on every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants' differential threshold, i.e. the smallest perceivable change in stimulus intensity. Six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results were combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend well described by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual-inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend significantly on the type of sensory input, suggesting that combining visual and inertial stimuli does not improve discrimination performance over the investigated range of yaw rotations.
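The power functions reported above have the form ΔI = k·I^a, and the exponent can be recovered from threshold measurements by linear regression in log-log coordinates. A small sketch using synthetic data generated with the abstract's inertial exponent (0.36); the coefficient k = 2.0 is an arbitrary assumption:

```python
import math

def fit_power_law(intensities, thresholds):
    """Least-squares fit of dI = k * I**a via linear regression on
    log(dI) = log(k) + a * log(I); returns (k, a)."""
    xs = [math.log(i) for i in intensities]
    ys = [math.log(t) for t in thresholds]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - a * mx)
    return k, a

# Synthetic differential thresholds at the four reference velocities
# used in the study (deg/s), generated from an exact power law.
velocities = [15, 30, 45, 60]
thresholds = [2.0 * v ** 0.36 for v in velocities]
k, a = fit_power_law(velocities, thresholds)
```

An exponent below 1 corresponds to sub-Weber behavior: thresholds grow with intensity, but more slowly than strict proportionality would predict.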
33
Palmisano S, Allison RS, Schira MM, Barry RJ. Future challenges for vection research: definitions, functional significance, measures, and neural bases. Front Psychol 2015; 6:193. [PMID: 25774143] [PMCID: PMC4342884] [DOI: 10.3389/fpsyg.2015.00193]
Abstract
This paper discusses four major challenges facing modern vection research. Challenge 1 (Defining Vection) outlines the different ways that vection has been defined in the literature and discusses their theoretical and experimental ramifications. The term vection is most often used to refer to visual illusions of self-motion induced in stationary observers (by moving, or simulating the motion of, the surrounding environment). However, vection is increasingly being used to also refer to non-visual illusions of self-motion, visually mediated self-motion perceptions, and even general subjective experiences (i.e., “feelings”) of self-motion. The common thread in all of these definitions is the conscious subjective experience of self-motion. Thus, Challenge 2 (Significance of Vection) tackles the crucial issue of whether such conscious experiences actually serve functional roles during self-motion (e.g., in terms of controlling or guiding the self-motion). After more than 100 years of vection research there has been surprisingly little investigation into its functional significance. Challenge 3 (Vection Measures) discusses the difficulties with existing subjective self-report measures of vection (particularly in the context of contemporary research), and proposes several more objective measures of vection based on recent empirical findings. Finally, Challenge 4 (Neural Basis) reviews the recent neuroimaging literature examining the neural basis of vection and discusses the hurdles still facing these investigations.
Affiliation(s)
- Stephen Palmisano, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Robert S Allison, Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Mark M Schira, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Robert J Barry, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
34
Butler JS, Campos JL, Bülthoff HH. Optimal visual–vestibular integration under conditions of conflicting intersensory motion profiles. Exp Brain Res 2014; 233:587-97. [DOI: 10.1007/s00221-014-4136-1]
35
Enkhjargal N, Matsumoto J, Chinzorig C, Berthoz A, Ono T, Nishijo H. Rat thalamic neurons encode complex combinations of heading and movement directions and the trajectory route during translocation with sensory conflict. Front Behav Neurosci 2014; 8:242. [PMID: 25100955] [PMCID: PMC4104644] [DOI: 10.3389/fnbeh.2014.00242]
Abstract
It is unknown how thalamic head direction neurons extract meaningful information from multiple conflicting sensory information sources when animals run under conditions of sensory mismatch. In the present study, rats were placed on a treadmill on a stage that moved in a figure-8-shaped pathway. The anterodorsal and laterodorsal neurons were recorded under two conditions: (1) control sessions, in which both the stage and the treadmill moved forward, or (2) backward (mismatch) sessions, in which the stage was moved backward while the rats ran forward on the treadmill. Of the 222 thalamic neurons recorded, 55 showed differential responses to the directions to window (south) and door (north) sides, along which the animals were translocated in the long axis of the trajectory. Of these 55 direction-related neurons, 15 showed heading direction-dependent responses regardless of movement direction (forward or backward movements). Thirteen neurons displayed heading and movement direction-dependent responses, and, of these 13, activity of 6 neurons increased during forward movement to the window or door side, while activity of the remaining 7 neurons increased during backward movement to the window or door side. Eighteen neurons showed movement direction-related responses regardless of heading direction. Furthermore, activity of some direction-related neurons increased only in a specific trajectory. These results suggested that the activity of these neurons reflects complex combinations of facing direction (landmarks), movement direction (optic flow/vestibular information), motor/proprioceptive information, and the trajectory of the movement.
Affiliation(s)
- Nyamdavaa Enkhjargal, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Jumpei Matsumoto, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Choijiljav Chinzorig, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Alain Berthoz, Center for Interdisciplinary Research in Biology, Collège de France, Paris, France
- Taketoshi Ono, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hisao Nishijo, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
36
Li L, Niehorster DC. Influence of optic flow on the control of heading and target egocentric direction during steering toward a goal. J Neurophysiol 2014; 112:766-77. [PMID: 25128559] [DOI: 10.1152/jn.00697.2013]
Abstract
Although previous studies have shown that people use both optic flow and target egocentric direction to walk or steer toward a goal, it remains in question how enriching the optic flow field affects the control of heading specified by optic flow and the control of target egocentric direction during goal-oriented locomotion. In the current study, we used a control-theoretic approach to separate the control response specific to these two cues in the visual control of steering toward a goal. The results showed that the addition of optic flow information (such as foreground motion and global flow) in the display improved the overall control precision, the amplitude, and the response delay of the control of heading. The amplitude and the response delay of the control of target egocentric direction were, however, not affected. The improvement in the control of heading with enriched optic flow displays was mirrored by an increase in the accuracy of heading perception. The findings provide direct support for the claim that people use the heading specified by optic flow as well as target egocentric direction to walk or steer toward a goal and suggest that the visual system does not internally weigh these two cues for goal-oriented locomotion control.
Affiliation(s)
- Li Li, Department of Psychology, The University of Hong Kong, Hong Kong, Special Administrative Region of the People's Republic of China
- Diederick C Niehorster, Department of Psychology, The University of Hong Kong, Hong Kong, Special Administrative Region of the People's Republic of China
37
Saunders JA. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking. J Vis 2014; 14:24. [PMID: 24648194] [DOI: 10.1167/14.3.24]
Abstract
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°-2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%-34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures.
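The optimal-integration arithmetic implied by these numbers can be sketched directly. The 2.4 deg and 1.9-2.1 deg figures are the RMS errors quoted above; the function names and the 1.2 deg example bias are illustrative assumptions.

```python
def implied_visual_weight(sigma_nonvisual, sigma_combined):
    """Visual weight implied by optimal integration of a visual and a
    nonvisual heading cue: 1/var_c = 1/var_v + 1/var_nv implies
    w_v = 1 - var_c / var_nv."""
    return 1.0 - (sigma_combined / sigma_nonvisual) ** 2

def weight_from_conflict(conflict_deg, bias_deg):
    """Empirical cue weight inferred from the walking bias produced by a
    cue conflict of a given size."""
    return bias_deg / conflict_deg

# RMS errors quoted above: 2.4 deg without optic flow, 1.9-2.1 deg with it.
w_dense = implied_visual_weight(2.4, 1.9)   # dense motion parallax
w_sparse = implied_visual_weight(2.4, 2.1)
assert 0.30 < w_dense < 0.40
assert 0.20 < w_sparse < 0.30

# A 5 deg conflict shifting walking direction by 1.2 deg implies 24% visual weight.
assert abs(weight_from_conflict(5.0, 1.2) - 0.24) < 1e-9
```

The implied optimal visual weights (roughly 23-37%) bracket the empirically measured 16-34% influence, which is the sense in which the weighting is "close to optimal".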
Affiliation(s)
- Jeffrey A Saunders, Department of Psychology, University of Hong Kong, Hong Kong SAR
38
Abstract
Identifying the neural mechanisms underlying spatial orientation and navigation has long posed a challenge for researchers. Multiple approaches incorporating a variety of techniques and animal models have been used to address this issue. More recently, virtual navigation has become a popular tool for understanding navigational processes. Although combining this technique with functional imaging can provide important information on many aspects of spatial navigation, it is important to recognize some of the limitations these techniques have for gaining a complete understanding of the neural mechanisms of navigation. Foremost among these is that, when participants perform a virtual navigation task in a scanner, they are lying motionless in a supine position while viewing a video monitor. Here, we provide evidence that spatial orientation and navigation rely to a large extent on locomotion and its accompanying activation of motor, vestibular, and proprioceptive systems. Researchers should therefore consider the impact of the absence of these motion-based systems when interpreting virtual navigation/functional imaging experiments to achieve a more accurate understanding of the mechanisms underlying navigation.
39
Guidetti G. The role of cognitive processes in vestibular disorders. Hearing, Balance and Communication 2013. [DOI: 10.3109/21695717.2013.765085]
40
Cuturi LF, MacNeilage PR. Systematic biases in human heading estimation. PLoS One 2013; 8:e56862. [PMID: 23457631] [PMCID: PMC3574054] [DOI: 10.1371/journal.pone.0056862]
Abstract
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
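The population-vector account of the lateral bias can be sketched with cosine-tuned units. The tuning model, population sizes, and lateral-cluster parameters here are illustrative assumptions, not the paper's actual decoder.

```python
import numpy as np

def population_vector_decode(heading_deg, preferred_deg):
    """Decode heading as the direction of the response-weighted vector sum
    of preferred directions, assuming cosine tuning with a baseline."""
    h = np.radians(heading_deg)
    prefs = np.radians(np.asarray(preferred_deg))
    rates = 1.0 + np.cos(h - prefs)
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.degrees(np.arctan2(y, x))

rng = np.random.default_rng(0)
# A uniform population decodes without bias...
uniform = np.linspace(-180.0, 180.0, 360, endpoint=False)
# ...but adding extra units preferring lateral (+/-90 deg) directions,
# mimicking the overrepresentation seen in MSTd and otolith afferents,
# pushes forward headings away from straight ahead:
lateral = np.concatenate(
    [uniform, rng.normal(90.0, 15.0, 200), rng.normal(-90.0, 15.0, 200)]
)

true_heading = 20.0  # 20 deg to the right of straight ahead
est_uniform = population_vector_decode(true_heading, uniform)
est_lateral = population_vector_decode(true_heading, lateral)
assert abs(est_uniform - true_heading) < 1.0
assert est_lateral > true_heading + 5.0  # lateral overestimation, as observed
```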
Affiliation(s)
- Luigi F. Cuturi, German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians University, Munich, Germany
- Paul R. MacNeilage, German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany
41
Do walkers follow their heads? Investigating the role of head rotation in locomotor control. Exp Brain Res 2012; 219:175-90. [PMID: 22466410] [DOI: 10.1007/s00221-012-3077-9]
Abstract
Eye and head rotations are normally correlated with changes in walking direction; however, it is unknown whether they play a causal role in the control of steering. The objective of the present study was to answer two questions about the role of head rotations in steering control when walking to a goal. First, are head rotations sufficient to elicit a change in walking direction? Second, are head rotations necessary to initiate a change in walking direction or guide steering to a goal? To answer these questions, participants either walked toward a goal located 7 m away or were cued to steer to the left or right by 37°. On a subset of trials, participants were either cued to voluntarily turn their heads to the left or right, or they underwent an involuntary head perturbation via a head-mounted air jet. The results showed that large voluntary head turns (35°) yielded slight path deviations (1°-2°) in the same or opposite direction as the head turn, depending on conditions, which have alternative explanations. Involuntary head rotations did not elicit path deviations despite comparable head rotation magnitudes. In addition, the walking trajectory when turning toward an eccentric goal was the same regardless of head orientation. Steering can thus be decoupled from head rotation during walking. We conclude that head rotations are neither a sufficient nor a necessary component of steering control, because they do not induce a turn and they are not required to initiate a turn or to guide the locomotor trajectory to a goal.
42
Nolan H, Butler JS, Whelan R, Foxe JJ, Bülthoff HH, Reilly RB. Neural correlates of oddball detection in self-motion heading: a high-density event-related potential study of vestibular integration. Exp Brain Res 2012; 219:1-11. [PMID: 22434342] [DOI: 10.1007/s00221-012-3059-y]
Abstract
The perception of self-motion is a product of the integration of information from both visual and non-visual cues, to which the vestibular system is a central contributor. It is well documented that vestibular dysfunction leads to impaired movement and balance, dizziness and falls, and yet our knowledge of the neuronal processing of vestibular signals remains relatively sparse. In this study, high-density electroencephalographic recordings were deployed to investigate the neural processes associated with vestibular detection of changes in heading. To this end, a self-motion oddball paradigm was designed. Participants were translated linearly 7.8 cm on a motion platform using a one second motion profile, at a 45° angle leftward or rightward of straight ahead. These headings were presented with a stimulus probability of 80-20 %. Participants responded when they detected the infrequent direction change via button-press. Event-related potentials (ERPs) were calculated in response to the standard (80 %) and target (20 %) movement directions. Statistical parametric mapping showed that ERPs to standard and target movements differed significantly from 490 to 950 ms post-stimulus. Topographic analysis showed that this difference had a typical P3 topography. Individual participant bootstrap analysis revealed that 93.3 % of participants exhibited a clear P3 component. These results indicate that a perceived change in vestibular heading can readily elicit a P3 response, wholly similar to that evoked by oddball stimuli presented in other sensory modalities. This vestibular-evoked P3 response may provide a readily and robustly detectable objective measure for the evaluation of vestibular integrity in various disease models.
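The ERP computation at the heart of this paradigm, epoch averaging followed by a standard-vs-target difference wave, can be sketched on synthetic data. The sampling rate, trial counts, and simulated P3 shape below are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def erp_average(epochs):
    """Average event-locked epochs (trials x samples) into an ERP."""
    return np.mean(epochs, axis=0)

rng = np.random.default_rng(1)
fs, n_samples = 250, 300                 # 250 Hz sampling, 1.2 s epochs
t = np.arange(n_samples) / fs
p3 = np.exp(-((t - 0.6) ** 2) / (2 * 0.1 ** 2))  # P3-like bump near 600 ms

# 80/20 standard/target design, as in the oddball paradigm above:
standards = rng.normal(0.0, 1.0, (160, n_samples))
targets = rng.normal(0.0, 1.0, (40, n_samples)) + 5.0 * p3

difference_wave = erp_average(targets) - erp_average(standards)
peak_latency_ms = 1000.0 * t[np.argmax(difference_wave)]
# The simulated target-minus-standard difference peaks inside the
# 490-950 ms window reported in the abstract:
assert 450.0 < peak_latency_ms < 750.0
```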
Affiliation(s)
- H Nolan, The Trinity Centre for Bioengineering, Trinity College Dublin, Dublin, Ireland
43
Yoder RM, Clark BJ, Taube JS. Origins of landmark encoding in the brain. Trends Neurosci 2011; 34:561-71. [PMID: 21982585] [PMCID: PMC3200508] [DOI: 10.1016/j.tins.2011.08.004]
Abstract
The ability to perceive one's position and directional heading relative to landmarks is necessary for successful navigation within an environment. Recent studies have shown that the visual system dominantly controls the neural representations of directional heading and location when familiar visual cues are available, and several neural circuits, or streams, have been proposed to be crucial for visual information processing. Here, we summarize the evidence that the dorsal presubiculum (also known as the postsubiculum) is critically important for the direct transfer of visual landmark information to spatial signals within the limbic system.
Affiliation(s)
- Jeffrey S. Taube, Department of Psychological and Brain Sciences, Center for Cognitive Neuroscience, Dartmouth College
44
DeAngelis G, Angelaki D. Visual–Vestibular Integration for Self-Motion Perception. Front Neurosci 2011. [DOI: 10.1201/b11092-39]
45
46
Using an evolutionary algorithm to determine the parameters of a biologically inspired model of head direction cells. J Comput Neurosci 2011; 32:281-95. [PMID: 21785973] [DOI: 10.1007/s10827-011-0352-x]
Abstract
A biologically inspired model of head direction cells is presented and tested on a small mobile robot. Head direction cells (discovered in the brain of rats in 1984) encode the head orientation of their host irrespective of the host's location in the environment. The head direction system thus acts as a biological compass (though not a magnetic one) for its host. Head direction cells are influenced in different ways by idiothetic (host-centred) and allothetic (not host-centred) cues. The model presented here uses the visual, vestibular and kinesthetic inputs that are simulated by robot sensors. Real robot-sensor data has been used in order to train the model's artificial neural network connections. The main contribution of this paper lies in the use of an evolutionary algorithm in order to determine the values of parameters that determine the behaviour of the model. More importantly, the objective function of the evolutionary strategy used takes into consideration quantitative biological observations reported in the literature.
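The general approach can be sketched with a toy (1+λ)-style evolution strategy that tunes parameters against an objective function. The annealing schedule and the quadratic objective below are assumptions standing in for the paper's biologically derived objective, which scores the model against quantitative head-direction-cell observations.

```python
import numpy as np

def evolve(objective, x0, sigma=0.5, pop=20, generations=60, seed=0):
    """Minimal (1+lambda)-style evolution strategy: mutate the best-so-far
    parameter vector with Gaussian noise, keep improvements, and slowly
    anneal the mutation width."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float)
    best_fit = objective(best)
    for _ in range(generations):
        for _ in range(pop):
            candidate = best + rng.normal(0.0, sigma, best.shape)
            fit = objective(candidate)
            if fit < best_fit:
                best, best_fit = candidate, fit
        sigma *= 0.95
    return best, best_fit

# Toy quadratic stand-in for the model-fit objective (distance of simulated
# head-direction-cell statistics from target values); minimum at (1, -2).
target = np.array([1.0, -2.0])
objective = lambda x: float(np.sum((np.asarray(x) - target) ** 2))

best, best_fit = evolve(objective, x0=[5.0, 5.0])
assert best_fit < 0.05
assert np.allclose(best, target, atol=0.3)
```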
47
Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proc Natl Acad Sci U S A 2011; 108:1152-7. [PMID: 21199934] [DOI: 10.1073/pnas.1011843108]
Abstract
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
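The single-multimodal-representation idea can be sketched as a fixed-weight fusion of visual and interoceptive rotation estimates. The weight w_visual = 0.6 and the 1.5x gain are illustrative assumptions; the study's quantitative model is more detailed than this sketch.

```python
def perceived_rotation(physical_deg, visual_gain, w_visual):
    """Single-multimodal-representation model: visual and interoceptive
    rotation estimates are fused into one percept with fixed weights."""
    visual = visual_gain * physical_deg      # what the projection displays
    interoceptive = physical_deg             # vestibular/proprioceptive/efference
    return w_visual * visual + (1.0 - w_visual) * interoceptive

# With no gain manipulation the fused percept is veridical for any weighting:
assert abs(perceived_rotation(90.0, 1.0, 0.6) - 90.0) < 1e-9

# A 1.5x visual rotation gain pulls the percept toward the visual cue:
felt = perceived_rotation(90.0, 1.5, 0.6)
assert abs(felt - 117.0) < 1e-9

# The physical turn that "feels like" 90 deg under the manipulated mapping
# predicts the systematic return errors later measured in darkness:
physical_needed = 90.0 / (0.6 * 1.5 + 0.4)
assert abs(perceived_rotation(physical_needed, 1.5, 0.6) - 90.0) < 1e-9
```

The alternative "separate influences" model would instead predict veridical path integration in darkness, since no visual input is available then; the data above favor the fused representation.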
48
Fetsch CR, Rajguru SM, Karunaratne A, Gu Y, Angelaki DE, DeAngelis GC. Spatiotemporal properties of vestibular responses in area MSTd. J Neurophysiol 2010; 104:1506-22. [PMID: 20631212] [PMCID: PMC2944682] [DOI: 10.1152/jn.91247.2008]
Abstract
Recent studies have shown that many neurons in the primate dorsal medial superior temporal area (MSTd) show spatial tuning during inertial motion and that these responses are vestibular in origin. Given their well-studied role in processing visual self-motion cues (i.e., optic flow), these neurons may be involved in the integration of visual and vestibular signals to facilitate robust perception of self-motion. However, the temporal structure of vestibular responses in MSTd has not been characterized in detail. Specifically, it is not known whether MSTd neurons encode velocity, acceleration, or some combination of motion parameters not explicitly encoded by vestibular afferents. In this study, we have applied a frequency-domain analysis to single-unit responses during translation in three dimensions (3D). The analysis quantifies the stimulus-driven temporal modulation of each response as well as the degree to which this modulation reflects the velocity and/or acceleration profile of the stimulus. We show that MSTd neurons signal a combination of velocity and acceleration components with the velocity component being stronger for most neurons. These two components can exist both within and across motion directions, although their spatial tuning did not show a systematic relationship across the population. From these results, vestibular responses in MSTd appear to show characteristic features of spatiotemporal convergence, similar to previous findings in the brain stem and thalamus. The predominance of velocity encoding in this region may reflect the suitability of these signals to be integrated with visual signals regarding self-motion perception.
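The decomposition into velocity and acceleration components can be sketched as a least-squares regression of a response onto the two stimulus profiles. Note this time-domain regression is a simplified stand-in for the paper's frequency-domain analysis, and all coefficients below are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 400)                           # 2 s translation
velocity = np.exp(-((t - 1.0) ** 2) / (2 * 0.25 ** 2))   # Gaussian velocity profile
acceleration = np.gradient(velocity, t)

def fit_components(response, velocity, acceleration):
    """Least-squares weights of the velocity and acceleration profiles
    in a measured response."""
    X = np.column_stack([velocity, acceleration])
    weights, *_ = np.linalg.lstsq(X, response, rcond=None)
    return weights

# Simulated MSTd-like response: a dominant velocity component plus a weaker
# acceleration component (coefficients illustrative), plus noise.
rng = np.random.default_rng(2)
response = 1.8 * velocity + 0.6 * acceleration + rng.normal(0.0, 0.05, t.size)

w_vel, w_acc = fit_components(response, velocity, acceleration)
assert abs(w_vel - 1.8) < 0.1 and abs(w_acc - 0.6) < 0.1
assert w_vel > w_acc  # velocity dominates, as reported for most MSTd neurons
```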
Affiliation(s)
- Christopher R Fetsch, Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri, USA
49
Fetsch CR, DeAngelis GC, Angelaki DE. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory. Eur J Neurosci 2010; 31:1721-9. [PMID: 20584175] [PMCID: PMC3108057] [DOI: 10.1111/j.1460-9568.2010.07207.x]
Abstract
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
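The optimal (maximum-likelihood) integration scheme reviewed here reduces to two standard formulas, sketched below; the example discrimination thresholds are illustrative.

```python
def optimal_combined_sigma(sigma_vis, sigma_vest):
    """Maximum-likelihood integration: the combined estimate is at least
    as precise as the better single cue."""
    var = (sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2)
    return var ** 0.5

def optimal_weights(sigma_vis, sigma_vest):
    """Cue weights proportional to inverse variance (reliability)."""
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    return r_vis / (r_vis + r_vest), r_vest / (r_vis + r_vest)

# Illustrative heading-discrimination thresholds (deg):
sigma_vis, sigma_vest = 2.0, 3.0
w_vis, w_vest = optimal_weights(sigma_vis, sigma_vest)
assert abs(w_vis + w_vest - 1.0) < 1e-12
assert w_vis > w_vest  # the more reliable cue gets more weight
assert optimal_combined_sigma(sigma_vis, sigma_vest) < min(sigma_vis, sigma_vest)
```

These are the predictions tested behaviorally: measure each single-cue threshold, predict the combined threshold and weights, and compare against bimodal performance.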
Affiliation(s)
- Christopher R Fetsch, Department of Anatomy and Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., Box 8108, St. Louis, MO 63110, USA
50
Abstract
The perception of self-motion direction, or heading, relies on integration of multiple sensory cues, especially from the visual and vestibular systems. However, the reliability of sensory information can vary rapidly and unpredictably, and it remains unclear how the brain integrates multiple sensory signals given this dynamic uncertainty. Human psychophysical studies have shown that observers combine cues by weighting them in proportion to their reliability, consistent with statistically optimal integration schemes derived from Bayesian probability theory. Remarkably, because cue reliability is varied randomly across trials, the perceptual weight assigned to each cue must change from trial to trial. Dynamic cue reweighting has not been examined for combinations of visual and vestibular cues, nor has the Bayesian cue integration approach been applied to laboratory animals, an important step toward understanding the neural basis of cue integration. To address these issues, we tested human and monkey subjects in a heading discrimination task involving visual (optic flow) and vestibular (translational motion) cues. The cues were placed in conflict on a subset of trials, and their relative reliability was varied to assess the weights that subjects gave to each cue in their heading judgments. We found that monkeys can rapidly reweight visual and vestibular cues according to their reliability, the first such demonstration in a nonhuman species. However, some monkeys and humans tended to over-weight vestibular cues, inconsistent with simple predictions of a Bayesian model. Nonetheless, our findings establish a robust model system for studying the neural mechanisms of dynamic cue reweighting in multisensory perception.
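Trial-by-trial reweighting under cue conflict follows directly from the reliability-weighting rule; the conflict size and sigmas below are illustrative assumptions, not the study's stimulus values.

```python
def predicted_heading(s_vest, s_vis, sigma_vest, sigma_vis):
    """Reliability-weighted fusion of two conflicting heading cues."""
    w_vis = sigma_vest**2 / (sigma_vest**2 + sigma_vis**2)
    return w_vis * s_vis + (1.0 - w_vis) * s_vest

# Cues displaced 4 deg apart; which one dominates depends on the trial's
# visual reliability (e.g. optic-flow coherence):
high_coherence = predicted_heading(-2.0, +2.0, sigma_vest=3.0, sigma_vis=1.5)
low_coherence = predicted_heading(-2.0, +2.0, sigma_vest=3.0, sigma_vis=6.0)
assert high_coherence > 0.0 > low_coherence  # percept swings between cues
```

The over-weighting of vestibular cues reported for some subjects would appear as an empirical vestibular weight exceeding the 1 - w_vis predicted here.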