1
Kim JJJ, Harris LR. Updating the remembered position of targets following passive lateral translation. PLoS One 2024; 19:e0316469. PMID: 39739643. DOI: 10.1371/journal.pone.0316469.
Abstract
Spatial updating, the ability to track the egocentric position of surrounding objects during self-motion, is fundamental to navigating around the world. However, people make systematic errors when updating the position of objects after linear self-motion. To determine the source of these errors, we measured errors in remembered target position with or without passive lateral translations. Self-motion was presented both visually (simulated in virtual reality) and physically (on a 6-DOF motion platform). People generally underestimated targets' eccentricity, even when merely asked to remember them for a few seconds (5-7 s), with larger underestimations for more eccentric targets. We hypothesized that updating errors would depend on target eccentricity; in fact, errors depended not only on target eccentricity but also on the observer's movement range. When updating the position of targets within the range of movement (such that their actual locations crossed the viewer's midline), people overestimated the targets' change in position relative to their head/body compared with when judging the location of objects that were outside the range of movement and therefore did not cross the midline. We interpret these results as revealing changes in the efficacy of spatial updating that depend on participants' perception of self-motion and on the perceptual consequences of having to reconstruct, in the opposite hemifield, targets initially represented in one half of the visual field.
Affiliation(s)
- John J J Kim
- Department of Psychology, York University, Toronto, Ontario, Canada
2
Evans L, Champion RA, Rushton SK, Montaldi D, Warren PA. Detection of scene-relative object movement and optic flow parsing across the adult lifespan. J Vis 2020; 20:12. PMID: 32945848. PMCID: PMC7509779. DOI: 10.1167/jov.20.9.12.
Abstract
Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or from our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing. This mechanism acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20–76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.
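The flow parsing mechanism described in this abstract amounts to a vector subtraction: the retinal flow predicted from self-movement is removed globally, and the residual is attributed to object movement in the scene. A minimal sketch under that assumption (all names and the `gain` parameter are ours, not from the paper; `gain` stands in for the magnitude of global flow subtraction the authors measured):

```python
import numpy as np

def parse_flow(retinal_motion, self_motion_flow, gain=1.0):
    """Estimate scene-relative object motion by globally subtracting
    the optic-flow component attributable to self-movement.

    retinal_motion:   (N, 2) array of measured retinal velocities
    self_motion_flow: (N, 2) array of flow predicted from self-motion
    gain:             subtraction gain (1.0 = complete parsing)
    """
    return retinal_motion - gain * self_motion_flow

# A probe that is stationary in the scene moves on the retina only
# because of self-motion; full subtraction recovers zero scene motion.
probe_retinal = np.array([[2.0, 0.5]])    # deg/s on the retina
predicted_flow = np.array([[2.0, 0.5]])   # flow due to self-motion
print(parse_flow(probe_retinal, predicted_flow))  # [[0. 0.]]
```

With `gain` below 1.0 some self-motion flow survives the subtraction, which is one way to express the age-related change in subtraction magnitude reported in Experiment 2.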
Affiliation(s)
- Lucy Evans
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Rebecca A Champion
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Daniela Montaldi
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Paul A Warren
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
3
Mackrous I, Carriot J, Simoneau M. Learning to use vestibular sense for spatial updating is context dependent. Sci Rep 2019; 9:11154. PMID: 31371770. PMCID: PMC6671975. DOI: 10.1038/s41598-019-47675-7.
Abstract
As we move, perceptual stability is crucial to successfully interact with our environment. Notably, the brain must update the locations of objects in space using extra-retinal signals. The vestibular system is a strong candidate as a source of information for spatial updating as it senses head motion. The ability to use this cue is not innate but must be learned. To date, the mechanisms of vestibular spatial updating generalization are unknown or at least controversial. In this paper we examine generalization patterns within and between different conditions of vestibular spatial updating. Participants were asked to update the position of a remembered target following (offline) or during (online) passive body rotation. After being trained on a single spatial target position within a given task, we tested generalization of performance for different spatial targets and an unpracticed spatial updating task. The results demonstrated different patterns of generalization across the workspace depending on the task. Further, no transfer was observed from the practiced to the unpracticed task. We found that the type of mechanism involved during learning governs generalization. These findings provide new knowledge about how the brain uses vestibular information to preserve its spatial updating ability.
Affiliation(s)
- Jérôme Carriot
- Department of Physiology, McGill University, Montreal, QC, Canada
- Martin Simoneau
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada; Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada.
4
Pigarev IN, Levichkina EV. Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions. Front Syst Neurosci 2016; 10:66. PMID: 27547179. PMCID: PMC4974279. DOI: 10.3389/fnsys.2016.00066.
Abstract
Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
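The control logic of this study rests on a simple geometric fact: a screen-fixed grating's retinal spatial frequency (cycles/deg) scales with viewing distance, so an SF-tuned cell's peak response shifts to half the distance when the grating's physical frequency is doubled, whereas an absolute-depth cell's peak should not move. A worked sketch (the numeric values and names are ours, purely illustrative):

```python
def retinal_sf(screen_sf, distance_m):
    """Retinal spatial frequency (cyc/deg) of a screen-fixed grating.
    screen_sf is the grating's frequency expressed in cyc/deg at 1 m.
    Each physical cycle subtends a smaller angle at greater distance,
    so more cycles fit per degree: retinal SF grows linearly with
    viewing distance (small-angle approximation).
    """
    return screen_sf * distance_m

preferred_sf = 0.5            # cell's preferred retinal SF, hypothetical
low_sf, high_sf = 0.25, 0.5   # grating SFs in cyc/deg at 1 m, hypothetical

# Distance at which each grating's retinal SF matches the preference:
d_low = preferred_sf / low_sf    # 2.0 m
d_high = preferred_sf / high_sf  # 1.0 m

# Doubling the grating's spatial frequency halves the peak-response
# distance for an SF-tuned cell; a cell tuned to absolute depth would
# instead peak at the same distance for both gratings.
assert retinal_sf(low_sf, d_low) == retinal_sf(high_sf, d_high) == preferred_sf
assert d_high == d_low / 2
```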
Affiliation(s)
- Ivan N Pigarev
- Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia
- Ekaterina V Levichkina
- Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia; Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, VIC, Australia
5
Reference frames for reaching when decoupling eye and target position in depth and direction. Sci Rep 2016; 6:21646. PMID: 26876496. PMCID: PMC4753502. DOI: 10.1038/srep21646.
Abstract
Spatial representations in cortical areas involved in reaching movements were traditionally studied in a frontoparallel plane where the two-dimensional target location and the movement direction were the only variables to consider in neural computations. No studies so far have characterized the reference frames for reaching considering both depth and directional signals. Here we recorded from single neurons of the medial posterior parietal area V6A during a reaching task where fixation point and reaching targets were decoupled in direction and depth. We found a prevalent mixed encoding of target position, with eye-centered and spatiotopic representations differently balanced in the same neuron. Depth was stronger in defining the reference frame of eye-centered cells, while direction was stronger in defining that of spatiotopic cells. The predominant presence of various typologies of mixed encoding suggests that depth and direction signals are processed on the basis of flexible coordinate systems to ensure optimal motor response.
6
Tramper JJ, Medendorp WP. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion. J Neurophysiol 2015; 114:3211-9. PMID: 26490289. DOI: 10.1152/jn.00576.2015.
Abstract
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms.
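The weighting rule invoked in this abstract, combining eye- and body-centered representations according to the reliability with which each is stored and updated, is classic inverse-variance (optimal Gaussian) integration. A minimal sketch of that rule (function and variable names are ours):

```python
def fuse(est_eye, var_eye, est_body, var_body):
    """Reliability-weighted combination of two updated spatial maps.
    Each frame's estimate is weighted by its inverse variance, the
    statistically optimal rule for independent Gaussian estimates.
    Returns the fused estimate and its (reduced) variance.
    """
    w_eye = 1.0 / var_eye
    w_body = 1.0 / var_body
    est = (w_eye * est_eye + w_body * est_body) / (w_eye + w_body)
    var = 1.0 / (w_eye + w_body)
    return est, var

# The fused estimate lies between the two single-frame estimates,
# closer to the more reliable one, and is more precise than either.
est, var = fuse(est_eye=1.0, var_eye=4.0, est_body=3.0, var_body=1.0)
print(est, var)  # 2.6 0.8
```

Because the fused variance is always below both input variances, keeping the two maps in sync yields a more precise location estimate than either single-frame updating mechanism alone, which is the paper's central claim.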
Affiliation(s)
- J J Tramper
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W P Medendorp
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
7
Mackrous I, Simoneau M. Improving spatial updating accuracy in absence of external feedback. Neuroscience 2015; 300:155-62. PMID: 25987200. DOI: 10.1016/j.neuroscience.2015.05.024.
Abstract
Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the utilization of external feedback. According to perceptual learning models, improvement in performance can also occur without external feedback. The aim of this study was to assess spatial updating improvement in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from the participant's midline. (2) Practice: the target was located 45° in the CCW direction from the participant's midline. One group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction to evaluate whether improvement in performance during practice generalized to other target eccentricities. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW. This transfer condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p=0.004). Nonetheless, larger improvements occurred when external feedback was provided (ps=0.002). During T(30)CCW, performance remained better for the feedback than for the no-feedback group (p=0.005). However, no group difference was observed for the untrained direction (p=0.22). We demonstrated that spatial updating improved without external feedback, but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning.
Affiliation(s)
- I Mackrous
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
- M Simoneau
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada.
8
Gutteling TP, Selen LPJ, Medendorp WP. Parallax-sensitive remapping of visual space in occipito-parietal alpha-band activity during whole-body motion. J Neurophysiol 2014; 113:1574-84. PMID: 25505108. DOI: 10.1152/jn.00477.2014.
Abstract
Despite the constantly changing retinal image due to eye, head, and body movements, we are able to maintain a stable representation of the visual environment. Various studies on retinal image shifts caused by saccades have suggested that occipital and parietal areas correct for these perturbations by a gaze-centered remapping of the neural image. However, such a uniform, rotational, remapping mechanism cannot work during translations when objects shift on the retina in a more complex, depth-dependent fashion due to motion parallax. Here we tested whether the brain's activity patterns show parallax-sensitive remapping of remembered visual space during whole-body motion. Under continuous recording of electroencephalography (EEG), we passively translated human subjects while they had to remember the location of a world-fixed visual target, briefly presented in front of or behind the eyes' fixation point prior to the motion. Using a psychometric approach we assessed the quality of the memory update, which had to be made based on vestibular feedback and other extraretinal motion cues. All subjects showed a variable amount of parallax-sensitive updating errors, i.e., the direction of the errors depended on the depth of the target relative to fixation. The EEG recordings show a neural correlate of this parallax-sensitive remapping in the alpha-band power at occipito-parietal electrodes. At parietal electrodes, the strength of these alpha-band modulations correlated significantly with updating performance. These results suggest that alpha-band oscillatory activity reflects the time-varying updating of gaze-centered spatial information during parallax-sensitive remapping during whole-body motion.
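The depth-dependence that makes translational remapping harder than saccadic remapping follows directly from parallax geometry: after a lateral translation, a world-fixed target shifts relative to the fixation direction by an amount whose sign depends on whether the target lies nearer or farther than fixation. A geometric sketch (our own simplification, for a target initially straight ahead and a tracked world-fixed fixation point):

```python
import math

def parallax_shift(translation, fix_dist, target_dist):
    """Change in a target's visual direction relative to fixation (deg)
    after a lateral translation of the observer, assuming the eyes
    keep tracking a world-fixed fixation point and the target was
    initially straight ahead. Positive and negative values mean
    shifts to opposite sides of the fixation direction.
    """
    ang_fix = math.atan2(translation, fix_dist)
    ang_tgt = math.atan2(translation, target_dist)
    return math.degrees(ang_tgt - ang_fix)

# Targets nearer than fixation shift one way, farther targets the
# other way -- so a correct memory update must be depth dependent,
# and updating errors can flip sign with target depth, as observed.
near = parallax_shift(0.1, fix_dist=1.0, target_dist=0.5)
far = parallax_shift(0.1, fix_dist=1.0, target_dist=2.0)
print(round(near, 2), round(far, 2))
```

A uniform rotational remapping rule would apply the same shift to both targets; the opposite signs here are exactly what forces the parallax-sensitive updating the study probes.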
Affiliation(s)
- T P Gutteling
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- L P J Selen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W P Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
9
Moreau-Debord I, Martin CZ, Landry M, Green AM. Evidence for a reference frame transformation of vestibular signal contributions to voluntary reaching. J Neurophysiol 2014; 111:1903-19. DOI: 10.1152/jn.00419.2013.
Abstract
To contribute appropriately to voluntary reaching during body motion, vestibular signals must be transformed from a head-centered to a body-centered reference frame. We quantitatively investigated the evidence for this transformation during online reach execution by using galvanic vestibular stimulation (GVS) to simulate rotation about a head-fixed, roughly naso-occipital axis as human subjects made planar reaching movements to a remembered location with their head in different orientations. If vestibular signals that contribute to reach execution have been transformed from a head-centered to a body-centered reference frame, the same stimulation should be interpreted as body tilt with the head upright but as vertical-axis rotation with the head inclined forward. Consequently, GVS should perturb reach trajectories in a head-orientation-dependent way. Consistent with this prediction, GVS applied during reach execution induced trajectory deviations that were significantly larger with the head forward compared with upright. Only with the head forward were trajectories consistently deviated in opposite directions for rightward versus leftward simulated rotation, as appropriate to compensate for body vertical-axis rotation. These results demonstrate that vestibular signals contributing to online reach execution have indeed been transformed from a head-centered to a body-centered reference frame. Reach deviation amplitudes were comparable to those predicted for ideal compensation for body rotation using a biomechanical limb model. Finally, by comparing the effects of application of GVS during reach execution versus prior to reach onset we also provide evidence that spatially transformed vestibular signals contribute to at least partially distinct compensation mechanisms for body motion during reach planning versus execution.
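The prediction tested here is purely geometric: a rotation axis fixed in the head points along different body axes depending on head orientation, so head-centered vestibular signals must be rotated by head-on-body orientation to be interpreted in body coordinates. A sketch of that transformation (axis conventions and names are our assumptions: x = interaural, y = naso-occipital/forward, z = body vertical):

```python
import numpy as np

def head_to_body(axis_head, head_pitch_deg):
    """Re-express a head-fixed rotation axis in body coordinates,
    given the head's forward pitch about the interaural (x) axis.
    Forward pitch tips the head's forward axis toward the body's
    downward vertical.
    """
    p = np.radians(head_pitch_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(p), np.sin(p)],
                  [0.0, -np.sin(p), np.cos(p)]])
    return R @ axis_head

# GVS simulates rotation about a roughly naso-occipital (forward) axis.
naso_occipital = np.array([0.0, 1.0, 0.0])
upright = head_to_body(naso_occipital, 0.0)   # forward axis: body roll/tilt
pitched = head_to_body(naso_occipital, 90.0)  # vertical axis: body yaw
print(upright, pitched)
```

With the head upright the simulated rotation reads as body tilt; with the head pitched 90° forward the same head-fixed stimulation maps onto the body's vertical axis, which is why trajectory deviations should (and did) depend on head orientation.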
Affiliation(s)
- Ian Moreau-Debord
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Marianne Landry
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Andrea M. Green
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
10
Mackrous I, Simoneau M. Generalization of vestibular learning to earth-fixed targets is possible but limited when the polarity of afferent vestibular information is changed. Neuroscience 2014; 260:12-22. DOI: 10.1016/j.neuroscience.2013.12.002.
11
Chen X, Deangelis GC, Angelaki DE. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 2013; 80:1310-21. PMID: 24239126. DOI: 10.1016/j.neuron.2013.09.006.
Abstract
Reference frames are important for understanding how sensory cues from different modalities are coordinated to guide behavior, and the parietal cortex is critical to these functions. We compare reference frames of vestibular self-motion signals in the ventral intraparietal area (VIP), parietoinsular vestibular cortex (PIVC), and dorsal medial superior temporal area (MSTd). Vestibular heading tuning in VIP is invariant to changes in both eye and head positions, indicating a body (or world)-centered reference frame. Vestibular signals in PIVC have reference frames that are intermediate between head and body centered. In contrast, MSTd neurons show reference frames between head and eye centered but not body centered. Eye and head position gain fields were strongest in MSTd and weakest in PIVC. Our findings reveal distinct spatial reference frames for representing vestibular signals and pose new challenges for understanding the respective roles of these areas in potentially diverse vestibular functions.
Affiliation(s)
- Xiaodong Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
12
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. PMID: 21456958. DOI: 10.1146/annurev-neuro-061010-113749.
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
13
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. PMID: 21242137. DOI: 10.1098/rstb.2010.0089.
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
14
Green AM, Angelaki DE. Internal models and neural computation in the vestibular system. Exp Brain Res 2010; 200:197-222. PMID: 19937232. PMCID: PMC2853943. DOI: 10.1007/s00221-009-2054-4.
Abstract
The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have shed new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.
Affiliation(s)
- Andrea M Green
- Dépt. de Physiologie, Université de Montréal, 2960 Chemin de la Tour, Rm. 4141, Montreal, QC H3T 1J4, Canada.
15
Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron 2009; 64:448-61. PMID: 19945388. DOI: 10.1016/j.neuron.2009.11.010.
Abstract
The vestibular system helps maintain equilibrium and clear vision through reflexes, but it also contributes to spatial perception. In recent years, research in the vestibular field has expanded to higher-level processing involving the cortex. Vestibular contributions to spatial cognition have been difficult to study because the circuits involved are inherently multisensory. Computational methods and the application of Bayes theorem are used to form hypotheses about how information from different sensory modalities is combined together with expectations based on past experience in order to obtain optimal estimates of cognitive variables like current spatial orientation. To test these hypotheses, neuronal populations are being recorded during active tasks in which subjects make decisions based on vestibular and visual or somatosensory information. This review highlights what is currently known about the role of vestibular information in these processes, the computations necessary to obtain the appropriate signals, and the benefits that have emerged thus far.
Affiliation(s)
- Dora E Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
16
Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008; 156:801-18. PMID: 18786618. PMCID: PMC2677727. DOI: 10.1016/j.neuroscience.2008.07.079.
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
17
Medendorp WP, Beurze SM, Van Pelt S, Van Der Werf J. Behavioral and cortical mechanisms for spatial coding and action planning. Cortex 2008; 44:587-97. [DOI: 10.1016/j.cortex.2007.06.001]
18
Van Pelt S, Medendorp WP. Updating target distance across eye movements in depth. J Neurophysiol 2008; 99:2281-90. [DOI: 10.1152/jn.01281.2007]
Abstract
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage, during the subsequent reference frame transformations that are involved in reaching.
19
Klier EM, Hess BJM, Angelaki DE. Human visuospatial updating after passive translations in three-dimensional space. J Neurophysiol 2008; 99:1799-809. [PMID: 18256164] [DOI: 10.1152/jn.01091.2007]
Abstract
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 +/- 0.28 (mean +/- SD) as compared with 0.51 +/- 0.33 for downward and 1.05 +/- 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 +/- 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
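The updating ratio reported above can be illustrated with a minimal geometric sketch (a hypothetical reconstruction: the function names, the straight-ahead target geometry, and the example numbers are illustrative, not the authors' exact computation). For a lateral translation, the geometrically required version change is the change in target azimuth caused by the movement, and the ratio divides the observed version response by that requirement:

```python
import math

def required_version_change(target_depth_cm, translation_cm):
    """Azimuth change (deg) of a target initially straight ahead at the
    given depth, after a lateral whole-body translation (sketch geometry)."""
    # Before moving, the target is straight ahead (azimuth 0 deg).
    # After translating sideways, it lies off to one side of the head.
    return math.degrees(math.atan2(translation_cm, target_depth_cm))

def updating_ratio(observed_version_deg, target_depth_cm, translation_cm):
    """0 indicates no updating; 1 indicates perfect updating."""
    required = required_version_change(target_depth_cm, translation_cm)
    return observed_version_deg / required

# Illustrative case: a 10 cm lateral translation with a target at 43 cm.
req = required_version_change(43, 10)   # roughly 13 deg required
print(updating_ratio(11.0, 43, 10))     # roughly 0.84
```

A ratio above 1, as reported for upward and forward/backward movements, would correspond to an eye movement that overshoots the geometric requirement.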
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., St. Louis, MO 63110, USA.
20
Klier EM, Angelaki DE, Hess BJM. Human visuospatial updating after noncommutative rotations. J Neurophysiol 2007; 98:537-44. [PMID: 17442766] [DOI: 10.1152/jn.01229.2006]
Abstract
As we move our bodies in space, we often undergo head and body rotations about different axes-yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
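The noncommutativity at the heart of this task is easy to verify numerically: applying a yaw and a roll rotation in opposite orders sends the same remembered direction to different final locations. A minimal sketch (pure Python; the 45° angles and the overhead target direction are illustrative choices, not the study's stimuli):

```python
import math

def yaw(deg):
    """Rotation about the vertical (z) axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def roll(deg):
    """Rotation about the naso-occipital (x) axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def apply(m, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

target = [0.0, 0.0, 1.0]  # an overhead direction, for illustration

# Yaw first, then roll (matrices apply right-to-left to the vector):
yaw_then_roll = apply(roll(45), apply(yaw(45), target))
# Roll first, then yaw:
roll_then_yaw = apply(yaw(45), apply(roll(45), target))

print(yaw_then_roll)  # [0.0, -0.707..., 0.707...]
print(roll_then_yaw)  # [0.5, -0.5..., 0.707...]
```

The two composite orientations differ, so the eye movement required to reacquire the same space-fixed target differs between the two sequences — the distinct endpoints a commutative model cannot predict.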
Affiliation(s)
- Eliana M Klier
- Dept of Neurobiology, Washington University School of Medicine, St Louis, MO 63110, USA.
21
Van Pelt S, Medendorp WP. Gaze-centered updating of remembered visual space during active whole-body translations. J Neurophysiol 2007; 97:1209-20. [PMID: 17135474] [DOI: 10.1152/jn.00882.2006]
Abstract
Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes’ fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and stereoscopic depth and direction information in spatial updating during self-motion.
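The parallax geometry underlying this task can be sketched numerically (the distances below are illustrative values, not the study's stimuli): for an eye that translates sideways while fixating a fixed point straight ahead, a target nearer than fixation shifts one way relative to the line of gaze, and a target farther than fixation shifts the other way.

```python
import math

def retinal_azimuth(eye_x, target_depth, fixation_depth):
    """Direction of a straight-ahead target relative to the line of gaze
    (deg, positive = rightward), for an eye at lateral position eye_x
    fixating a point at fixation_depth (sketch geometry)."""
    gaze_dir = math.atan2(0.0 - eye_x, fixation_depth)    # toward fixation
    target_dir = math.atan2(0.0 - eye_x, target_depth)    # toward target
    return math.degrees(target_dir - gaze_dir)

FIX = 50.0  # fixation distance (cm), an arbitrary illustrative value

# After a 10 cm rightward eye translation:
near = retinal_azimuth(eye_x=10.0, target_depth=30.0, fixation_depth=FIX)
far = retinal_azimuth(eye_x=10.0, target_depth=80.0, fixation_depth=FIX)
print(near, far)  # opposite signs: near and far targets shift opposite ways
```

The sign reversal between `near` and `far`, and the growth of the shift with depth from fixation, mirror the parallax-sensitive error pattern the study reports.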
Affiliation(s)
- Stan Van Pelt
- Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, NL-6500 HE Nijmegen, The Netherlands.
22
Wei M, Li N, Newlands SD, Dickman JD, Angelaki DE. Deficits and recovery in visuospatial memory during head motion after bilateral labyrinthine lesion. J Neurophysiol 2006; 96:1676-82. [PMID: 16760354] [DOI: 10.1152/jn.00012.2006]
Abstract
To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
Affiliation(s)
- Min Wei
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
23
Klier EM, Hess BJM, Angelaki DE. Differences in the accuracy of human visuospatial memory after yaw and roll rotations. J Neurophysiol 2006; 95:2692-7. [PMID: 16371458] [DOI: 10.1152/jn.01017.2005]
Abstract
Our ability to keep track of objects in the environment, even as we move, has been attributed to various cues including efference copies, vestibular signals, proprioception, and gravitational cues. However, the presence of some cues, such as gravity, may not be used to the same extent by different axes of motion (e.g., yaw vs. roll). We tested whether changes in gravitational cues can be used to improve visuospatial updating performance for yaw rotations as previously shown for roll. We found differences in updating for yaw and roll rotations in that yaw updating is not only associated with larger systematic errors but is also not facilitated by gravity in the same way as roll updating.
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.