1. Tani K, Uehara S, Tanaka S. Psychophysical evidence for the involvement of head/body-centered reference frames in egocentric visuospatial memory: A whole-body roll tilt paradigm. J Vis 2023;23:16. PMID: 36689216; PMCID: PMC9900457; DOI: 10.1167/jov.23.1.16.
Abstract
Accurate memory of the location of an object with respect to one's own body, termed egocentric visuospatial memory, is essential for action directed toward the object. Although researchers have suggested that the brain stores information related to egocentric visuospatial memory not only in the eye-centered reference frame but also in other egocentric (i.e., head- and/or body-centered) reference frames, experimental evidence is scarce. Here, we tested this possibility by exploiting the perceptual distortion of head/body-centered coordinates induced by whole-body tilt relative to gravity. We hypothesized that if the head/body-centered reference frames are involved in storing the egocentric representation of a target in memory, then reproduction of that representation would be affected by this perceptual distortion. In two experiments, we asked participants to reproduce the remembered location of a visual target relative to their head/body. Using intervening whole-body roll rotations, we manipulated the initial (target presentation) and final (reproduction of the remembered location) body orientations in space and evaluated the effect on the reproduced location. Our results showed significant biases of both the reproduced target location and the perceived head/body longitudinal axis in the direction of the intervening body rotation. Importantly, the magnitudes of these two errors were correlated across participants. These results provide experimental evidence for the neural encoding and storage of information related to egocentric visuospatial memory in head/body-centered reference frames.
Affiliation(s)
- Keisuke Tani
- Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
- Faculty of Psychology, Otemon Gakuin University, Osaka, Japan
- Shintaro Uehara
- Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
- Satoshi Tanaka
- Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
2. The influence of yaw rotation on spatial navigation during development. Neuropsychologia 2021;154:107774. PMID: 33600832; DOI: 10.1016/j.neuropsychologia.2021.107774.
Abstract
Sensory cues enable navigation through space, as they inform us about movement properties such as the distance travelled and the heading direction. In this study, we focused on the ability to spatially update one's position when only proprioceptive and vestibular information is available. We aimed to investigate the effect of yaw rotation on path integration across development in the absence of visual feedback. To this end, we used the triangle completion task: participants were guided through two legs of a triangle and asked to close the shape by walking along its third, imagined leg. To test the influence of yaw rotation across development, we tested children between 6 and 11 years old and adults on turn angles of different magnitudes. Our results demonstrated that the amount of turn required at the angle influenced performance at all ages and, in some respects, interacted with age. Whilst adults seemed to adjust their heading towards the end of the walked path, younger children took less advantage of this strategy. The amount of disorientation the path induced also affected the maturation of spatial navigation without visual feedback: the greater the induced disorientation, the older children had to be to reach adult-level performance. Overall, these results provide novel insights into the maturation of spatial navigation-related processes.
3. Koppen M, Ter Horst AC, Medendorp WP. Weighted Visual and Vestibular Cues for Spatial Updating During Passive Self-Motion. Multisens Res 2019;32:165-178. PMID: 31059483; DOI: 10.1163/22134808-20191364.
Abstract
When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, in a forced-choice response, participants indicated whether the location of a brief visual probe was left or right of the remembered target position. Our results show that, in a spatial updating task involving passive linear self-motion, humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process with, across subjects, on average about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
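The weighted-averaging process described in this abstract corresponds to standard inverse-variance cue integration, which can be sketched in a few lines. The noise values below are illustrative assumptions chosen so that the visual weight lands near the reported 79%; they are not parameters from the study.

```python
def fuse_estimates(x_vis, sigma_vis, x_vest, sigma_vest):
    """Combine visual and vestibular displacement estimates by
    reliability (inverse-variance) weighting, the standard model
    of optimal cue integration under Gaussian noise."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    fused = w_vis * x_vis + (1 - w_vis) * x_vest
    return fused, w_vis

# Hypothetical cues: both signal a sideward displacement, with the visual
# cue roughly twice as reliable (about half the SD) as the vestibular one.
fused, w_vis = fuse_estimates(x_vis=10.0, sigma_vis=1.0,
                              x_vest=8.0, sigma_vest=1.95)
# w_vis comes out near 0.79, i.e., about four times the vestibular weight
```

Halving a cue's standard deviation quadruples its weight, which is why a visual cue only about twice as precise as the vestibular one already dominates the average.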
Affiliation(s)
- Mathieu Koppen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Arjan C Ter Horst
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4. Moreau-Debord I, Martin CZ, Landry M, Green AM. Evidence for a reference frame transformation of vestibular signal contributions to voluntary reaching. J Neurophysiol 2014;111:1903-1919. DOI: 10.1152/jn.00419.2013.
Abstract
To contribute appropriately to voluntary reaching during body motion, vestibular signals must be transformed from a head-centered to a body-centered reference frame. We quantitatively investigated the evidence for this transformation during online reach execution by using galvanic vestibular stimulation (GVS) to simulate rotation about a head-fixed, roughly naso-occipital axis as human subjects made planar reaching movements to a remembered location with their head in different orientations. If vestibular signals that contribute to reach execution have been transformed from a head-centered to a body-centered reference frame, the same stimulation should be interpreted as body tilt with the head upright but as vertical-axis rotation with the head inclined forward. Consequently, GVS should perturb reach trajectories in a head-orientation-dependent way. Consistent with this prediction, GVS applied during reach execution induced trajectory deviations that were significantly larger with the head forward compared with upright. Only with the head forward were trajectories consistently deviated in opposite directions for rightward versus leftward simulated rotation, as appropriate to compensate for body vertical-axis rotation. These results demonstrate that vestibular signals contributing to online reach execution have indeed been transformed from a head-centered to a body-centered reference frame. Reach deviation amplitudes were comparable to those predicted for ideal compensation for body rotation using a biomechanical limb model. Finally, by comparing the effects of GVS applied during reach execution with those of GVS applied prior to reach onset, we also provide evidence that spatially transformed vestibular signals contribute to at least partially distinct compensation mechanisms for body motion during reach planning versus execution.
Affiliation(s)
- Ian Moreau-Debord
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Marianne Landry
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Andrea M. Green
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
5. Mackrous I, Simoneau M. Generalization of vestibular learning to earth-fixed targets is possible but limited when the polarity of afferent vestibular information is changed. Neuroscience 2014;260:12-22. DOI: 10.1016/j.neuroscience.2013.12.002.
6. Visuo-vestibular interaction: predicting the position of a visual target during passive body rotation. Neuroscience 2011;195:45-53. PMID: 21839149; DOI: 10.1016/j.neuroscience.2011.07.032.
Abstract
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. These processes may thus enable calibration of the vestibular system by facilitating the sharing of information between the two reference frames. Here, we assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate when predicting the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and then received knowledge of results. During the practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group had to rely on vestibular information alone (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target: spatial error decreased from 16.2° to 2.0°, reflecting a learning rate of 34.08 trials, determined by fitting a falling exponential model. In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (2.66 trials) but a similar final spatial error (2.9°). The vestibular-only group achieved similar accuracy (final spatial error of 2.3°), but its learning rate was much slower (43.29 trials). Transfer to the post-test (no visual cues and no knowledge of results) increased the spatial error of the explicit visual cues group (9.5°) but did not change the performance of the vestibular-only group (1.2°). Overall, these results imply that cognition assists the brain in processing sensory information within the target reference frame.
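The falling-exponential analysis behind these learning rates can be reproduced in a few lines. The data below are synthetic, generated to loosely match the reported values (16.2° initial error, ~2° asymptote, time constant near 34 trials); they are an illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def falling_exp(n, e0, e_inf, tau):
    """Spatial error on trial n: starts near e0 and decays toward the
    asymptote e_inf with time constant tau (in trials)."""
    return (e0 - e_inf) * np.exp(-n / tau) + e_inf

# Synthetic learning curve with some trial-to-trial noise
rng = np.random.default_rng(0)
trials = np.arange(200)
errors = falling_exp(trials, 16.2, 2.0, 34.0) + rng.normal(0.0, 0.3, trials.size)

# Recover the parameters; tau is the "learning rate" quoted in the abstract
(e0, e_inf, tau), _ = curve_fit(falling_exp, trials, errors, p0=(15.0, 1.0, 20.0))
```

Note that tau is a time constant: after tau trials the error has closed about 63% of the gap to its asymptote, so a smaller tau means faster learning.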
7. Absence of spatial updating when the visuomotor system is unsure about stimulus motion. J Neurosci 2011;31:10558-10568. PMID: 21775600; DOI: 10.1523/jneurosci.0998-11.2011.
Abstract
How does the visuomotor system decide whether a target is moving or stationary in space or whether it moves relative to the eyes or head? A visual flash during a rapid eye-head gaze shift produces a brief visual streak on the retina that could provide information about target motion, when appropriately combined with eye and head self-motion signals. Indeed, double-step experiments have demonstrated that the visuomotor system incorporates actively generated intervening gaze shifts in the final localization response. Also saccades to brief head-fixed flashes during passive whole-body rotation compensate for vestibular-induced ocular nystagmus. However, both the amount of retinal motion to invoke spatial updating and the default strategy in the absence of detectable retinal motion remain unclear. To study these questions, we determined the contribution of retinal motion and the vestibular canals to spatial updating of visual flashes during passive whole-body rotation. Head- and body-restrained humans made saccades toward very brief (0.5 and 4 ms) and long (100 ms) visual flashes during sinusoidal rotation around the vertical body axis in total darkness. Stimuli were either attached to the chair (head-fixed) or stationary in space and were always well localizable. Surprisingly, spatial updating only occurred when retinal stimulus motion provided sufficient information: long-duration stimuli were always appropriately localized, thus adequately compensating for vestibular nystagmus and the passive head movement during the saccade reaction time. For the shortest stimuli, however, the target was kept in retinocentric coordinates, thus ignoring intervening nystagmus and passive head displacement, regardless of whether the target was moving with the head or not.
8. Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011;366:476-491. PMID: 21242137; DOI: 10.1098/rstb.2010.0089.
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
9. Daye PM, Blohm G, Lefèvre P. Saccadic Compensation for Smooth Eye and Head Movements During Head-Unrestrained Two-Dimensional Tracking. J Neurophysiol 2010;103:543-556. DOI: 10.1152/jn.00656.2009.
Abstract
Spatial updating is the ability to keep track of the position of world-fixed objects while we move. In the case of vision, this phenomenon is called spatial constancy and has been studied in head-restraint conditions. During head-restrained smooth pursuit, it has been shown that the saccadic system has access to extraretinal information from the pursuit system to update the objects' position in the surrounding environment. However, during head-unrestrained smooth pursuit, the saccadic system needs to keep track of three different motor commands: the ocular smooth pursuit command, the vestibuloocular reflex (VOR), and the head movement command. The question then arises whether saccades compensate for these movements. To address this question, we briefly presented a target during sinusoidal head-unrestrained smooth pursuit in darkness. Subjects were instructed to look at the flash as soon as they saw it. We observed that subjects were able to orient their gaze to the memorized (and spatially updated) position of the flashed target generally using one to three successive saccades. Similar to the behavior in the head-restrained condition, we found that the longer the gaze saccade latency, the better the compensation for intervening smooth gaze displacements; after about 400 ms, 62% of the smooth gaze displacement had been compensated for. This compensation depended on two independent parameters: the latency of the saccade and the eye contribution to the gaze displacement during this latency period. Separating gaze into eye and head contributions, we show that the larger the eye contribution to the gaze displacement, the better the overall compensation. Finally, we found that the compensation was a function of the head oscillation frequency and we suggest that this relationship is linked to the modulation of VOR gain. We conclude that the general mechanisms of compensation for smooth gaze displacements are similar to those observed in the head-restrained condition.
Affiliation(s)
- P. M. Daye
- Center for Systems Engineering and Applied Mechanics, Université catholique de Louvain, Louvain-la-Neuve
- Laboratory of Neurophysiology, Université catholique de Louvain, Brussels, Belgium
- G. Blohm
- Centre for Neurosciences Studies, Queen's University, Kingston, Ontario, Canada
- P. Lefèvre
- Center for Systems Engineering and Applied Mechanics, Université catholique de Louvain, Louvain-la-Neuve
- Laboratory of Neurophysiology, Université catholique de Louvain, Brussels, Belgium
10. Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron 2009;64:448-461. PMID: 19945388; DOI: 10.1016/j.neuron.2009.11.010.
Abstract
The vestibular system helps maintain equilibrium and clear vision through reflexes, but it also contributes to spatial perception. In recent years, research in the vestibular field has expanded to higher-level processing involving the cortex. Vestibular contributions to spatial cognition have been difficult to study because the circuits involved are inherently multisensory. Computational methods and the application of Bayes' theorem are used to form hypotheses about how information from different sensory modalities is combined with expectations based on past experience in order to obtain optimal estimates of cognitive variables such as current spatial orientation. To test these hypotheses, neuronal populations are being recorded during active tasks in which subjects make decisions based on vestibular and visual or somatosensory information. This review highlights what is currently known about the role of vestibular information in these processes, the computations necessary to obtain the appropriate signals, and the benefits that have emerged thus far.
Affiliation(s)
- Dora E Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
11. Carriot J, DiZio P, Nougier V. Vertical frames of reference and control of body orientation. Neurophysiol Clin 2008;38:423-437. DOI: 10.1016/j.neucli.2008.09.003.
12. Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008;156:801-818. PMID: 18786618; PMCID: PMC2677727; DOI: 10.1016/j.neuroscience.2008.07.079.
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
13. Vingerhoets RAA, Medendorp WP, Van Gisbergen JAM. Body-tilt and visual verticality perception during multiple cycles of roll rotation. J Neurophysiol 2008;99:2264-2280. PMID: 18337369; DOI: 10.1152/jn.00704.2007.
Abstract
To assess the effects of degrading canal cues for dynamic spatial orientation in human observers, we tested how judgments about visual-line orientation in space (subjective visual vertical task, SVV) and estimates of instantaneous body tilt (subjective body-tilt task, SBT) develop in the course of three cycles of constant-velocity roll rotation. These abilities were tested across the entire tilt range in separate experiments. For comparison, we also obtained SVV data during static roll tilt. We found that as tilt increased, dynamic SVV responses became strongly biased toward the head pole of the body axis (A-effect), as if body tilt was underestimated. However, on entering the range of near-inverse tilts, SVV responses adopted a bimodal pattern, alternating between A-effects (biased toward head-pole) and E-effects (biased toward feet-pole). Apart from an onset effect, this tilt-dependent pattern of systematic SVV errors repeated itself in subsequent rotation cycles with little sign of worsening performance. Static SVV responses were qualitatively similar and consistent with previous reports but showed smaller A-effects. By contrast, dynamic SBT errors were small and unimodal, indicating that errors in visual-verticality estimates were not caused by errors in body-tilt estimation. We discuss these results in terms of predictions from a canal-otolith interaction model extended with a leaky integrator and an egocentric bias mechanism. We conclude that the egocentric-bias mechanism becomes more manifest during constant velocity roll-rotation and that perceptual errors due to incorrect disambiguation of the otolith signal are small despite the decay of canal signals.
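Two of the model components invoked here, decaying canal signals during constant-velocity rotation and a leaky integrator, can be illustrated with a minimal simulation. All time constants and the 30°/s velocity below are assumed values chosen for illustration, not parameters of the paper's canal-otolith model.

```python
import numpy as np

dt = 0.01                                   # time step (s)
t = np.arange(0.0, 60.0, dt)
omega = 30.0                                # constant roll velocity (deg/s), assumed
tau_canal = 15.0                            # canal decay time constant (s), assumed

# Semicircular canals behave as high-pass sensors: during constant-velocity
# rotation their velocity signal decays exponentially toward zero.
canal_signal = omega * np.exp(-t / tau_canal)

# Leaky (forward-Euler) integration of the canal signal into an angle estimate
tau_leak = 20.0                             # integrator leak time constant (s), assumed
angle_est = np.zeros_like(t)
for i in range(1, t.size):
    angle_est[i] = angle_est[i - 1] + dt * (
        canal_signal[i - 1] - angle_est[i - 1] / tau_leak)

true_angle = omega * t                      # accumulated rotation keeps growing
# angle_est peaks early and then decays, falling far behind true_angle:
# one reason canal-based tracking degrades over constant-velocity cycles.
```

The qualitative point is the shape of the estimate, not the numbers: once the canal signal has decayed, the leaky integrator has nothing left to integrate and its output drifts back toward zero.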
Affiliation(s)
- R A A Vingerhoets
- Department of Biophysics, Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, Nijmegen, The Netherlands
14. Klier EM, Hess BJM, Angelaki DE. Human visuospatial updating after passive translations in three-dimensional space. J Neurophysiol 2008;99:1799-1809. PMID: 18256164; DOI: 10.1152/jn.01091.2007.
Abstract
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 ± 0.28 (mean ± SD) as compared with 0.51 ± 0.33 for downward and 1.05 ± 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 ± 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
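The updating ratio, and the geometry that sets the required response, can be sketched as follows. The straight-ahead small-angle geometry is an illustrative simplification (not the paper's analysis), and the observed response is a hypothetical number picked to land near the reported lateral-translation mean of 0.84.

```python
import math

def required_version_deg(translation_cm, target_depth_cm):
    """Version change needed for perfect updating of a target that was
    initially straight ahead, after a sideward body translation
    (simplified geometry for a single cyclopean eye)."""
    return math.degrees(math.atan2(translation_cm, target_depth_cm))

def updating_ratio(observed_deg, required_deg):
    """0 = no updating (target kept in retinal coordinates),
    1 = perfect compensation, as defined in the abstract."""
    return observed_deg / required_deg

required = required_version_deg(10.0, 43.0)   # 10 cm translation, 43 cm target
ratio = updating_ratio(11.0, required)        # hypothetical observed response
```

The same ratio logic applies to the forward/backward conditions, with vergence change standing in for version change.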
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., St. Louis, MO 63110, USA.
15. Klier EM, Angelaki DE, Hess BJM. Human visuospatial updating after noncommutative rotations. J Neurophysiol 2007;98:537-544. PMID: 17442766; DOI: 10.1152/jn.01229.2006.
Abstract
As we move our bodies in space, we often undergo head and body rotations about different axes: yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
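The noncommutativity at issue can be demonstrated directly with rotation matrices. The axis conventions below (z vertical for yaw, x naso-occipital for roll, right-handed frame) are an illustrative choice, not the paper's coordinate definitions.

```python
import numpy as np

def rot_yaw(deg):
    """Rotation about the vertical (z) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_roll(deg):
    """Rotation about the naso-occipital (x) axis."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

gaze = np.array([1.0, 0.0, 0.0])                    # initially facing along +x
yaw_then_roll = rot_roll(90) @ rot_yaw(90) @ gaze   # ends pointing up (+z)
roll_then_yaw = rot_yaw(90) @ rot_roll(90) @ gaze   # ends pointing left (+y)
# The two sequences leave the body in different final orientations, so a
# commutative model (one that simply sums rotation angles per axis) cannot
# predict both endpoints.
```

Reversing the order of the same two 90° rotations moves the final gaze direction by a full 90°, which is why the saccade endpoints in the task above should differ between the two sequences.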
Affiliation(s)
- Eliana M Klier
- Dept of Neurobiology, Washington University School of Medicine, St Louis, MO 63110, USA.
16. Glasauer S, Brandt T. Noncommutative updating of perceived self-orientation in three dimensions. J Neurophysiol 2007;97:2958-2964. PMID: 17287442; DOI: 10.1152/jn.00655.2006.
Abstract
After whole body rotations around an earth-vertical axis in darkness, subjects can indicate their orientation in space with respect to their initial orientation reasonably well. This is possible because the brain is able to mathematically integrate self-velocity information provided by the vestibular system to obtain self-orientation, a process called path integration. For rotations around multiple axes, however, computations are more demanding to accurately update self-orientation with respect to space. In such a case, simple integration is no longer sufficient because of the noncommutativity of rotations. We investigated whether such updating is possible after three-dimensional whole body rotations and whether the noncommutativity of three-dimensional rotations is taken into account. The ability of ten subjects to indicate their spatial orientation in the earth-horizontal plane was tested after different rotational paths from upright to supine positions. Initial and final orientations of the subjects were the same in all cases, but the paths taken were different, and so were the angular velocities sensed by the vestibular system. The results show that seven of the ten subjects could consistently indicate their final orientation within the earth-horizontal plane. Thus perceived final orientation was independent of the path taken, i.e., the noncommutativity of rotations was taken into account.
Affiliation(s)
- Stefan Glasauer
- Department of Neurology with Center for Sensorimotor Research, Klinikum Grosshadern, Ludwig-Maximilians University, Munich, Germany.
17. Wei M, Li N, Newlands SD, Dickman JD, Angelaki DE. Deficits and Recovery in Visuospatial Memory During Head Motion After Bilateral Labyrinthine Lesion. J Neurophysiol 2006;96:1676-1682. PMID: 16760354; DOI: 10.1152/jn.00012.2006.
Abstract
To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
Affiliation(s)
- Min Wei
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA