1. Glennerster A. Understanding 3D vision as a policy network. Philos Trans R Soc Lond B Biol Sci 2023;378:20210448. PMID: 36511403; PMCID: PMC9745881; DOI: 10.1098/rstb.2021.0448.
Abstract
It is often assumed that the brain builds 3D coordinate frames: retinal coordinates (with binocular disparity giving the third dimension), head-centred, body-centred and world-centred coordinates. This paper questions that assumption and begins to sketch an alternative based on, essentially, a set of reflexes. A 'policy network' is a term used in reinforcement learning to describe the set of actions that are generated by an agent depending on its current state. This is an atypical starting point for describing 3D vision, but a policy network can serve as a useful representation both for the 3D layout of a scene and for the location of the observer within it. It avoids 3D reconstruction of the type used in computer vision but is similar to recent representations for navigation generated through reinforcement learning. A policy network for saccades (pure rotations of the camera/eye) is a logical starting point for understanding (i) an ego-centric representation of space (e.g. Marr's (1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information) 2½-D sketch) and (ii) a hierarchical, compositional representation for navigation. The potential neural implementation of policy networks is straightforward: a network with a large range of sensory and task-related inputs, such as the cerebellum, would be capable of implementing this input/output function. This is not the case for 3D coordinate transformations in the brain: no neurally implementable proposals have yet been put forward that could carry out a transformation of a visual scene from retinal to world-based coordinates. Hence, if the representation underlying 3D vision can be described as a policy network (in which the actions are either saccades or head translations), this would be a significant step towards a neurally plausible model of 3D vision. This article is part of the theme issue 'New approaches to 3D vision'.
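The policy-network idea can be made concrete with a minimal sketch. Everything below is an illustrative assumption (the linear form, sizes and names), not the paper's model: a policy is simply a mapping from the agent's current visual state to an action, here a saccade.

```python
import numpy as np

# Minimal sketch of a "policy network" for saccades in the reinforcement-
# learning sense used above: a mapping from the current visual state to an
# action (a pure eye rotation). Sizes and the linear form are illustrative.

rng = np.random.default_rng(0)

N_STATE = 64    # dimensionality of the visual state (e.g., coarse retinal features)
N_ACTION = 2    # a saccade as an (azimuth, elevation) rotation in degrees

W = rng.normal(scale=0.1, size=(N_ACTION, N_STATE))  # stand-in for learned policy weights

def saccade_policy(state):
    """Map the current visual state to a saccade command."""
    return W @ state  # a real policy would be learned, e.g., by reinforcement learning

state = rng.normal(size=N_STATE)  # stand-in for the current retinal input
print("saccade (az, el):", saccade_policy(state))
```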
Affiliation(s)
- Andrew Glennerster
- School of Psychology and Clinical Language Sciences, University of Reading, RG6 6AL Reading, UK
2. Hadjidimitrakis K, De Vitis M, Ghodrati M, Filippini M, Fattori P. Anterior-posterior gradient in the integrated processing of forelimb movement direction and distance in macaque parietal cortex. Cell Rep 2022;41:111608. DOI: 10.1016/j.celrep.2022.111608.
3. Sulpizio V, Galati G, Fattori P, Galletti C, Pitzalis S. A common neural substrate for processing scenes and egomotion-compatible visual motion. Brain Struct Funct 2020;225:2091-2110. PMID: 32647918; PMCID: PMC7473967; DOI: 10.1007/s00429-020-02112-8.
Abstract
Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, a possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known "localizer" fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and of coherently moving fields of dots simulating the visual stimulation during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places than to faces, and that all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow than to random motion. A conjunction analysis between the scene/place and flow-field stimuli revealed that the main focus of common activation lay in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a single motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
Affiliation(s)
- Valentina Sulpizio
- Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Piazza di Porta San Donato 2, 40126 Bologna, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Piazza di Porta San Donato 2, 40126 Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Piazza di Porta San Donato 2, 40126 Bologna, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
4. Pitzalis S, Serra C, Sulpizio V, Committeri G, de Pasquale F, Fattori P, Galletti C, Sepe R, Galati G. Neural bases of self- and object-motion in a naturalistic vision. Hum Brain Mapp 2019;41:1084-1111. PMID: 31713304; PMCID: PMC7267932; DOI: 10.1002/hbm.24862.
Abstract
To plan movements toward objects, our brain must recognize whether retinal displacement is due to self-motion and/or to object-motion. Here, we aimed to test whether motion areas are able to segregate these types of motion. We combined an event-related functional magnetic resonance imaging experiment, brain-mapping techniques, and wide-field stimulation to study the responsiveness of motion-sensitive areas to pure and combined self- and object-motion conditions during virtual movies of a train running within a realistic landscape. We observed a selective response in MT to the pure object-motion condition, and in medial (PEc, pCi, CSv, and CMA) and lateral (PIC and LOR) areas to the pure self-motion condition. Some other regions (like V6) responded more to complex visual stimulation in which both object- and self-motion were present. Notably, we found that some motion regions (V3A, LOR, MT, V6, and IPSmot) could extract object-motion information from the overall motion, recognizing the real movement of the train even when its image remained still on the screen, or moved, because of self-movements. We propose that these motion areas might be good candidates for the "flow-parsing mechanism," that is, the capability to extract object-motion information from retinal motion signals by subtracting out the optic-flow components.
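The flow-parsing computation described at the end of this abstract reduces to a subtraction. The sketch below is illustrative only (the decomposition and names are assumptions, not the study's analysis): retinal motion is modeled as the sum of self-motion-induced optic flow and object motion, so subtracting an estimate of the ego-flow recovers the object's movement in the scene.

```python
import numpy as np

# retinal_motion = ego_flow + object_motion  (per retinal location)
def parse_flow(retinal_motion, estimated_ego_flow):
    """Recover scene-relative object motion by subtracting estimated ego-flow."""
    return retinal_motion - estimated_ego_flow

# Example: the train's image is still on the retina (0 deg/s) while self-motion
# produces 3 deg/s of leftward flow at that location; flow parsing attributes
# 3 deg/s of rightward motion to the train itself.
print(parse_flow(np.array([0.0]), np.array([-3.0])))  # -> [3.]
```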
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome Foro Italico, Rome, Italy; Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome Foro Italico, Rome, Italy; Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Giorgia Committeri
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy
- Francesco de Pasquale
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy; Faculty of Veterinary Medicine, University of Teramo, Teramo, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Rosamaria Sepe
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies (ITAB), University G. d'Annunzio, Chieti, Italy
- Gaspare Galati
- Cognitive and Motor Rehabilitation Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
5. Working memory in action: inspecting the systematic and unsystematic errors of spatial memory across saccades. Exp Brain Res 2019;237:2939-2956. PMID: 31506709; DOI: 10.1007/s00221-019-05623-x.
Abstract
Our ability to interact with the world depends on memory buffers that flexibly store and process information for short periods of time. Current working memory research, however, mainly uses tasks that avoid eye movements, whereas in daily life we need to remember information across saccades. Because saccades disrupt perception and attention, the brain might use special transsaccadic memory systems. Therefore, to compare working memory systems between and across saccades, the current study devised transsaccadic memory tasks that evaluated the influence of memory load on several kinds of systematic and unsystematic spatial errors, and tested whether these measures predicted performance in more established working memory paradigms. Experiment 1 used a line intersection task that had people integrate lines shown before and after saccades, and it administered a 2-back task. Experiments 2 and 3 asked people to point at one of several locations within a memory array flashed before an eye movement, and we tested change detection and 2-back performance. We found that unsystematic transsaccadic errors increased with memory load and were correlated with 2-back performance. Systematic errors produced similar results, although effects varied as a function of the geometric layout of the memory arrays. Surprisingly, transsaccadic errors did not predict change detection performance despite the latter being a widely accepted measure of working memory capacity. Our results suggest that working memory systems between and across saccades share, in part, similar neural resources. Nevertheless, our data highlight the importance of investigating working memory across saccades.
6. Koppen M, Ter Horst AC, Medendorp WP. Weighted visual and vestibular cues for spatial updating during passive self-motion. Multisens Res 2019;32:165-178. PMID: 31059483; DOI: 10.1163/22134808-20191364.
Abstract
When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic-flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual-reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, in a forced-choice response, participants indicated whether the location of a brief visual probe was left or right of the remembered target position. Our results show that in a spatial updating task involving passive linear self-motion, humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process, with, across subjects, on average about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
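The weighted-averaging scheme this abstract describes can be sketched in a few lines. The function and variable names below are hypothetical, and only the 0.79 visual weight comes from the study:

```python
import numpy as np

W_VISUAL = 0.79  # average visual weight reported across subjects

def updated_target(target_pos, visual_displacement, vestibular_displacement,
                   w_visual=W_VISUAL):
    """Spatially update a remembered target across a passive translation."""
    # Weighted average of the two self-motion estimates.
    displacement = (w_visual * visual_displacement
                    + (1.0 - w_visual) * vestibular_displacement)
    # A world-fixed target shifts opposite to the body's displacement.
    return target_pos - displacement

# Example: optic flow signals a 10-cm displacement, vestibular cues 6 cm.
print(updated_target(np.array([0.0]), np.array([0.10]), np.array([0.06])))
```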
Affiliation(s)
- Mathieu Koppen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Arjan C. Ter Horst
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- W. Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
7. Vestibular contributions to high-level sensorimotor functions. Neuropsychologia 2017;105:144-152. DOI: 10.1016/j.neuropsychologia.2017.02.004.
8. Dash S, Nazari SA, Yan X, Wang H, Crawford JD. Superior colliculus responses to attended, unattended, and remembered saccade targets during smooth pursuit eye movements. Front Syst Neurosci 2016;10:34. PMID: 27147987; PMCID: PMC4828430; DOI: 10.3389/fnsys.2016.00034.
Abstract
In realistic environments, keeping track of multiple visual targets during eye movements likely involves an interaction between vision, top-down spatial attention, memory, and self-motion information. Recently we found that the superior colliculus (SC) visual memory response is attention-sensitive and continuously updated relative to gaze direction. In that study, animals were trained to remember the location of a saccade target across an intervening smooth pursuit (SP) eye movement (Dash et al., 2015). Here, we modified this paradigm to directly compare the properties of visual and memory updating responses to attended and unattended targets. Our analysis shows that during SP, active SC visual vs. memory updating responses share similar gaze-centered spatio-temporal profiles (suggesting a common mechanism), but updating was weaker by ~25%, delayed by ~55 ms, and far more dependent on attention. Further, during SP the sum of passive visual responses (to distracter stimuli) and memory updating responses (to saccade targets) closely resembled the responses for active attentional tracking of visible saccade targets. These results suggest that SP updating signals provide a damped, delayed estimate of attended location that contributes to the gaze-centered tracking of both remembered and visible saccade targets.
Affiliation(s)
- Suryadeep Dash
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- Xiaogang Yan
- Center for Vision Research, York University, Toronto, ON, Canada
- Hongying Wang
- Center for Vision Research, York University, Toronto, ON, Canada
- J. Douglas Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, Biology and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
9. Schindler A, Bartels A. Motion parallax links visual motion areas and scene regions. Neuroimage 2015;125:803-812. PMID: 26515906; DOI: 10.1016/j.neuroimage.2015.10.066.
Abstract
When we move, the retinal velocities of objects in our surroundings differ according to their relative distances and give rise to a powerful three-dimensional visual cue referred to as motion parallax. Motion parallax allows us to infer our surroundings' 3D structure, as well as self-motion, from 2D retinal information. However, the neural substrates mediating the link between visual motion and scene processing are largely unexplored. We used fMRI in human observers to study motion parallax by means of an ecologically relevant yet highly controlled stimulus that mimicked the observer's lateral motion past a depth-layered scene. We found parallax-selective responses in parietal regions IPS3 and IPS4, and in a region lateral to the scene-selective occipital place area (OPA). The traditionally defined scene-responsive regions OPA, the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) did not respond to parallax. During parallax processing, the occipital parallax-selective region exhibited highly specific functional connectivity with IPS3 and with scene-selective PPA. These results establish a network linking dorsal motion and ventral scene processing regions specifically during parallax processing, which may underlie the brain's ability to derive 3D scene information from motion parallax.
Affiliation(s)
- Andreas Schindler
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany
10. Tramper JJ, Medendorp WP. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion. J Neurophysiol 2015;114:3211-3219. PMID: 26490289; DOI: 10.1152/jn.00576.2015.
Abstract
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
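The optimal-integration step this abstract describes is standard reliability weighting. A minimal sketch, with illustrative names and numbers rather than the authors' implementation: the same target is stored and updated in an eye-centered and a body-centered map, each with its own updating noise, and the two estimates are fused weighted by their inverse variances.

```python
def combine(est_eye, var_eye, est_body, var_body):
    """Reliability-weighted fusion of two spatial estimates."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    est = w_eye * est_eye + (1.0 - w_eye) * est_body
    var = 1.0 / (1.0 / var_eye + 1.0 / var_body)  # fused estimate is more precise
    return est, var

# Example: after a sideways translation the eye-centered estimate is noisier
# (it had to be remapped) than the body-centered one.
print(combine(est_eye=1.2, var_eye=4.0, est_body=0.9, var_body=2.0))
```

Note that the fused variance is lower than either input variance, which matches the claim that combining both maps beats any single-frame updating mechanism.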
Affiliation(s)
- J. J. Tramper
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W. P. Medendorp
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
11. Hadjidimitrakis K, Dal Bo' G, Breveglieri R, Galletti C, Fattori P. Overlapping representations for reach depth and direction in caudal superior parietal lobule of macaques. J Neurophysiol 2015;114:2340-2352. PMID: 26269557; DOI: 10.1152/jn.00486.2015.
Abstract
Reaching movements in the real world typically have a direction and a depth component. Despite numerous behavioral studies, there is no consensus on whether reach coordinates are processed in separate or common visuomotor channels. Furthermore, the neural substrates of reach depth in parietal cortex have been ignored in most neurophysiological studies. In the medial posterior parietal area V6A, we recently demonstrated the strong presence of depth signals and the extensive convergence of depth and direction information on single neurons during all phases of a fixate-to-reach task in 3-dimensional (3D) space. Using the same task, in the present work we examined the processing of direction and depth information in area PEc of the caudal superior parietal lobule (SPL) in three Macaca fascicularis monkeys. Across the task, depth and direction had a similarly high incidence of modulatory effects. The effect of direction was stronger than that of depth during the initial fixation period. As the task progressed toward arm movement execution, depth tuning became more prominent than directional tuning, and the number of cells modulated by both depth and direction increased significantly. Neurons tuned by depth showed a small bias for far peripersonal space. Cells with directional modulations were more frequently tuned toward contralateral spatial locations, but ipsilateral space was also represented. These findings, combined with results from the neighboring areas V6A and PE, support a rostral-to-caudal gradient of overlapping representations for reach depth and direction in SPL. They also support a progressive change from visuospatial (vergence angle) to somatomotor representations of 3D space in SPL.
Affiliation(s)
- Kostas Hadjidimitrakis
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy; Department of Physiology, Monash University, Clayton, Victoria, Australia
- Giulia Dal Bo'
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Rossella Breveglieri
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
12. Mackrous I, Simoneau M. Improving spatial updating accuracy in absence of external feedback. Neuroscience 2015;300:155-162. PMID: 25987200; DOI: 10.1016/j.neuroscience.2015.05.024.
Abstract
Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the use of external feedback. According to perceptual learning models, however, improvement in performance can also occur without external feedback. The aim of this study was to assess improvement in spatial updating in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from the participant's midline. (2) Practice: the target was located 45° in the CCW direction from the participant's midline; one group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction, to evaluate whether improvement in performance during practice generalized to another target eccentricity. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW; this condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p = 0.004). Nonetheless, larger improvements occurred when external feedback was provided (ps = 0.002). During T(30)CCW, performance remained better for the feedback group than for the no-feedback group (p = 0.005). However, no group difference was observed for the untrained direction (p = 0.22). We demonstrated that spatial updating improved without external feedback, but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning.
Affiliation(s)
- I. Mackrous
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
- M. Simoneau
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
13. Continuous updating of visuospatial memory in superior colliculus during slow eye movements. Curr Biol 2015;25:267-274. PMID: 25601549; DOI: 10.1016/j.cub.2014.11.064.
Abstract
BACKGROUND: Primates can remember and spatially update the visual direction of previously viewed objects during various types of self-motion. It is known that the brain "remaps" visual memory traces relative to gaze just before and after, but not during, discrete gaze shifts called saccades. However, it is not known how visual memory is updated during slow, continuous motion of the eyes.
RESULTS: Here, we recorded the midbrain superior colliculus (SC) of two rhesus monkeys that were trained to spatially update the location of a saccade target across an intervening smooth pursuit (SP) eye movement. Saccade target location was varied across trials so that it passed through the neuron's receptive field at different points of the SP trajectory. Nearly all (99% of) visually responsive neurons, but no motor neurons, showed a transient memory response that continuously updated the saccade goal during SP. These responses were gaze-centered (i.e., shifting across the SC's retinotopic map in opposition to gaze). Furthermore, this response was strongly enhanced by attention and/or saccade target selection.
CONCLUSIONS: This is the first demonstration of continuous updating of visual memory responses during eye motion. We expect that this would generalize to other visuomotor structures when gaze shifts in a continuous, unpredictable fashion.
14. Gutteling TP, Selen LPJ, Medendorp WP. Parallax-sensitive remapping of visual space in occipito-parietal alpha-band activity during whole-body motion. J Neurophysiol 2014;113:1574-1584. PMID: 25505108; DOI: 10.1152/jn.00477.2014.
Abstract
Despite the constantly changing retinal image due to eye, head, and body movements, we are able to maintain a stable representation of the visual environment. Various studies on retinal image shifts caused by saccades have suggested that occipital and parietal areas correct for these perturbations by a gaze-centered remapping of the neural image. However, such a uniform, rotational remapping mechanism cannot work during translations, when objects shift on the retina in a more complex, depth-dependent fashion due to motion parallax. Here we tested whether the brain's activity patterns show parallax-sensitive remapping of remembered visual space during whole-body motion. Under continuous recording of electroencephalography (EEG), we passively translated human subjects while they had to remember the location of a world-fixed visual target, briefly presented in front of or behind the eyes' fixation point prior to the motion. Using a psychometric approach, we assessed the quality of the memory update, which had to be made on the basis of vestibular feedback and other extraretinal motion cues. All subjects showed a variable amount of parallax-sensitive updating errors, i.e., the direction of the errors depended on the depth of the target relative to fixation. The EEG recordings show a neural correlate of this parallax-sensitive remapping in the alpha-band power at occipito-parietal electrodes. At parietal electrodes, the strength of these alpha-band modulations correlated significantly with updating performance. These results suggest that alpha-band oscillatory activity reflects the time-varying updating of gaze-centered spatial information during parallax-sensitive remapping across whole-body motion.
Affiliation(s)
- T. P. Gutteling
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- L. P. J. Selen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W. P. Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
15. Ohnuma T, Hashidate H, Yoshimatsu T, Abe T. Clinical usefulness of indoor life-space assessment in community-dwelling older adults certified as needing support or care. Nihon Ronen Igakkai Zasshi 2014;51:151-160. PMID: 24858119; DOI: 10.3143/geriatrics.51.151.
Abstract
PURPOSE: This study aimed to develop a questionnaire to evaluate indoor life-space mobility and to assess its validity in community-dwelling older adults certified as needing support or care.
METHODS: The participants included 37 community-dwelling older adults undergoing home-visit rehabilitation (mean age: 78.5 ± 7.0 years). We developed a questionnaire to assess the degree of indoor life-space mobility, the home-based life-space assessment (Hb-LSA), and evaluated functional status (life-space assessment (LSA), time spent away from bed, functional independence measure (FIM), bedside mobility scale (BMS)), physical function (hand grip power (HGP), 30-second chair stand (CS-30), one-leg standing (OLS)) and cognitive status (mental status questionnaire (MSQ)).
RESULTS: The average Hb-LSA score was 56.3 ± 24.3 (minimum 4, maximum 102.5). The test-retest reliability was high (intraclass correlation coefficients: ICC(1,1) = 0.986, ICC(1,2) = 0.993). The Hb-LSA scores were significantly associated with the LSA (r = 0.897), time spent away from bed (r = 0.497), FIM (r = 0.786), BMS (r = 0.720), HGP (r = 0.388), CS-30 (r = 0.541) and OLS (r = 0.455). There were no significant associations between the Hb-LSA scores and the FIM cognitive subscores (r = 0.180) or MSQ scores (r = -0.240). The Hb-LSA scores were significantly higher among the participants able to move independently indoors (75.8 ± 18.8 points) than in those who required help to move (45.7 ± 20.2 points).
CONCLUSIONS: The Hb-LSA is a useful, reliable and valid tool for assessing the degree of indoor physical mobility in the life-space. The Hb-LSA score is related to the degree of independence of indoor mobility.
16. Mackrous I, Simoneau M. Generalization of vestibular learning to earth-fixed targets is possible but limited when the polarity of afferent vestibular information is changed. Neuroscience 2014;260:12-22. DOI: 10.1016/j.neuroscience.2013.12.002.
17. Harley CM, Rossi M, Cienfuegos J, Wagenaar D. Discontinuous locomotion and prey sensing in the leech. J Exp Biol 2013;216:1890-1897. PMID: 23785108; DOI: 10.1242/jeb.075911.
Abstract
The medicinal leech, Hirudo verbana, is an aquatic predator that uses water waves to locate its prey. However, to reach their prey, leeches must move within the same water that they are using to sense it. This requires that they either move ballistically toward a pre-determined prey location or account for their self-movement and continually track the prey. We found that leeches do not localize prey ballistically; instead, they require continual sensory information to track their prey. Indeed, in the event that the prey moves, leeches will approach the prey's new location. While leeches need to continually sense water disturbances to update their percept of prey location, their own behavior is discontinuous: prey localization involves switching between swimming, crawling and non-locomoting phases. Each of these behaviors may allow for different sensory capabilities and may require different sensory filters. Here, we examined the sensory capabilities of leeches during each of these behaviors. We found that, while one might expect the non-locomoting phases to direct subsequent behaviors, crawling phases were more effective than non-locomotor phases for providing direction. During crawling bouts, leeches adjusted their heading so as to become more directed toward the stimulus. This was not observed during swimming. Furthermore, in the presence of prey-like stimuli, leeches crawled more often and for longer periods of time.
Affiliation(s)
- Cynthia M. Harley
- California Institute of Technology, Division of Biology, Pasadena, CA 91125, USA
18. Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011;34:309-331. PMID: 21456958; DOI: 10.1146/annurev-neuro-061010-113749.
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J. Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada M3J 1P3
19. Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011;366:476-491. PMID: 21242137; DOI: 10.1098/rstb.2010.0089.
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W. Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands
20. Jones SAH, Henriques DYP. Memory for proprioceptive and multisensory targets is partially coded relative to gaze. Neuropsychologia 2010;48:3782-3792. PMID: 20934442; DOI: 10.1016/j.neuropsychologia.2010.10.001.
Abstract
We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand at one of three target locations (left of body midline, at body midline, or right of body midline), while it remained at the target site (online), after it was removed from this location (remembered), or after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to a target location along a robot-generated path. Reaches were made with the right hand in complete darkness, while gaze was varied across four eccentric directions. Horizontal reach errors varied systematically relative to gaze for all target modalities; not only for visually remembered and online proprioceptive targets, as found in previous studies, but, for the first time, also for remembered proprioceptive targets and proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets, relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm.
21. Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Res 2010;50:2661-2670. PMID: 20816887; DOI: 10.1016/j.visres.2010.08.038.
Abstract
Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduces RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action.
22. Byrne PA, Crawford JD. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. J Neurophysiol 2010;103:3054-3069. DOI: 10.1152/jn.01008.2009.
Abstract
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans, we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies, we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), in which the reliability of both cues determines their relative weighting. To predict how these factors might interact, we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities and on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable-amplitude vibration of the landmarks and variable-amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest that heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
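A sketch of this weighting scheme follows. The parameter names and the multiplicative form of the stability heuristic are assumptions for illustration, not the authors' exact model: the allocentric weight is first set by relative reliability (inverse variance), then scaled down when the landmarks appear unstable.

```python
def allocentric_weight(var_ego, var_allo, stability=1.0):
    """MLE reliability weighting with a multiplicative stability heuristic.

    stability: 1.0 for stable landmarks, < 1.0 for vibrating/unstable ones.
    """
    w_mle = (1.0 / var_allo) / (1.0 / var_allo + 1.0 / var_ego)
    return stability * w_mle

def reach_endpoint(ego_est, allo_est, var_ego, var_allo, stability=1.0):
    """Predicted reach as a weighted mix of the two cue-defined estimates."""
    w = allocentric_weight(var_ego, var_allo, stability)
    return w * allo_est + (1.0 - w) * ego_est

# Cue-conflict trial: the landmark-defined estimate is shifted by 2 cm.
# Equal reliabilities, but vibrating landmarks cut the allocentric weight.
print(reach_endpoint(ego_est=0.0, allo_est=2.0,
                     var_ego=1.0, var_allo=1.0, stability=0.6))
```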
Affiliation(s)
- Patrick A. Byrne
- Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada
- J. Douglas Crawford
- Centre for Vision Research, Canadian Action and Perception Network, and Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
23. Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron 2009;64:448-461. PMID: 19945388; DOI: 10.1016/j.neuron.2009.11.010.
Abstract
The vestibular system helps maintain equilibrium and clear vision through reflexes, but it also contributes to spatial perception. In recent years, research in the vestibular field has expanded to higher-level processing involving the cortex. Vestibular contributions to spatial cognition have been difficult to study because the circuits involved are inherently multisensory. Computational methods and the application of Bayes' theorem are used to form hypotheses about how information from different sensory modalities is combined, together with expectations based on past experience, to obtain optimal estimates of cognitive variables such as current spatial orientation. To test these hypotheses, neuronal populations are being recorded during active tasks in which subjects make decisions based on vestibular and visual or somatosensory information. This review highlights what is currently known about the role of vestibular information in these processes, the computations necessary to obtain the appropriate signals, and the benefits that have emerged thus far.
Affiliation(s)
- Dora E. Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
24. Keith GP, Blohm G, Crawford JD. Influence of saccade efference copy on the spatiotemporal properties of remapping: a neural network study. J Neurophysiol 2009;103:117-139. PMID: 19846615; DOI: 10.1152/jn.91191.2008.
Abstract
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence the remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and the backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade target in two-dimensional (2D) topographic coordinates (Vtop), a temporally extended motor burst in 2D topographic coordinates (Mtop), or a 3D eye-velocity signal in brainstem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping, with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden-layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both the Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and the final dynamics of remapping. Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating, and that these should be further distinguishable through currently available single- and multiunit recording paradigms.
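The input/output relation the network above is trained to perform can be illustrated without the network itself. This toy sketch (not the authors' model; grid, widths and names are assumptions) encodes a remembered target as a Gaussian "hill" of activity on a retinotopic map and shifts the hill opposite to the eye displacement so the map stays gaze-centered.

```python
import numpy as np

positions = np.linspace(-40.0, 40.0, 81)   # map locations in degrees

def hill(center, sigma=3.0):
    """Population activity encoding a target at `center` (deg)."""
    return np.exp(-0.5 * ((positions - center) / sigma) ** 2)

def remap(center, saccade):
    """Update the stored target location across a saccade."""
    return hill(center - saccade)  # target shifts opposite to the gaze shift

before = hill(10.0)            # target 10 deg right of fixation
after = remap(10.0, 15.0)      # after a 15-deg rightward saccade
print(positions[np.argmax(before)], positions[np.argmax(after)])  # 10.0 -5.0
```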
Affiliation(s)
- Gerald P. Keith
- York Centre for Vision Research and Canadian Institute of Health Research Group, York University, 4700 Keele St., Toronto, Ontario, Canada
25. Tudusciuc O, Nieder A. Contributions of primate prefrontal and posterior parietal cortices to length and numerosity representation. J Neurophysiol 2009;101:2984-2994. PMID: 19321641; DOI: 10.1152/jn.90713.2008.
Abstract
The ability to understand and manipulate quantities ensures the survival of animals and humans alike. The frontoparietal network in primates has been implicated in representing, along with other cognitive abilities, abstract quantity. The respective roles of the prefrontal and parietal areas and the way continuous quantities, as opposed to discrete ones, are represented in this network, however, are unknown. We investigated this issue by simultaneously analyzing recorded single-unit activity in the prefrontal cortex (PFC) and the fundus of the intraparietal sulcus (IPS) of two macaque monkeys while they were engaged in delayed match-to-sample tasks discriminating line length and numerosity. In both areas, we found anatomically intermingled neurons encoding either length, numerosity, or both types of quantities. Even though different sets of neurons coded these quantities, the representation of length and numerosity was similar within the IPS and PFC. Both length and numerosity were coded by tuning functions peaking at the preferred quantity, thus supporting a labeled-line code for continuous and discrete quantity. A comparison of the response characteristics between parietal and frontal areas revealed a larger proportion of IPS neurons representing each quantity type in the early sample phase, in addition to shorter response latencies to quantity for IPS neurons. Moreover, IPS neurons discriminated quantities during the sample phase better than PFC neurons, as quantified by the receiver operating characteristic area. In the memory period, the discharge properties of PFC and IPS neurons were comparable. These single-cell results are in good agreement with functional imaging data from humans and support the notion that representations of continuous and discrete quantities share a frontoparietal substrate, with IPS neurons constituting the putative entry stage of the processing hierarchy.
Affiliation(s)
- Oana Tudusciuc
- Department of Animal Physiology, Institute of Zoology, University of Tübingen, 72076 Tübingen, Germany
26. Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009;180:171-184. PMID: 19427544; DOI: 10.1016/j.jneumeth.2009.03.004.
Abstract
Natural movements toward a target show metric variations between trials. When movements combine contributions from multiple body parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping the sensory or motor response fields of neurons and determining their intrinsic reference frames, in which these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically, using different kernel bandwidths, in different reference frames. We measured how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in the other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response-field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
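The PRESS criterion lends itself to a compact sketch. The implementation below is a hedged approximation under stated assumptions (Nadaraya-Watson kernel regression with a Gaussian kernel; the authors' fitting details may differ): score each candidate reference frame by the squared prediction error on each trial when that trial is left out of the fit, and prefer the frame with the smallest total.

```python
import numpy as np

def press(coords, rates, bandwidth):
    """Leave-one-out PREdictive Sum-of-Squares for a kernel regression fit.

    coords: (n, d) target positions expressed in one candidate frame
    rates:  (n,)   firing rates on each trial
    """
    total = 0.0
    for i in range(len(rates)):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        w[i] = 0.0                              # leave trial i out of its own fit
        pred = np.sum(w * rates) / np.sum(w)    # Nadaraya-Watson prediction
        total += (rates[i] - pred) ** 2
    return total

# Synthetic check: a response field that is truly eye-centered should yield a
# smaller PRESS in eye coordinates than in head coordinates.
rng = np.random.default_rng(1)
eye = rng.uniform(-30, 30, size=(200, 2))           # targets in eye coordinates
head = eye + rng.uniform(-10, 10, size=(200, 2))    # same targets, head frame
rates = np.exp(-0.5 * np.sum((eye - 5.0) ** 2, axis=1) / 8.0 ** 2)
print(press(eye, rates, 8.0), press(head, rates, 8.0))  # eye frame wins
```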
Affiliation(s)
- Gerald P. Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
27. Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008;156:801-818. PMID: 18786618; DOI: 10.1016/j.neuroscience.2008.07.079.
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
28
Medendorp WP, Beurze SM, Van Pelt S, Van Der Werf J. Behavioral and cortical mechanisms for spatial coding and action planning. Cortex 2008; 44:587-97. [DOI: 10.1016/j.cortex.2007.06.001] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2007] [Revised: 06/04/2007] [Accepted: 06/26/2007] [Indexed: 11/29/2022]
29
Van Pelt S, Medendorp WP. Updating Target Distance Across Eye Movements in Depth. J Neurophysiol 2008; 99:2281-90. [DOI: 10.1152/jn.01281.2007] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.
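The contrast between the two schemes can be made concrete with simple viewing geometry. A minimal sketch, assuming a hypothetical 6.5 cm interocular distance and illustrative depths: the retinal model stores depth as disparity relative to fixation, which misestimates egocentric depth after a vergence shift unless actively updated, whereas an egocentric distance stored at presentation is unaffected.

    # Retinal vs. nonretinal depth coding across a vergence shift.
    # Interocular distance and all depths are hypothetical illustrations.
    import numpy as np

    IOD = 0.065  # interocular distance (m), assumed

    def vergence(d):             # vergence angle (rad) to fixate depth d
        return 2 * np.arctan(IOD / (2 * d))

    def depth_from_vergence(v):  # depth implied by a vergence angle
        return IOD / (2 * np.tan(v / 2))

    d_target, d_fix1, d_fix2 = 0.40, 0.50, 0.30   # metres

    # Retinal model: target stored as disparity relative to fixation.
    rel = vergence(d_target) - vergence(d_fix1)

    # Without updating, the stored disparity is read against the NEW fixation
    # plane and the implied egocentric depth is wrong:
    stale = depth_from_vergence(vergence(d_fix2) + rel)
    print(f"retinal model, no update: {stale:.3f} m (true 0.400 m)")

    # Nonretinal model: disparity and vergence were combined at presentation
    # into an egocentric distance, unaffected by the later vergence shift.
    ego = depth_from_vergence(vergence(d_fix1) + rel)
    print(f"egocentric (nonretinal) model: {ego:.3f} m")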
30
Klier EM, Hess BJM, Angelaki DE. Human visuospatial updating after passive translations in three-dimensional space. J Neurophysiol 2008; 99:1799-809. [PMID: 18256164 DOI: 10.1152/jn.01091.2007] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 ± 0.28 (mean ± SD) as compared with 0.51 ± 0.33 for downward and 1.05 ± 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 ± 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
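The updating ratio reported here has a direct geometric reading: it is the observed re-fixation component divided by the component the target geometry requires. A small sketch with hypothetical numbers:

    # Updating ratio: observed version change / geometrically required change
    # after a lateral whole-body translation. Numbers are hypothetical.
    import numpy as np

    def required_version(target_depth, translation):
        """Version change (rad) needed to re-foveate a target initially
        straight ahead at target_depth after a lateral translation."""
        return np.arctan2(translation, target_depth)

    depth, shift = 0.33, 0.10        # target 33 cm ahead; 10 cm lateral move
    required = required_version(depth, shift)
    observed = 0.84 * required       # e.g., the mean lateral ratio above
    print(f"required version: {np.degrees(required):.1f} deg")
    print(f"updating ratio:   {observed / required:.2f} (0 = none, 1 = perfect)")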
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., St. Louis, MO 63110, USA.
31
Klier EM, Angelaki DE, Hess BJM. Human visuospatial updating after noncommutative rotations. J Neurophysiol 2007; 98:537-44. [PMID: 17442766 DOI: 10.1152/jn.01229.2006] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
As we move our bodies in space, we often undergo head and body rotations about different axes: yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
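Rotation matrices make the noncommutativity concrete: composing a yaw and a roll in opposite orders leaves the head in different orientations, so the eye movement needed to reacquire a space-fixed target differs between sequences. A sketch with hypothetical 45° rotations composed in space-fixed coordinates:

    # Rotations do not commute: yaw-then-roll vs. roll-then-yaw.
    # Angles and the target direction are hypothetical.
    import numpy as np

    def Rz(a):  # yaw, about the vertical (z) axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def Rx(a):  # roll, about the naso-occipital (x) axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    a = np.radians(45)
    target = np.array([1.0, 0.3, 0.2])   # space-fixed target direction

    # Head-fixed view of the target after each sequence (head rotation R;
    # the target seen from the head is R.T @ target):
    print((Rx(a) @ Rz(a)).T @ target)    # yaw first, then roll
    print((Rz(a) @ Rx(a)).T @ target)    # roll first, then yaw: different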
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
32
Van Pelt S, Medendorp WP. Gaze-Centered Updating of Remembered Visual Space During Active Whole-Body Translations. J Neurophysiol 2007; 97:1209-20. [PMID: 17135474 DOI: 10.1152/jn.00882.2006] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes’ fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and stereoscopic depth and direction information in spatial updating during self-motion.
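The parallax geometry at issue is easy to state: with fixation held on a world-fixed point, a lateral eye translation shifts targets nearer and farther than fixation in opposite retinal directions. A sketch with hypothetical distances:

    # Retinal shift of remembered targets across a 10 cm lateral translation,
    # with fixation maintained on a world-fixed point. Distances hypothetical.
    import numpy as np

    def retinal_direction(target, eye_x, fix):
        """Angle of target (x, depth) relative to the gaze line from an eye
        at (eye_x, 0) through the fixation point; positive = rightward."""
        gaze = np.arctan2(fix[0] - eye_x, fix[1])
        return np.arctan2(target[0] - eye_x, target[1]) - gaze

    fix = (0.0, 0.50)                    # fixation 50 cm ahead
    for name, tgt in (("near (30 cm)", (0.0, 0.30)),
                      ("far (80 cm)", (0.0, 0.80))):
        shift = (retinal_direction(tgt, 0.10, fix)
                 - retinal_direction(tgt, 0.0, fix))
        print(f"{name}: retinal shift {np.degrees(shift):+.1f} deg")

The two targets shift with opposite signs, which is exactly the depth-dependent error pattern the reaches exhibited.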
Affiliation(s)
- Stan Van Pelt
- Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, NL-6500 HE Nijmegen, The Netherlands.
33
Niemeier M, Crawford JD, Tweed DB. Optimal inference explains dimension-specific contractions of spatial perception. Exp Brain Res 2006; 179:313-23. [PMID: 17131113 DOI: 10.1007/s00221-006-0788-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2006] [Accepted: 10/31/2006] [Indexed: 11/24/2022]
Abstract
It is known that people misperceive scenes they see during rapid eye movements called saccades. It has been suggested that some of these misperceptions could be an artifact of neurophysiological processes related to the internal remapping of spatial coordinates during saccades. Alternatively, we have recently suggested, based on a computational model, that transsaccadic misperceptions result from optimal inference. As one of the properties of the model, sudden object displacements that occur in sync with a saccade should be perceived as contracted in a non-linear fashion. To explore this model property, here we use computer simulations and psychophysical methods first to test how robust the model is to close-to-optimal approximations and second to test two model predictions: (a) contracted transsaccadic perception should be dimension-specific with more contraction for jumps parallel to the saccade than orthogonal to it, and (b) contraction should rise as a function of visuomotor noise. Our results are consistent with these predictions. They support the idea that human transsaccadic integration is governed by close-to-optimal inference.
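The contraction the model predicts can be caricatured in one dimension: under a prior that objects stay put across a saccade, the optimal (Bayesian) estimate of an intrasaccadic jump shrinks toward zero, and the shrinkage grows with visuomotor noise. This sketch uses hypothetical Gaussian parameters and is not the authors' full model.

    # 1-D Bayesian shrinkage: perceived jump = reliability-weighted measurement.
    # Parameters are hypothetical; this caricatures, not reproduces, the model.
    true_jump = 2.0      # actual displacement (deg), parallel to the saccade
    prior_sd = 0.5       # prior: displacements near zero are most likely

    for noise_sd in (0.25, 0.5, 1.0, 2.0):
        w = prior_sd**2 / (prior_sd**2 + noise_sd**2)   # posterior weight
        print(f"noise {noise_sd:4.2f} deg -> perceived {w * true_jump:4.2f} deg "
              f"(contraction {(1 - w) * 100:3.0f}%)")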
Affiliation(s)
- Matthias Niemeier
- Centre for Computational Cognitive Neuroscience, Department of Life Sciences, University of Toronto at Scarborough, 1265 Military Trail, M1C 1A4, Toronto, Canada.
34
Keith GP, Smith MA, Crawford JD. Functional organization within a neural network trained to update target representations across 3-D saccades. J Comput Neurosci 2006; 22:191-209. [PMID: 17120151 DOI: 10.1007/s10827-006-0007-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2005] [Revised: 08/18/2006] [Accepted: 08/21/2006] [Indexed: 10/24/2022]
Abstract
The goal of this study was to understand how neural networks solve the 3-D aspects of updating in the double-saccade task, where subjects make sequential saccades to the remembered locations of two targets. We trained a 3-layer, feed-forward neural network, using back-propagation, to calculate the 3-D motor error of the second saccade. Network inputs were a 2-D topographic map of the direction of the second target in retinal coordinates, and 3-D vector representations of initial eye orientation and motor error of the first saccade in head-fixed coordinates. The network learned to account for all 3-D aspects of updating. Hidden-layer units (HLUs) showed retinal-coordinate visual receptive fields that were remapped across the first saccade. Two classes of HLUs emerged from the training, one class primarily implementing the linear aspects of updating using vector subtraction, the second class implementing the eye-orientation-dependent, non-linear aspects of updating. These mechanisms interacted at the unit level through gain-field-like input summations, and through the parallel "tweaking" of optimally-tuned HLU contributions to the output that shifted the overall population output vector to the correct second-saccade motor error. These observations may provide clues for the biological implementation of updating.
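The linear core the first class of units implements is plain vector subtraction; the second class supplies the eye-orientation-dependent correction. A minimal sketch, with a single rotation standing in for that correction and all vectors hypothetical:

    # Updating the second target across the first saccade: linear vector
    # subtraction plus an orientation-dependent remapping (here one rotation
    # stands in for it). All values are hypothetical.
    import numpy as np

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    target2_retinal = np.array([12.0, 5.0, 0.0])  # deg, seen before saccade 1
    saccade1 = np.array([8.0, -3.0, 0.0])         # first-saccade motor error

    linear_update = target2_retinal - saccade1    # vector-subtraction part
    corrected = rot_z(np.radians(10)) @ linear_update  # orientation correction
    print(linear_update, corrected)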
Affiliation(s)
- Gerald P Keith
- Department of Psychology, Centre for Vision Research and Canadian Institute of Health Research Group, York University, 4700 Keele Street, Toronto, Ontario, Canada
35
Wei M, Li N, Newlands SD, Dickman JD, Angelaki DE. Deficits and Recovery in Visuospatial Memory During Head Motion After Bilateral Labyrinthine Lesion. J Neurophysiol 2006; 96:1676-82. [PMID: 16760354 DOI: 10.1152/jn.00012.2006] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
Affiliation(s)
- Min Wei
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
36
Li N, Angelaki DE. Updating visual space during motion in depth. Neuron 2005; 48:149-58. [PMID: 16202715 DOI: 10.1016/j.neuron.2005.08.021] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2004] [Revised: 04/04/2005] [Accepted: 08/15/2005] [Indexed: 10/25/2022]
Abstract
Whether we are riding in a car or walking, our internal map of the environment must be continuously updated to maintain spatial constancy. Using a memory eye movement task, we examined whether nonhuman primates can keep track of changes in the distance of nearby objects when moved toward or away from them. We report that memory-guided eye movements take into account the change in distance traveled, illustrating that monkeys can update retinal disparity information in order to reconstruct three-dimensional visual space during motion in depth. This ability was compromised after destruction of the vestibular labyrinths, suggesting that the extraretinal signals needed for updating can arise from vestibular information signaling self-motion through space.
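The geometry behind this vergence updating is compact: moving toward a remembered target shortens its distance, so re-fixating it demands more convergence. A sketch assuming a hypothetical 3.5 cm (monkey) interocular distance:

    # Required vergence before and after forward self-motion toward a
    # remembered target. Interocular distance and depths are hypothetical.
    import numpy as np

    IOD = 0.035  # assumed monkey interocular distance (m)

    def vergence_deg(d):
        return np.degrees(2 * np.arctan(IOD / (2 * d)))

    target, travel = 0.40, 0.15     # target 40 cm ahead; move 15 cm toward it
    print(f"vergence before move: {vergence_deg(target):.1f} deg")
    print(f"required after move:  {vergence_deg(target - travel):.1f} deg")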
Affiliation(s)
- Nuo Li
- Department of Neurobiology and Biomedical Engineering, Washington University School of Medicine, St. Louis, Missouri 63110, USA
37
Vingerhoets RAA, Medendorp WP, Van Gisbergen JAM. Time course and magnitude of illusory translation perception during off-vertical axis rotation. J Neurophysiol 2005; 95:1571-87. [PMID: 16319215 DOI: 10.1152/jn.00613.2005] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Human spatial orientation relies on vision, somatosensory cues, and signals from the semicircular canals and the otoliths. The canals measure rotation, whereas the otoliths are linear accelerometers, sensitive to tilt and translation. To disambiguate the otolith signal, two main hypotheses have been proposed: frequency segregation and canal-otolith interaction. So far these models were based mainly on oculomotor behavior. In this study we investigated their applicability to human self-motion perception. Six subjects were rotated in yaw about an off-vertical axis (OVAR) at various speeds and tilt angles, in darkness. During the rotation, subjects indicated at regular intervals whether a briefly presented dot moved faster or slower than their perceived self-motion. Based on such responses, we determined the time course of the self-motion percept and characterized its steady state by a psychometric function. The psychophysical results were consistent with anecdotal reports. All subjects initially sensed rotation, but then gradually developed a percept of being translated along a cone. The rotation percept could be described by a decaying exponential with a time constant of about 20 s. Translation percept magnitude typically followed a delayed increasing exponential with delays up to 50 s and a time constant of about 15 s. The asymptotic magnitude of perceived translation increased with rotation speed and tilt angle, but never exceeded 14 cm/s. These results were most consistent with predictions of the canal-otolith-interaction model, but required parameter values that differed from the original proposal. We conclude that canal-otolith interaction is an important governing principle for self-motion perception that can be deployed flexibly, dependent on stimulus conditions.
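The two percept time courses can be written out directly from the description above: an exponentially decaying rotation percept and a delayed, exponentially rising translation percept. The time constants follow the approximate values quoted; the amplitude and delay chosen here are hypothetical values within the reported ranges.

    # Rotation and translation percepts during OVAR, as simple exponentials.
    # Tau values follow the abstract; amplitude and delay are illustrative.
    import numpy as np

    t = np.arange(0.0, 120.0, 1.0)                 # seconds of rotation
    rotation = 60.0 * np.exp(-t / 20.0)            # deg/s, tau ~ 20 s
    delay, tau_tr, asymptote = 50.0, 15.0, 14.0    # s, s, cm/s
    translation = np.where(
        t < delay, 0.0, asymptote * (1.0 - np.exp(-(t - delay) / tau_tr)))

    for ti in (0, 20, 50, 80, 110):
        print(f"t={ti:3d} s  rotation {rotation[ti]:5.1f} deg/s  "
              f"translation {translation[ti]:5.1f} cm/s")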
Affiliation(s)
- R A A Vingerhoets
- Department of Biophysics, Radboud University Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands.
38
Van Pelt S, Van Gisbergen JAM, Medendorp WP. Visuospatial Memory Computations During Whole-Body Rotations in Roll. J Neurophysiol 2005; 94:1432-42. [PMID: 15857971 DOI: 10.1152/jn.00018.2005] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angles ranged between −120° CCW and 120° CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90°). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
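The logic of the regression can be sketched as two competing error predictions: an egocentric account in which saccade error scales with the intervening rotation, and an allocentric account in which it tracks the subjective distortion of gravity at the initial and final tilts. The distortion function and coefficients below are hypothetical stand-ins for the measured data.

    # Two candidate predictors of memory-saccade error after a roll tilt.
    # The distortion function and coefficients are hypothetical stand-ins.
    import numpy as np

    def subjective_distortion(tilt_deg):
        # hypothetical systematic error when indicating gravity while tilted
        return 15.0 * np.sin(np.radians(tilt_deg))

    initial, final = 90.0, -30.0          # body tilt (deg) before and after

    egocentric = 0.1 * abs(final - initial)   # error grows with rotation size
    allocentric = subjective_distortion(initial) + subjective_distortion(final)
    print(f"egocentric prediction:  {egocentric:.1f} deg")
    print(f"allocentric prediction: {allocentric:.1f} deg")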
Affiliation(s)
- S Van Pelt
- Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, The Netherlands.
39
Li N, Wei M, Angelaki DE. Primate memory saccade amplitude after intervened motion depends on target distance. J Neurophysiol 2005; 94:722-33. [PMID: 15788513 DOI: 10.1152/jn.01339.2004] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To keep a stable internal representation of the visual world as our eyes, head, and body move around, humans and monkeys must continuously adjust neural maps of visual space using extraretinal sensory or motor cues. When such movements include translation, the amount of body displacement must be weighted differently in the updating of far versus near targets. Using a memory-saccade task, we have investigated whether nonhuman primates can benefit from this geometry when passively moved sideways. We report that monkeys made appropriate memory saccades, taking into account not only the amplitude and nature (rotation vs. translation) of the movement, but also the distance of the memorized target: i.e., the amplitude of memory saccades was larger for near versus far targets. The scaling by viewing distance, however, was less than geometrically required, such that memory saccades consistently undershot near targets. Such a less-than-ideal scaling of memory saccades is reminiscent of the viewing distance-dependent properties of the vestibuloocular reflex. We propose that a similar viewing distance-dependent vestibular signal is used as an extraretinal compensation for the visuomotor consequences of the geometry of motion parallax by scaling both memory saccades and reflexive eye movements during motion through space.
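The required scaling follows from translation geometry: a fixed lateral displacement subtends a larger angle at near targets than at far ones. A sketch with hypothetical values, including an arbitrary 0.7 factor to illustrate the less-than-geometric scaling reported:

    # Ideal memory-saccade amplitude after a 10 cm lateral translation, as a
    # function of target distance; 0.7 is an arbitrary undershoot illustration.
    import numpy as np

    translation = 0.10                    # 10 cm sideways
    for depth in (0.2, 0.4, 0.8, 1.6):    # target distance (m)
        ideal = np.degrees(np.arctan2(translation, depth))
        print(f"target at {depth:.1f} m: ideal {ideal:5.1f} deg, "
              f"undershooting response ~{0.7 * ideal:5.1f} deg")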
Affiliation(s)
- Nuo Li
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, Missouri 63110, USA
40
Medendorp WP, Goltz HC, Crawford JD, Vilis T. Integration of Target and Effector Information in Human Posterior Parietal Cortex for the Planning of Action. J Neurophysiol 2005; 93:954-62. [PMID: 15356184 DOI: 10.1152/jn.00725.2004] [Citation(s) in RCA: 153] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Recently, using event-related functional MRI (fMRI), we located a bilateral region in the human posterior parietal cortex (retIPS) that topographically represents and updates targets for saccades and pointing movements in eye-centered coordinates. To generate movements, this spatial information must be integrated with the selected effector. We now tested whether the activation in retIPS is dependent on the hand selected. Using 4-T fMRI, we compared the activation produced by movements, using either eyes or the left or right hand, to targets presented either leftward or rightward of central fixation. The majority of the regions activated during saccades were also activated during pointing movements, including occipital, posterior parietal, and premotor cortex. The topographic retIPS region was activated more strongly for saccades than for pointing. The activation associated with pointing was significantly greater when pointing with the unseen hand to targets ipsilateral to the hand. For example, although there was activation in the left retIPS when pointing to targets on the right with the left hand, the activation was significantly greater when using the right hand. The mirror symmetric effect was observed in the right retIPS. Similar hand preferences were observed in a nearby anterior occipital region. This effector specificity is consistent with previous clinical and behavioral studies showing that each hand is more effective in directing movements to targets in ipsilateral visual space. We conclude that not only do these regions code target location, but they also appear to integrate target selection with effector selection.
Affiliation(s)
- W Pieter Medendorp
- Nijmegen Institute for Cognition and Information and FC Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, Nijmegen, The Netherlands.
41
Schmid M, Schieppati M. Neck muscle fatigue and spatial orientation during stepping in place in humans. J Appl Physiol (1985) 2004; 99:141-53. [PMID: 15489256 DOI: 10.1152/japplphysiol.00494.2004] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Neck proprioceptive input, as elicited by muscle vibration, can produce destabilizing effects on stance and locomotion. Neck muscle fatigue produces destabilizing effects on stance, too. Our aim was to assess whether neck muscle fatigue can also perturb the orientation in space during a walking task. Direction and amplitude of the path covered during stepping in place were measured in 10 blindfolded subjects, who performed five 30-s stepping trials before and after a 5-min period of isometric dorsal neck muscle contraction against a load. Neck muscle electromyogram amplitude and median frequency during the head extensor effort were used to compute a fatigue index. Head and body kinematics were recorded by an optoelectronic system, and stepping cadence was measured by sensorized insoles. Before the contraction period, subjects normally stepped on the spot or drifted forward. After contraction, some subjects reproduced the same behavior, whereas others reduced their forward progression or even stepped backward. The former subjects showed minimal signs of fatigue and the latter ones marked signs of fatigue, as quantified by the dorsal neck electromyogram index. Head position and cadence were unaffected in either group of subjects. We argue that the abnormal fatigue-induced afferent input originating in the receptors transducing the neck muscle metabolic state can modulate the egocentric spatial reference frame. Notably, the effects of neck muscle fatigue on orientation are opposite to those produced by neck proprioception. The neck represents a complex source of inputs capable of modifying our orientation in space during a locomotor task.
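One plausible reading of the fatigue index described (an assumption, not the authors' exact formula) is the relative decline in EMG median frequency, since fatigued muscle shifts spectral power downward. A sketch with synthetic signals:

    # Median-frequency fatigue index from EMG spectra (synthetic signals;
    # the exact index formula here is an assumption, not the paper's).
    import numpy as np
    from scipy.signal import welch

    def median_frequency(emg, fs):
        f, pxx = welch(emg, fs=fs, nperseg=1024)
        cum = np.cumsum(pxx)
        return f[np.searchsorted(cum, cum[-1] / 2)]

    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)
    rng = np.random.default_rng(1)
    fresh = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.standard_normal(t.size)
    tired = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

    mf0, mf1 = median_frequency(fresh, fs), median_frequency(tired, fs)
    print(f"fatigue index (relative MF drop): {(mf0 - mf1) / mf0:.2f}")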
Affiliation(s)
- Micaela Schmid
- Human Movement Laboratory, Centro Studi Attività Motorie, Fondazione Salvatore Maugeri, Istituto Scientifico di Pavia, Via Ferrata 8, I-27100 Pavia, Italy
42
Crawford JD, Medendorp WP, Marotta JJ. Spatial transformations for eye-hand coordination. J Neurophysiol 2004; 92:10-19.
Abstract
Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye–hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
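The chain of "vector displacement commands, rotated by eye and head orientation" can be sketched directly: compute the motor error in gaze-centered coordinates, then rotate it through eye-in-head and head-on-shoulder orientations. All positions and orientations below are hypothetical.

    # Gaze-centered motor error rotated into a shoulder-centered displacement
    # command. Positions and orientations are hypothetical.
    import numpy as np

    def rot_y(a):  # rotation about the frame's vertical axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    target_eye = np.array([0.30, 0.05, 0.10])  # target in eye coordinates (m)
    hand_eye = np.array([0.25, -0.10, 0.00])   # hand compared in the same frame
    motor_error_eye = target_eye - hand_eye    # gaze-centered reach vector

    eye_in_head = rot_y(np.radians(20))        # eccentric eye position
    head_on_shoulder = rot_y(np.radians(-15))  # head rotation on the trunk

    motor_error_shoulder = head_on_shoulder @ eye_in_head @ motor_error_eye
    print(motor_error_shoulder)                # displacement command for the arm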
Affiliation(s)
- J D Crawford
- Canadian Institutes of Health Research Group for Action and Perception, York Centre for Vision Research, Department of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada.
43
Crawford JD, Martinez-Trujillo JC, Klier EM. Neural control of three-dimensional eye and head movements. Curr Opin Neurobiol 2003; 13:655-62. [PMID: 14662365 DOI: 10.1016/j.conb.2003.10.009] [Citation(s) in RCA: 66] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Although the eyes and head can potentially rotate about any three-dimensional axis during orienting gaze shifts, behavioral recordings have shown that certain lawful strategies, such as Listing's law and Donders' law, determine which axis is used for a particular sensory input. Here, we review recent advances in understanding the neuromuscular mechanisms for these laws, the neural mechanisms that control three-dimensional head posture, and the neural mechanisms that coordinate three-dimensional eye orientation with head motion. Finally, we consider how the brain copes with the perceptual consequences of these motor acts.
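Listing's law has a compact quaternion statement: eye orientations are rotations from primary position whose axes lie in a head-fixed plane, i.e., have zero torsional component. A sketch constructing such an orientation and checking the constraint; the axis and angle are hypothetical.

    # Listing's law: rotation axes confined to Listing's plane (zero torsion).
    # The axis and angle below are hypothetical.
    import numpy as np

    def quat_from_axis_angle(axis, angle):
        axis = axis / np.linalg.norm(axis)
        return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

    axis = np.array([0.0, 0.6, 0.8])  # in Listing's plane: no line-of-sight (x) part
    q = quat_from_axis_angle(axis, np.radians(25))
    print(f"torsional quaternion component: {q[1]:.6f} (Listing's law: ~0)")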
Affiliation(s)
- J D Crawford
- York Center for Vision Research, York University, 4700 Keele Street, Toronto, Ontario, M3J 1P3, Canada.