1
Barliya A, Krausz N, Naaman H, Chiovetto E, Giese M, Flash T. Human arm redundancy: a new approach for the inverse kinematics problem. R Soc Open Sci 2024; 11:231036. [PMID: 38420627 PMCID: PMC10898979 DOI: 10.1098/rsos.231036]
Abstract
The inverse kinematics (IK) problem addresses how both humans and robotic systems coordinate movement to resolve redundancy, as in the case of arm reaching where more degrees of freedom are available at the joint versus hand level. This work focuses on which coordinate frames best represent human movements, enabling the motor system to solve the IK problem in the presence of kinematic redundancies. We used a multi-dimensional sparse source separation method to derive sets of basis (or source) functions for both the task and joint spaces, with joint space represented by either absolute or anatomical joint angles. We assessed the similarities between joint and task sources in each of these joint representations, finding that the time-dependent profiles of the absolute reference frame's sources show greater similarity to corresponding sources in the task space. This result was found to be statistically significant. Our analysis suggests that the nervous system represents multi-joint arm movements using a limited number of basis functions, allowing for simple transformations between task and joint spaces. Additionally, joint space seems to be represented in an absolute reference frame to simplify the IK transformations, given redundancies. Further studies will assess this finding's generalizability and implications for neural control of movement.
Affiliation(s)
- Avi Barliya
- Motor Control for Humans and Robotic Systems Laboratory, Weizmann Institute of Science, Rehovot, Central, Israel
- Nili Krausz
- Motor Control for Humans and Robotic Systems Laboratory, Weizmann Institute of Science, Rehovot, Central, Israel
- Neurobotics and Bionic Limbs (eNaBLe) Laboratory, Technion—Israel Institute of Technology, Haifa, Haifa, Israel
- Hila Naaman
- Motor Control for Humans and Robotic Systems Laboratory, Weizmann Institute of Science, Rehovot, Central, Israel
- Enrico Chiovetto
- Section Theoretical Sensomotorics, HIH/CIN, University Clinic of Tübingen, Tübingen, Baden-Württemberg, Germany
- Martin Giese
- Section Theoretical Sensomotorics, HIH/CIN, University Clinic of Tübingen, Tübingen, Baden-Württemberg, Germany
- Tamar Flash
- Motor Control for Humans and Robotic Systems Laboratory, Weizmann Institute of Science, Rehovot, Central, Israel
2
Bartolo A, Rossetti Y, Revol P, Urquizar C, Pisella L, Coello Y. Reachability judgement in optic ataxia: Effect of peripheral vision on hand and target perception in depth. Cortex 2017. [PMID: 28625347 DOI: 10.1016/j.cortex.2017.05.013]
Abstract
The concept of peripersonal space was first proposed by Rizzolatti, Scandolara, Matelli, and Gentilucci (1981), who introduced the term to highlight the close links between somatosensory and visual processing for stimuli close to the body and suggested that this near-body space could in fact be characterized as an action space (Rizzolatti, Fadiga, Fogassi, & Gallese, 1997). Supporting this idea, patients with right hemisphere lesions have been described as impaired in performing actions towards objects and in perceiving their location - but only when the objects were presented within arm's reach (Bartolo, Carlier, Hassaini, Martin, & Coello, 2014; Brain, 1941). Whether the deficit of optic ataxia patients in processing target locations for action has an effect on the representation of peripersonal space has never been explored. The present study highlights optic ataxia patients' specific difficulties in processing hand-to-target distances in a motor task and in a perceptual task requiring identification of what is reachable in the visual environment. The difficulties are especially evident when both the target and the hand are perceived in the visual periphery. Indeed, when patient I.G. was able to fixate the target, her reaching accuracy and her perception of reachable space both largely improved. Furthermore, the difficulties were enhanced when the hand and the target were both in the lower visual field (in a fixed-far condition vs a fixed-near condition). This novel up-down dimension of optic ataxia fits with the larger representation of the lower visual field in the posterior parietal cortex (Pitzalis et al., 2013; Previc, 1990).
Affiliation(s)
- Angela Bartolo
- Cognitive and Affective Sciences Laboratory (SCALab), UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France; Institut Universitaire de France, Paris, France
- Yves Rossetti
- Plate-forme 'Mouvement et Handicap', Hôpital Henry-Gabrielle, Hospices Civils de Lyon, Saint-Genis-Laval, France; Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, Université Lyon-1, Bron, France.
- Patrice Revol
- Plate-forme 'Mouvement et Handicap', Hôpital Henry-Gabrielle, Hospices Civils de Lyon, Saint-Genis-Laval, France; Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, Université Lyon-1, Bron, France
- Christian Urquizar
- Plate-forme 'Mouvement et Handicap', Hôpital Henry-Gabrielle, Hospices Civils de Lyon, Saint-Genis-Laval, France; Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, Université Lyon-1, Bron, France
- Laure Pisella
- Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, Université Lyon-1, Bron, France
- Yann Coello
- Cognitive and Affective Sciences Laboratory (SCALab), UMR CNRS 9193, University of Lille, Villeneuve d'Ascq, France.
3
Thompson AA, Byrne PA, Henriques DYP. Visual targets aren't irreversibly converted to motor coordinates: eye-centered updating of visuospatial memory in online reach control. PLoS One 2014; 9:e92455. [PMID: 24643008 PMCID: PMC3958509 DOI: 10.1371/journal.pone.0092455]
Abstract
Counter to current and widely accepted hypotheses that sensorimotor transformations involve converting target locations in spatial memory from an eye-fixed reference frame into a more stable motor-based reference frame, we show that this is not strictly the case. Eye-centered representations continue to dominate reach control even during movement execution: the eye-centered target representation persists after conversion to a motor-based frame, is continuously updated as the eyes move during the reach, and is used to modify the reach plan accordingly during online control. While reaches are known to be adjusted online when targets physically shift, our results are the first to show that similar adjustments occur in response to changes in representations of remembered target locations. Specifically, we find that shifts in gaze direction, which produce predictable changes in the internal (specifically eye-centered) representation of remembered target locations, also produce mid-transport changes in reach kinematics. This indicates that representations of remembered reach targets (and visuospatial memory in general) continue to be updated relative to gaze even after reach onset. Thus, online motor control is influenced dynamically by both external and internal updating mechanisms.
Affiliation(s)
- Aidan A Thompson
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology & Health Science, York University, Toronto, Ontario, Canada
- Patrick A Byrne
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Denise Y P Henriques
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology & Health Science, York University, Toronto, Ontario, Canada
4
Ambrosini E, Ciavarro M, Pelle G, Perrucci MG, Galati G, Fattori P, Galletti C, Committeri G. Behavioral investigation on the frames of reference involved in visuomotor transformations during peripheral arm reaching. PLoS One 2012; 7:e51856. [PMID: 23272180 PMCID: PMC3521756 DOI: 10.1371/journal.pone.0051856]
Abstract
BACKGROUND: Several psychophysical experiments have found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we investigated the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space, taking target eccentricity and the performing hand into account.
METHODOLOGY/PRINCIPAL FINDINGS: We examined several performance measures while subjects reached, in complete darkness, towards memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors were mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field revealed a novel finding: a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.
CONCLUSIONS: While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.
Affiliation(s)
- Ettore Ambrosini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy
- Institute of Advanced Biomedical Technologies - ITAB, Foundation G. d’Annunzio, Chieti, Italy
- Marco Ciavarro
- Institute of Advanced Biomedical Technologies - ITAB, Foundation G. d’Annunzio, Chieti, Italy
- Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Gina Pelle
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy
- Institute of Advanced Biomedical Technologies - ITAB, Foundation G. d’Annunzio, Chieti, Italy
- Mauro Gianni Perrucci
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy
- Institute of Advanced Biomedical Technologies - ITAB, Foundation G. d’Annunzio, Chieti, Italy
- Gaspare Galati
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Laboratory of Neuropsychology, Foundation Santa Lucia, Rome, Italy
- Patrizia Fattori
- Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Giorgia Committeri
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy
- Institute of Advanced Biomedical Technologies - ITAB, Foundation G. d’Annunzio, Chieti, Italy
5
Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012; 514:214-8. [PMID: 22425720 DOI: 10.1016/j.neulet.2012.03.004]
6
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958 DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
7
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137 DOI: 10.1098/rstb.2010.0089]
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
8
Thompson AA, Henriques DY. The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vision Res 2011; 51:819-26. [DOI: 10.1016/j.visres.2011.01.006]
9
Selen L, Medendorp W. Saccadic updating of object orientation for grasping movements. Vision Res 2011; 51:898-907. [DOI: 10.1016/j.visres.2011.01.004]
10
Locations of serial reach targets are coded in multiple reference frames. Vision Res 2010; 50:2651-60. [DOI: 10.1016/j.visres.2010.09.013]
11
Specificity of human parietal saccade and reach regions during transcranial magnetic stimulation. J Neurosci 2010; 30:13053-65. [PMID: 20881123 DOI: 10.1523/jneurosci.1644-10.2010]
Abstract
Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures then were compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans.
12
Jones SAH, Henriques DYP. Memory for proprioceptive and multisensory targets is partially coded relative to gaze. Neuropsychologia 2010; 48:3782-92. [PMID: 20934442 DOI: 10.1016/j.neuropsychologia.2010.10.001]
Abstract
We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand in one of three target locations (left of body midline, at body midline, or right of body midline), while it remained at a target site (online), or after it was removed from this location (remembered), and also after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to a target location using a robot-generated path. Reaches were made with the right hand in complete darkness, while gaze was varied in one of four eccentric directions. Horizontal reach errors systematically varied relative to gaze for all target modalities; not only for visually remembered and online proprioceptive targets as has been found in previous studies, but, for the first time, also for remembered proprioceptive targets and proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets, relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm.
13
Thompson AA, Henriques DYP. Updating visual memory across eye movements for ocular and arm motor control. J Neurophysiol 2008; 100:2507-14. [PMID: 18768640 DOI: 10.1152/jn.90599.2008]
Abstract
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Affiliation(s)
- Aidan A Thompson
- Centre for Vision Research, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3
14
Sorrento GU, Henriques DYP. Reference frame conversions for repeated arm movements. J Neurophysiol 2008; 99:2968-84. [PMID: 18400956 DOI: 10.1152/jn.90225.2008]
Abstract
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.
Affiliation(s)
- Gianluca U Sorrento
- York University, School of Kinesiology and Health Science, Bethune College, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada
15
Vesia M, Monteon JA, Sergio LE, Crawford JD. Hemispheric asymmetry in memory-guided pointing during single-pulse transcranial magnetic stimulation of human parietal cortex. J Neurophysiol 2006; 96:3016-27. [PMID: 17005619 DOI: 10.1152/jn.00411.2006]
Abstract
Dorsal posterior parietal cortex (PPC) has been implicated through single-unit recordings, neuroimaging data, and studies of brain-damaged humans in the spatial guidance of reaching and pointing movements. The present study examines the causal effect of single-pulse transcranial magnetic stimulation (TMS) over the left and right dorsal posterior parietal cortex during a memory-guided "reach-to-touch" movement task in six human subjects. Stimulation of the left parietal hemisphere significantly increased endpoint variability, independent of visual field, with no horizontal bias. In contrast, right parietal stimulation did not increase variability, but instead produced a significant, systematic leftward directional shift in pointing (contralateral to stimulation site) in both visual fields. Furthermore, the same lateralized pattern persisted with left-hand movement, suggesting that these aspects of parietal control of pointing movements are spatially fixed. To test whether the right parietal TMS shift occurs in visual or motor coordinates, we trained subjects to point correctly to optically reversed peripheral targets, viewed through a left-right Dove reversing prism. After prism adaptation, the horizontal pointing direction for a given visual target reversed, but the direction of shift during right parietal TMS did not reverse. Taken together, these data suggest that induction of a focal current reveals a hemispheric asymmetry in the early stages of the putative spatial processing in PPC. These results also suggest that a brief TMS pulse modifies the output of the right PPC in motor coordinates downstream from the adapted visuomotor reversal, rather than modifying the upstream visual coordinates of the memory representation.
Affiliation(s)
- Michael Vesia
- York University, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3
16
Vaziri S, Diedrichsen J, Shadmehr R. Why does the brain predict sensory consequences of oculomotor commands? Optimal integration of the predicted and the actual sensory feedback. J Neurosci 2006; 26:4188-97. [PMID: 16624939 PMCID: PMC1473981 DOI: 10.1523/jneurosci.4747-05.2006]
Abstract
When the brain initiates a saccade, it uses a copy of the oculomotor commands to predict the visual consequences: for example, if one fixates a reach target, a peripheral saccade will produce an internal estimate of the new retinal location of the target, a process called remapping. In natural settings, the target likely remains visible after the saccade. So why should the brain predict the sensory consequence of the saccade when after its completion, the image of the target remains visible? We hypothesized that in the post-saccadic period, the brain integrates target position information from two sources: one based on remapping and another based on the peripheral view of the target. The integration of information from these two sources could produce a less variable target estimate than is possible from either source alone. Here, we show that reaching toward targets that were initially foveated and remapped had significantly less variance than reaches relying on peripheral target information. Furthermore, in a more natural setting where both sources of information were available simultaneously, variance of the reaches was further reduced as predicted by integration. This integration occurred in a statistically optimal manner, as demonstrated by the change in integration weights when we manipulated the uncertainty of the post-saccadic target estimate by varying exposure time. Therefore, the brain predicts the sensory consequences of motor commands because it integrates its prediction with the actual sensory information to produce an estimate of sensory space that is better than possible from either source alone.
Affiliation(s)
- Siavash Vaziri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
17
Beurze SM, Van Pelt S, Medendorp WP. Behavioral reference frames for planning human reaching movements. J Neurophysiol 2006; 96:352-62. [PMID: 16571731 DOI: 10.1152/jn.01362.2005]
Abstract
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
Affiliation(s)
- Sabine M Beurze
- Nijmegen Institute for Cognition and Information, Radboud University of Nijmegen, Nijmegen, The Netherlands.
|
18
|
Poljac E, Neggers B, van den Berg AV. Collision judgment of objects approaching the head. Exp Brain Res 2005; 171:35-46. [PMID: 16328256 DOI: 10.1007/s00221-005-0257-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2004] [Accepted: 09/28/2005] [Indexed: 10/25/2022]
Abstract
Recent investigations have indicated that human perception of the trajectory of objects approaching in the horizontal plane is precise but biased away from straight ahead. This is remarkable because it could mean that subjects perceive objects that approach on a collision course as missing the head. Approach within the horizontal plane through the eyes and the fixation point (the plane of regard) is special, as general motions will also have a component of motion perpendicular to the plane of regard. Thus, we investigated three-dimensional motion perception in the vicinity of the head, including vertical components. Subjects judged whether an object that moved in the mid-sagittal plane was going to hit below or above a well-known reference point on the face, such as the center of the chin or the forehead (perceptual task). Tactile and proprioceptive information about the reference point significantly improved precision. Precision did not change with distance of the approaching target or with fixation direction. Bias was virtually absent for these vertical motions. When subjects pointed with their index finger to the perceived location of impact on their face (visuo-motor task), they overestimated the horizontal eccentricity of the point of impact by 1.7 cm. Vertical bias, however, was again virtually absent. Interestingly, when trajectories intersected the plane of regard, higher precision was observed in the perceptual task regardless of the other conditions. In contrast, neither bias nor precision of the pointing task changed significantly when the trajectories intersected the plane of regard. When asked to point to the location where a trajectory intersected the plane of regard, subjects overestimated the depth component of this intersection location by about 3 cm.
The absence of perceptual and pointing bias in the vertical direction in contrast to the clear horizontal bias suggests that different (combinations of) cues are used to judge these components of the trajectory of an approaching object. The results of our perceptual task suggest a role for somatosensory signals in the visual judgment of impending impact.
Affiliation(s)
- E Poljac
- Functional Neurobiology, Helmholtz Institute, Padualaan 8, 3584 Utrecht, The Netherlands
|
19
|
Khan AZ, Pisella L, Rossetti Y, Vighetto A, Crawford JD. Impairment of Gaze-centered Updating of Reach Targets in Bilateral Parietal–Occipital Damaged Patients. Cereb Cortex 2005; 15:1547-60. [PMID: 15746004 DOI: 10.1093/cercor/bhi033] [Citation(s) in RCA: 52] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Recent studies have suggested that internal updating of visuospatial targets in humans occurs in gaze-centered coordinates and takes place in the parietal and extrastriate cortices. We explored how information for reaching is updated in two patients with bilateral lesions in these areas. Subjects performed two visuomotor tasks: (i) a fixation reaching task, which began with the appearance of one of five fixation positions (varying eye positions) followed by a central reaching target. Subjects reached to the target while fixating on the presented fixation position (relative to gaze the target was always presented in the periphery); and (ii) a saccade reaching task, in which subjects foveated on the central reaching target, then made a saccade to the presented fixation position before reaching to the central target. In both tasks, subjects reached to targets after a 500 or 5000 ms delay. Gaze-centered updating predicts similarities in reaching errors between fixation and saccade trials. Control subjects showed evidence for gaze-centered updating during both 500 and 5000 ms delay conditions. In contrast, patient AT, who had extensive occipital-parietal damage, only showed signs of gaze-centered representation after 5 s. Patient IG, with a more focal lesion in the parietal cortices, showed partial updating in gaze-centered coordinates when reaching with the small memory delay but recovered a complete gaze-centered representation after the longer delay. This suggests that patients with bilateral occipital-parietal lesions may rely on non-gaze-centered frames to store immediate target locations in reaching space but, given enough time, this information may be rerouted to access other gaze-centered motor cortical mechanisms.
Affiliation(s)
- Aarlenne Z Khan
- Centre for Vision Research, CIHR Group for Action and Perception and Department of Psychology, York University, Toronto, Ontario, Canada, M3J 1P3
|
20
|
Poljac E, Lankheet MJM, van den Berg AV. Perceptual compensation for eye torsion. Vision Res 2005; 45:485-96. [PMID: 15610752 DOI: 10.1016/j.visres.2004.09.009] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2003] [Revised: 08/31/2004] [Indexed: 11/20/2022]
Abstract
To correctly perceive visual directions relative to the head, one needs to compensate for the eye's orientation in the head. In this study we focus on compensation for the eye's torsion regarding objects that contain the line of sight and objects that do not pass through the fixation point. Subjects judged the location of flashed probe points relative to their binocular plane of regard, the mid-sagittal or the transverse plane of the head, while fixating straight ahead, right upward, or right downward at 30 cm distance, to evoke eye torsion according to Listing's law. In addition, we investigated the effects of head-tilt and monocular versus binocular viewing. Flashed probe points were correctly localized in the plane of regard irrespective of eccentric viewing, head-tilt, and monocular or binocular vision in nearly all subjects and conditions. Thus, eye torsion that varied by +/-9 degrees across these different conditions was in general compensated for. However, the position of probes relative to the midsagittal or the transverse plane, both true head-fixed planes, was misjudged. We conclude that judgment of the orientation of the plane of regard, a plane that contains the line of sight, is veridical, indicating accurate compensation for actual eye torsion. However, when judgment has to be made of a head-fixed plane that is offset with respect to the line of sight, eye torsion that accompanies that eye orientation appears not to be taken into account correctly.
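The eye torsion evoked here follows Listing's law, under which oblique gaze directions carry a "false torsion" when expressed in head-fixed (Fick) coordinates. A minimal numerical sketch of that quantity is given below; the axis conventions (x = primary line of sight, y = left, z = up), the function names, and the Fick decomposition order are illustrative assumptions, not code or conventions from the study:

```python
import math

def rodrigues(axis, angle):
    """Rotation matrix for a rotation by `angle` (radians) about unit vector `axis`."""
    x, y, z = axis
    c, s, C = math.cos(angle), math.sin(angle), 1 - math.cos(angle)
    return [
        [c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
        [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
        [z*x*C - y*s, z*y*C + x*s, c + z*z*C],
    ]

def listing_false_torsion(azimuth, elevation):
    """Fick torsion (radians) of the Listing-law eye orientation whose
    line of sight points at the given Fick azimuth/elevation.
    Axes (assumed): x = primary line of sight, y = left, z = up."""
    # gaze direction for the given Fick azimuth/elevation
    g = [math.cos(azimuth) * math.cos(elevation),
         math.sin(azimuth) * math.cos(elevation),
         -math.sin(elevation)]
    f = [1.0, 0.0, 0.0]  # primary gaze direction
    # minimal rotation f -> g; its axis is perpendicular to f,
    # i.e. it lies in Listing's plane (the y-z plane)
    axis = [f[1]*g[2] - f[2]*g[1],
            f[2]*g[0] - f[0]*g[2],
            f[0]*g[1] - f[1]*g[0]]
    norm = math.sqrt(sum(a * a for a in axis))
    if norm < 1e-12:
        return 0.0  # primary position: no rotation, no torsion
    axis = [a / norm for a in axis]
    angle = math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(f, g)))))
    R = rodrigues(axis, angle)
    # Fick (z-y'-x'') decomposition: torsion is the final x'' rotation
    return math.atan2(R[2][1], R[2][2])
```

For a 30 degrees oblique gaze direction (comparable to the eccentric fixations used here), this yields a false torsion of roughly 8 degrees, on the order of the +/-9 degrees of torsion variation reported in the abstract; purely horizontal or vertical gaze yields zero torsion.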
Affiliation(s)
- E Poljac
- Functional Neurobiology, Utrecht University, Helmholtz School Padualaan 8, 3584 CH Utrecht, The Netherlands.
|
21
|
Poljac E, van den Berg AV. Localization of the plane of regard in space. Exp Brain Res 2005; 163:457-67. [PMID: 15657697 DOI: 10.1007/s00221-004-2201-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2004] [Accepted: 11/06/2004] [Indexed: 11/28/2022]
Abstract
When we fixate an object in space, the rotation centers of the eyes, together with the object, define a plane of regard. People perceive the elevation of objects relative to this plane accurately, irrespective of eye or head orientation (Poljac et al. (2004) Vision Res, in press). Yet, to create a correct representation of objects in space, the orientation of the plane of regard in space is required. Subjects pointed along an eccentric vertical line on a touch screen to the location where their plane of regard intersected the touch screen positioned on their right. The distance of the vertical line to the subject's eyes varied from 10 to 40 cm. Subjects were sitting upright and fixating one of the nine randomly presented directions ranging from 20 degrees left and down to 20 degrees right and up relative to their straight ahead. The eccentricity of fixations relative to the pointing location varied by up to 40 degrees. Subjects underestimated the elevation of their plane of regard (on average by 3.69 cm, SD=1.44 cm), regardless of the fixation direction or pointing distance. However, when the targets were shown on a display mounted in a table, to provide support of the subject's hand throughout the trial, subjects pointed accurately (average error 0.3 cm, SD=0.8 cm). In addition, head tilt 20 degrees to the left or right did not cause any change in accuracy. The bias observed in the first task could be caused by maintained tonus in arm muscles when the arm is raised, that might interfere with the transformation from visual to motor signals needed to perform the pointing movement. We conclude that the plane of regard is correctly localized in space. This may be a good starting point for representing objects in head-centric coordinates.
Affiliation(s)
- Ervin Poljac
- Functional Neurobiology, Padualaan 8, 3584 CH Utrecht, The Netherlands.
|
22
|
Abstract
Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye–hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
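The "vector displacement commands, rotated by eye and head orientation" described above can be illustrated with a minimal sketch: a gaze-centered target-minus-hand difference vector is rotated first by eye-in-head orientation and then by head-on-body orientation. The function names, the restriction to azimuth (vertical-axis) rotations, and the two-stage rotation order are simplifying assumptions for illustration, not the authors' model:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the vertical axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def reach_motor_error(target_gaze, hand_gaze, eye_azimuth, head_azimuth):
    """Gaze-centered hand-to-target difference vector, rotated by
    eye-in-head and head-on-body orientation into a body-centered frame.
    (Azimuth-only rotations; an illustrative simplification.)"""
    # difference vector computed in the common gaze-centered frame
    diff = [t - h for t, h in zip(target_gaze, hand_gaze)]
    # gaze-centered -> head-centered, then head-centered -> body-centered
    in_head = mat_vec(rot_z(eye_azimuth), diff)
    return mat_vec(rot_z(head_azimuth), in_head)
```

With eye and head in their reference orientations the motor error is simply the gaze-centered difference vector; any nonzero eye or head rotation changes the body-frame command for the same retinal difference, which is the sense in which a motor error cannot be treated independently of its frame of origin.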
Affiliation(s)
- J D Crawford
- Canadian Institutes of Health Research Group for Action and Perception, York Centre for Vision Research, Department of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada.
|