1. Suminski AJ, Doudlah RC, Scheidt RA. Neural Correlates of Multisensory Integration for Feedback Stabilization of the Wrist. Front Integr Neurosci 2022; 16:815750. PMID: 35600223; PMCID: PMC9121119; DOI: 10.3389/fnint.2022.815750.
Abstract
Robust control of action relies on the ability to perceive, integrate, and act on information from multiple sensory modalities including vision and proprioception. How does the brain combine sensory information to regulate ongoing mechanical interactions between the body and its physical environment? Some behavioral studies suggest that the rules governing multisensory integration for action may differ from the maximum likelihood estimation rules that appear to govern multisensory integration for many perceptual tasks. We used functional magnetic resonance (MR) imaging techniques, an MR-compatible robot, and a multisensory feedback control task to test that hypothesis by investigating how neural mechanisms involved in regulating hand position against mechanical perturbation respond to the presence and fidelity of visual and proprioceptive information. Healthy human subjects rested supine in an MR scanner and stabilized their wrist against constant or pseudo-random torque perturbations imposed by the robot. These two stabilization tasks were performed under three visual feedback conditions: "no-vision", in which subjects had to rely solely on proprioceptive feedback; "true-vision", in which visual cursor and hand motions were congruent; and "random-vision", in which cursor and hand motions were uncorrelated in time. Behaviorally, performance errors accumulated more quickly during trials wherein visual feedback was absent or incongruous. We analyzed blood-oxygenation level-dependent (BOLD) signal fluctuations to compare task-related activations in a cerebello-thalamo-cortical neural circuit previously linked with feedback stabilization of the hand. Activation in this network varied systematically depending on the presence and fidelity of visual feedback of task performance. Addition of task-related visual information caused activations in the cerebello-thalamo-cortical network to expand into neighboring brain regions.
Specific loci and intensity of expanded activity depended on the fidelity of visual feedback. Remarkably, BOLD signal fluctuations within these regions correlated strongly with the time series of proprioceptive errors—but not visual errors—when the fidelity of visual feedback was poor, even though visual and hand motions had similar variability characteristics. These results provide insight into the neural control of the body’s physical interactions with its environment, rejecting the standard Gaussian cue combination model of multisensory integration in favor of models that account for causal structure in the sensory feedback.
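The maximum likelihood estimation rule that the abstract contrasts with its findings can be sketched in a few lines: each cue is weighted by its inverse variance, so the fused estimate is always less variable than either cue alone. The cue values and variances below are illustrative numbers, not data from the study.

```python
def mle_combine(x_vis, var_vis, x_prop, var_prop):
    """Reliability-weighted (maximum likelihood) fusion of two position cues.

    Each cue is weighted by its inverse variance (its reliability);
    the fused variance is smaller than that of either single cue.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Vision twice as reliable as proprioception: the fused estimate sits
# two-thirds of the way toward the visual cue.
x_hat, var_hat = mle_combine(x_vis=0.0, var_vis=1.0, x_prop=3.0, var_prop=2.0)
# x_hat = 1.0, var_hat = 2/3
```

Causal-inference models of the kind favored by these results generalize this rule by down-weighting a cue when its discrepancy suggests a different cause, rather than always fusing.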
Affiliation(s)
- Aaron J. Suminski
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI, United States
- Department of Neurological Surgery, University of Wisconsin-Madison, Madison, WI, United States
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Raymond C. Doudlah
- Department of Neuroscience, University of Wisconsin-Madison, Madison, WI, United States
- Robert A. Scheidt
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI, United States
2. Rand MK, Ringenbach SDR. Delay of gaze fixation during reaching movement with the non-dominant hand to a distant target. Exp Brain Res 2022; 240:1629-1647. PMID: 35366070; DOI: 10.1007/s00221-022-06357-z.
Abstract
The present study examined the effects of hand and task difficulty on eye-hand coordination related to gaze fixation behavior (i.e., fixating a gaze to the target until reach completion) in single reaching movements. Twenty right-handed young adults made reaches on a digitizer, while looking at a visual target and feedback of hand movements on a computer monitor. Task difficulty was altered by having three target distances. In a small portion of trials, visual feedback was randomly removed at the target presentation. The effect of a moderate amount of practice was also examined using a randomized trial schedule across target-distance and visual-feedback conditions in each hand. The results showed that the gaze distances covered during the early reaching phase were reduced, and the gaze fixation to the target was delayed when reaches were performed with the left hand and when the target distance increased. These results suggest that when the use of the non-dominant hand or an increased task difficulty reduces the predictability of hand movements and its sensory consequences, eye-hand coordination is modified to enhance visual monitoring of the reach progress prior to gaze fixation. The randomized practice facilitated this process. Nevertheless, variability of reach trajectory was more increased without visual feedback for right-hand reaches, indicating that control of the dominant arm integrates more visual feedback information during reaches. These results together suggest that the earlier gaze fixation and greater integration of visual feedback during right-hand reaches contribute to the faster and more accurate performance in the final reaching phase.
Affiliation(s)
- Miya K Rand
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany.
- College of Health Solutions, Arizona State University, Phoenix, AZ, USA.
3. Gerb J, Brandt T, Dieterich M. Different strategies in pointing tasks and their impact on clinical bedside tests of spatial orientation. J Neurol 2022; 269:5738-5745. PMID: 35258851; PMCID: PMC9553832; DOI: 10.1007/s00415-022-11015-z.
Abstract
Deficits in spatial memory, orientation, and navigation are often early or neglected signs of degenerative and vestibular neurological disorders. A simple and reliable bedside test of these functions would be extremely relevant for diagnostic routine. Pointing at targets in the 3D environment is a basic well-trained common sensorimotor ability that provides a suitable measure. We here describe a smartphone-based pointing device using the built-in inertial sensors for analysis of pointing performance in azimuth and polar spatial coordinates. Interpretation of the vectors measured in this way is not trivial, since the individuals tested may use at least two different strategies: first, they may perform the task in an egocentric eye-based reference system by aligning the fingertip with the target retinotopically or second, by aligning the stretched arm and the index finger with the visual line of sight in allocentric world-based coordinates similar to using a rifle. The two strategies result in considerable differences of target coordinates. A pilot test with a further developed design of the device and an app for a standardized bedside utilization in five healthy volunteers revealed an overall mean deviation of less than 5° between the measured and the true coordinates. Future investigations of neurological patients comparing their performance before and after changes in body position (chair rotation) may allow differentiation of distinct orientational deficits in peripheral (vestibulopathy) or central (hippocampal or cortical) disorders.
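The azimuth and polar coordinates the device reports can be illustrated with a short sketch that converts a 3-D pointing vector into angles. The axis convention and the example vector are assumptions chosen for illustration, not the app's actual implementation.

```python
import math

def pointing_angles(x, y, z):
    """Convert a 3-D pointing vector to (azimuth, elevation) in degrees.

    Axis convention (an assumption): x points straight ahead, y to the
    left, z up. Azimuth is measured in the horizontal plane; elevation
    is measured upward from that plane.
    """
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return azimuth, elevation

# A vector aimed straight ahead and upward at 45 degrees:
az, el = pointing_angles(1.0, 0.0, 1.0)
# az = 0.0, el = 45.0
```

Comparing such measured angles against the true target angles gives the kind of overall deviation (here, under 5°) that the pilot test reports, regardless of which pointing strategy produced the vector.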
Affiliation(s)
- J Gerb
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- T Brandt
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Hertie Senior Professor for Clinical Neuroscience, Ludwig-Maximilians University, Munich, Germany
- M Dieterich
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
4. Debats NB, Heuer H, Kayser C. Visuo-proprioceptive integration and recalibration with multiple visual stimuli. Sci Rep 2021; 11:21640. PMID: 34737371; PMCID: PMC8569193; DOI: 10.1038/s41598-021-00992-2.
Abstract
To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration—and likely also recalibration—are shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
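The two accounts contrasted in Experiment 2 can be made concrete with a toy model of the judged hand position: under superposition the biases toward both visual stimuli add, whereas under winner-takes-all only the more strongly weighted (temporally proximal) stimulus contributes. The weights and positions below are invented for illustration, not fitted values from the paper.

```python
def superposition_bias(prop, vis_sync, vis_delayed, w_sync, w_delayed):
    """Judged hand position if the biases toward the synchronous and
    delayed visual stimuli superpose linearly."""
    return prop + w_sync * (vis_sync - prop) + w_delayed * (vis_delayed - prop)

def winner_takes_all_bias(prop, vis_sync, vis_delayed, w_sync, w_delayed):
    """Judged hand position if only the more strongly weighted visual
    stimulus enters integration."""
    winner, w = (vis_sync, w_sync) if w_sync >= w_delayed else (vis_delayed, w_delayed)
    return prop + w * (winner - prop)

# Stimuli on opposite sides of the hand (at +10 and -10), with weights
# 0.4 (synchronous) and 0.1 (delayed): the accounts diverge.
p_super = superposition_bias(0.0, 10.0, -10.0, 0.4, 0.1)   # 3.0
p_wta = winner_takes_all_bias(0.0, 10.0, -10.0, 0.4, 0.1)  # 4.0
```

Placing the delayed stimulus opposite the synchronous one, as in this example, is exactly the kind of configuration that lets the measured bias discriminate the two accounts.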
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Universitätsstrasse 25, 33615, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Universität Bielefeld, Bielefeld, Germany
5. Constancy of Preparatory Postural Adjustments for Reaching to Virtual Targets across Different Postural Configurations. Neuroscience 2020; 455:223-239. PMID: 33246066; DOI: 10.1016/j.neuroscience.2020.11.009.
Abstract
Postural and movement components must be coordinated without significant disturbance to balance when reaching from a standing position. Traditional theories propose that muscle activity prior to movement onset creates the mechanics to counteract the internal torques generated by the future limb movement, reducing possible instability via centre of mass (CoM) displacement. However, during goal-directed reach movements executed on a fixed base of support (BoS), preparatory postural adjustments (or pPAs) promote movement of the CoM within the BoS. Considering this dichotomy, the current study investigated whether pPAs constitute part of a whole-body strategy that is tied to the efficient execution of movement, rather than the constraints of balance. We reasoned that if pPAs were tied primarily to balance control, they would modulate as a function of perceived instability. Alternatively, if tied to dynamics necessary for movement initiation, they would remain unchanged, with feedback-based changes being sufficient to retain balance following volitional arm movement. Participants executed beyond-arm reaching movements in four different postural configurations that altered the quality of the BoS. These changes to stability did not drastically alter the tuning or timing of preparatory muscle activity, despite modifications to arm and CoM trajectories necessary to complete the reaching movement. In contrast to traditional views, preparatory postural muscle activity is not always tuned for balance maintenance or even as a calculation of upcoming instability but may reflect a requirement of voluntary movement towards a pre-defined location.
6. A condition that produces sensory recalibration and abolishes multisensory integration. Cognition 2020; 202:104326. PMID: 32464344; DOI: 10.1016/j.cognition.2020.104326.
Abstract
We examined the influence of extended exposure to a visuomotor rotation, which induces both motor adaptation and sensory recalibration, on (partial) multisensory integration in a cursor-control task. Participants adapted to a 30° (adaptation condition) or 0° (control condition) visuomotor rotation by making center-out movements to remembered targets. In subsequent test trials of sensory integration, they made center-out movements with variable visuomotor rotations and judged the position of hand or cursor at the end of these movements. Test trials were randomly embedded among twice the number of maintenance trials with 30° or 0° rotation. The biases of perceived hand (or cursor) position toward the cursor (or hand) position were measured. We found motor adaptation together with proprioceptive and visual recalibrations in the adaptation condition. Unexpectedly, multisensory integration was absent in both the adaptation and control condition. The absence stemmed from the extensive experience of constant visuomotor rotations of 30° or 0°, which probably produced highly precise predictions of the visual consequences of hand movements. The frequently confirmed predictions then dominated the estimate of the visual movement consequences, leaving no influence of the actual visuomotor rotations in the minority of test trials. Conversely, multisensory integration was present for sensed hand positions when these were indirectly assessed from movement characteristics, indicating that the relative weighting of discrepant estimates of hand position was different for motor control. The existence of a condition that abolishes multisensory integration while keeping sensory recalibration suggests that mechanisms that reduce sensory discrepancies (partly) differ between integration and recalibration.
7. Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. PMID: 31533015; DOI: 10.1152/jn.00072.2019.
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task.
Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
8. Flanagin VL, Fisher P, Olcay B, Kohlbecher S, Brandt T. A bedside application-based assessment of spatial orientation and memory: approaches and lessons learned. J Neurol 2019; 266:126-138. PMID: 31240446; PMCID: PMC6722154; DOI: 10.1007/s00415-019-09409-7.
Abstract
Spatial orientation and memory deficits are an often overlooked and potentially powerful early marker for pathological cognitive decline. Pen-and-paper tests for spatial abilities often do not coincide with actual navigational performance due to differences in spatial perspective and scale. Mobile devices are becoming increasingly useful in a clinical setting, for patient monitoring, clinical decision-making, and information management. The same devices have positional information that may be useful for a scale-appropriate point-of-care test for spatial ability. We created a test for spatial orientation and memory based on pointing within a single room using the sensors in a mobile phone. The test consisted of a baseline pointing condition to which all other conditions were compared, a spatial memory condition with eyes closed, and two body rotation conditions (real or mental) in which spatial updating was assessed. We examined the effectiveness of the sensors from a mobile phone for measuring pointing errors in these conditions in a sample of healthy young individuals. We found that the sensors reliably produced appropriate azimuth and elevation pointing angles for all of the 15 targets presented across multiple participants and days. Within-subject variability was below 6° in elevation and 10° in azimuth for the control condition. The pointing error and variability increased with task difficulty and correlated with self-report tests of spatial ability. The lessons learned from the first tests are discussed, as well as the outlook of this application as a scientific and clinical bedside device. Finally, the next version of the application is introduced as an open source application for further development.
Affiliation(s)
- Paul Fisher
- Neuro-Cognitive-Psychology, Department of Psychology, LMU, Munich, Germany
- Berk Olcay
- Computer Aided Medical Procedures, Technical University Munich (TUM), Munich, Germany
- Stefan Kohlbecher
- German Centre for Vertigo and Balance Disorders (DSGZ), Munich, Germany
- Thomas Brandt
- German Centre for Vertigo and Balance Disorders (DSGZ), Munich, Germany
- Hertie, University Hospital, LMU Munich, Munich, Germany
9. Winner T, Selen L, Murillo Oosterwijk A, Verhagen L, Medendorp WP, van Rooij I, Toni I. Recipient Design in Communicative Pointing. Cogn Sci 2019; 43:e12733. PMID: 31087589; PMCID: PMC6594194; DOI: 10.1111/cogs.12733.
Abstract
A long‐standing debate in the study of human communication centers on the degree to which communicators tune their communicative signals (e.g., speech, gestures) for specific addressees, as opposed to taking a neutral or egocentric perspective. This tuning, called recipient design, is known to occur under special conditions (e.g., when errors in communication need to be corrected), but several researchers have argued that it is not an intrinsic feature of human communication, because that would be computationally too demanding. In this study, we contribute to this debate by studying a simple communicative behavior, communicative pointing, under conditions of successful (error‐free) communication. Using an information‐theoretic measure, called legibility, we present evidence of recipient design in communicative pointing. The legibility effect is present early in the movement, suggesting that it is an intrinsic part of the communicative plan. Moreover, it is reliable only from the viewpoint of the addressee, suggesting that the motor plan is tuned to the addressee. These findings suggest that recipient design is an intrinsic feature of human communication.
Affiliation(s)
- Tobias Winner
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Luc Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Anke Murillo Oosterwijk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Erasmus Research Institute of Management, Erasmus University Rotterdam
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Iris van Rooij
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Ivan Toni
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
10. Abedi Khoozani P, Blohm G. Neck muscle spindle noise biases reaches in a multisensory integration task. J Neurophysiol 2018; 120:893-909. PMID: 29742021; PMCID: PMC6171065; DOI: 10.1152/jn.00643.2017.
Abstract
Reference frame transformations (RFTs) are crucial components of sensorimotor transformations in the brain. Stochasticity in RFTs has been suggested to add noise to the transformed signal due to variability in transformation parameter estimates (e.g., angle) as well as the stochastic nature of computations in spiking networks of neurons. Here, we varied the RFT angle together with the associated variability and evaluated the behavioral impact in a reaching task that required variability-dependent visual-proprioceptive multisensory integration. Crucially, reaches were performed with the head either straight or rolled 30° to either shoulder, and we also applied neck loads of 0 or 1.8 kg (left or right) in a 3 × 3 design, resulting in different combinations of estimated head roll angle magnitude and variance required in RFTs. A novel three-dimensional stochastic model of multisensory integration across reference frames was fitted to the data and captured our main behavioral findings: 1) neck load biased head angle estimation across all head roll orientations, resulting in systematic shifts in reach errors; 2) increased neck muscle tone led to increased reach variability due to signal-dependent noise; and 3) both head roll and neck load created larger angular errors in reaches to visual targets away from the body compared with reaches toward the body. These results show that noise in muscle spindles and stochasticity in general have a tangible effect on RFTs underlying reach planning. Since RFTs are omnipresent in the brain, our results could have implications for processes as diverse as motor control, decision making, posture/balance control, and perception. NEW & NOTEWORTHY We show that increasing neck muscle tone systematically biases reach movements. 
A novel three-dimensional multisensory integration across reference frames model captures the data well and provides evidence that the brain must have online knowledge of full-body geometry together with the associated variability to plan reach movements accurately.
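The core ingredient of such a model, a reference frame transformation driven by a biased, noisy head-roll estimate, can be sketched as a 2-D rotation of an eye-centered target into body coordinates. The bias and noise magnitudes below are illustrative assumptions, not the fitted parameters of the study's 3-D model.

```python
import numpy as np

def transform_target(target_eye, roll_deg, bias_deg, sd_deg, n=10000, seed=0):
    """Rotate an eye-centered 2-D target into body coordinates using a
    biased, noisy estimate of head roll.

    Each of the n samples draws one head-roll estimate; the bias shifts
    the mean transformed target and the noise widens its spread.
    """
    rng = np.random.default_rng(seed)
    est = np.radians(roll_deg + bias_deg + sd_deg * rng.standard_normal(n))
    c, s = np.cos(est), np.sin(est)
    x, y = target_eye
    # One rotated target per sampled roll estimate
    return np.stack([c * x - s * y, s * x + c * y], axis=1)

# A 5-degree bias in the roll estimate (e.g., from neck load) shifts the
# mean transformed target; raising sd_deg inflates reach variability.
pts = transform_target((10.0, 0.0), roll_deg=30.0, bias_deg=5.0, sd_deg=4.0)
```

This captures, in miniature, findings 1 and 2 above: a biased angle estimate produces systematic reach shifts, and a noisier estimate produces more variable transformed targets.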
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience, Kingston, Ontario, Canada
11. Bakker RS, Selen LPJ, Medendorp WP. Reference frames in the decisions of hand choice. J Neurophysiol 2018; 119:1809-1817. DOI: 10.1152/jn.00738.2017.
Abstract
For the brain to decide on a reaching movement, it needs to select which hand to use. A number of body-centered factors affect this decision, such as the anticipated movement costs of each arm, recent choice success, handedness, and task demands. While the position of each hand relative to the target is also known to be an important spatial factor, it is unclear which reference frames coordinate the spatial aspects in the decisions of hand choice. Here we tested the role of gaze- and head-centered reference frames in a hand selection task. With their head and gaze oriented in different directions, we measured hand choice of 19 right-handed subjects instructed to make unimanual reaching movements to targets at various directions relative to their body. Using an adaptive procedure, we determined the target angle that led to equiprobable right/left hand choices. When gaze remained fixed relative to the body this balanced target angle shifted systematically with head orientation, and when head orientation remained fixed this choice measure shifted with gaze. These results suggest that a mixture of head- and gaze-centered reference frames is involved in the spatially guided decisions of hand choice, perhaps to flexibly bind this process to the mechanisms of target selection. NEW & NOTEWORTHY Decisions of target and hand choice are fundamental aspects of human reaching movements. While the reference frames involved in target choice have been identified, it is unclear which reference frames are involved in hand selection. We tested the role of gaze- and head-centered reference frames in a hand selection task. Findings emphasize the role of both spatial reference frames in the decisions of hand choice, in addition to known body-centered computations such as anticipated movement costs and handedness.
Affiliation(s)
- Romy S. Bakker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Luc P. J. Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- W. Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
12. Hondzinski JM, Soebbing CM, French AE, Winges SA. Different damping responses explain vertical endpoint error differences between visual conditions. Exp Brain Res 2016; 234:1575-1587. DOI: 10.1007/s00221-015-4546-8.
13. Brandt T, Huber M, Schramm H, Kugler G, Dieterich M. "Taller and Shorter": Human 3-D Spatial Memory Distorts Familiar Multilevel Buildings. PLoS One 2015; 10:e0141257. PMID: 26509927; PMCID: PMC4624999; DOI: 10.1371/journal.pone.0141257.
Abstract
Animal experiments report contradictory findings on the presence of a behavioural and neuronal anisotropy between vertical and horizontal capabilities of spatial orientation and navigation. We performed a pointing experiment in humans on the imagined 3-D direction of the location of various invisible goals that were distributed horizontally and vertically in a familiar multilevel hospital building. The 21 participants were employees who had worked for years in this building. The hypothesis was that comparison of the experimentally determined directions and the true directions would reveal systematic inaccuracy or dimensional anisotropy of the localizations. The study provides the first evidence that the internal representation of a familiar multilevel building was distorted compared to the dimensions of the true building: vertically 215% taller and horizontally 51% shorter. This was demonstrated not only in the mathematical reconstruction of the mental model based on the analysis of the pointing experiments but also by the participants' drawings of the front view and the ground plan of the building. Thus, in the mental model both planes were altered in different directions: compressed for the horizontal floor plane and stretched for the vertical column plane. This could be related to human anisotropic behavioural performance of horizontal and vertical navigation in such buildings.
Affiliation(s)
- Thomas Brandt
- Clinical Neuroscience, Ludwig-Maximilians-University Munich, Germany
- German Center for Vertigo and Balance Disorders—IFBLMU (DSGZ), Ludwig-Maximilians-University Munich, Germany
- Bernstein Center for Computational Neuroscience, Ludwig-Maximilians-University Munich, Germany
- Hertie Foundation, Frankfurt a.M., Germany
- Markus Huber
- Clinical Neuroscience, Ludwig-Maximilians-University Munich, Germany
- Center for Sensorimotor Research, Ludwig-Maximilians-University Munich, Germany
- Hannah Schramm
- Clinical Neuroscience, Ludwig-Maximilians-University Munich, Germany
- Center for Sensorimotor Research, Ludwig-Maximilians-University Munich, Germany
- Günter Kugler
- Clinical Neuroscience, Ludwig-Maximilians-University Munich, Germany
- Marianne Dieterich
- German Center for Vertigo and Balance Disorders—IFBLMU (DSGZ), Ludwig-Maximilians-University Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
- Stefan Glasauer
- Clinical Neuroscience, Ludwig-Maximilians-University Munich, Germany
- German Center for Vertigo and Balance Disorders—IFBLMU (DSGZ), Ludwig-Maximilians-University Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University Munich, Germany
- Center for Sensorimotor Research, Ludwig-Maximilians-University Munich, Germany
- Bernstein Center for Computational Neuroscience, Ludwig-Maximilians-University Munich, Germany
14
Varlet M, Bucci C, Richardson MJ, Schmidt RC. Informational constraints on spontaneous visuomotor entrainment. Hum Mov Sci 2015; 41:265-81. PMID: 25866944; DOI: 10.1016/j.humov.2015.03.011
Abstract
Past research has revealed that an individual's rhythmic limb movements become spontaneously entrained to an environmental rhythm if visual information about the rhythm is available and its frequency is near that of the individual's movements. Research has also demonstrated that if the eyes track an environmental stimulus, the spontaneous entrainment to the rhythm is strengthened. One hypothesis explaining this enhancement of spontaneous entrainment is that the limb movements and eye movements are linked through a neuromuscular coupling or synergy. Another is that eye-tracking facilitates the pickup of important coordinating information. Experiment 1 investigated the first hypothesis by evaluating whether any rhythmic movement of the eyes would facilitate spontaneous entrainment. Experiments 2 and 3 (respectively) explored whether eye-tracking strengthens spontaneous entrainment by allowing the pickup of trajectory direction change information or allowing an increase in the amount of information to be picked up. Results suggest that the eye-tracking enhancement of spontaneous entrainment is a consequence of increasing the amount of information available to be picked up.
Affiliation(s)
- Manuel Varlet
- The MARCS Institute, University of Western Sydney, NSW, Australia
- Colleen Bucci
- Department of Psychology, College of the Holy Cross, Worcester, MA, USA
- Michael J Richardson
- Center for Cognition, Action, and Perception, University of Cincinnati, Cincinnati, OH, USA
- R C Schmidt
- Department of Psychology, College of the Holy Cross, Worcester, MA, USA
15
Hesse C, Ball K, Schenk T. Pointing in visual periphery: is DF's dorsal stream intact? PLoS One 2014; 9:e91420. PMID: 24626162; PMCID: PMC3953402; DOI: 10.1371/journal.pone.0091420
Abstract
Observations of the visual form agnosic patient DF have been highly influential in establishing the hypothesis that separate processing streams deal with vision for perception (ventral stream) and vision for action (dorsal stream). In this context, DF's preserved ability to perform visually-guided actions has been contrasted with the selective impairment of visuomotor performance in optic ataxia patients suffering from damage to dorsal stream areas. However, the recent finding that DF shows a thinning of the grey matter in the dorsal stream regions of both hemispheres, in combination with the observation that her right-handed movements are impaired when they are performed in visual periphery, has opened up the possibility that patient DF may potentially also be suffering from optic ataxia. If lesions to the posterior parietal cortex (dorsal stream) are bilateral, pointing and reaching deficits should be observed in both visual hemifields and for both hands when targets are viewed in visual periphery. Here, we tested DF's visuomotor performance when pointing with her left and her right hand toward targets presented in the left and the right visual field at three different visual eccentricities. Our results indicate that DF shows large and consistent impairments in all conditions. These findings imply that DF's dorsal stream atrophies are functionally relevant and hence challenge the idea that patient DF's seemingly normal visuomotor behaviour can be attributed to her intact dorsal stream. Instead, DF seems to be a patient who suffers from combined ventral and dorsal stream damage, meaning that a new account is needed to explain why she shows such remarkably normal visuomotor behaviour in a number of tasks and conditions.
Affiliation(s)
- Constanze Hesse
- School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Keira Ball
- Department of Psychology, Durham University, Stockton-on-Tees, United Kingdom
- Thomas Schenk
- Neurology, University of Erlangen-Nürnberg, Erlangen, Germany
16
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. DOI: 10.1152/jn.00130.2013
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
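The compensation the authors quantify can be expressed as a simple velocity sum: the spatial target velocity is the retinal velocity plus the eye velocity, scaled by a compensation gain. The function and the example numbers below are an illustrative sketch, not the authors' analysis code:

```python
import math

def spatial_from_retinal(retinal_vel, eye_vel, gain=1.0):
    """Reconstruct spatial target velocity from retinal slip plus an
    extraretinal eye-velocity signal scaled by a compensation gain
    (gain = 1.0 means eye motion is fully accounted for)."""
    return tuple(r + gain * e for r, e in zip(retinal_vel, eye_vel))

# A target moving (10, 0) deg/s in space, viewed while the eye pursues
# another target at (0, 8) deg/s, slips across the retina at (10, -8) deg/s.
retinal, eye = (10.0, -8.0), (0.0, 8.0)

full = spatial_from_retinal(retinal, eye, gain=1.0)      # (10.0, 0.0): spatially correct plan
partial = spatial_from_retinal(retinal, eye, gain=0.75)  # (10.0, -2.0): residual retinal bias
bias_deg = math.degrees(math.atan2(partial[1], partial[0]))  # about -11.3 deg off the spatial direction
```

The reported 75-102% compensation corresponds to a gain near 1.0 in this sketch, i.e., initial arm direction close to the spatial rather than the retinal target direction.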
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
17
Smid KA, den Otter AR. Why you need to look where you step for precise foot placement: the effects of gaze eccentricity on stepping errors. Gait Posture 2013; 38:242-6. PMID: 23266044; DOI: 10.1016/j.gaitpost.2012.11.019
Abstract
Previous research has shown that accurate stepping involves the fixation of gaze on the intended step location. One possible explanation for this visual strategy is that the fixation of locations that are eccentric relative to the step target, results in systematic localization errors, as has previously been demonstrated in pointing. To test this idea, we assessed the possible role of gaze stabilization in the spatial planning of accurate steps, and determined whether the direction of mediolateral stepping errors depended on the direction of gaze. Final foot position was recorded from ten healthy participants when making steps towards prints of their own foot, in light and in darkness, and fixating their gaze on (i) the stepping target or (ii) locations 30 cm to the left or right of the target. The results showed that accuracy and precision of foot placement were reduced when stepping in darkness or when fixating eccentric gaze targets, demonstrating that visual feedback on the target and/or foot facilitates spatial control of the foot, and that foveal information is superior to perifoveal information in this respect. Crucially, the direction of the mediolateral stepping errors depended on the direction of gaze: on average participants overstepped 12 mm contralateral to the direction of gaze when fixating eccentric locations, indicating that the fixation of locations eccentric to the stepping target results in inaccuracies in foot placement. These results provide new insights into the contributions of foveal vision to the spatial planning of precise steps, and explain why it is important to look where you step when accurate foot placement is required.
Affiliation(s)
- K A Smid
- University of Groningen, University Medical Center Groningen, Center for Human Movement Sciences, Groningen, The Netherlands
18
Mrotek LA. Following and intercepting scribbles: interactions between eye and hand control. Exp Brain Res 2013; 227:161-74. PMID: 23552996; DOI: 10.1007/s00221-013-3496-2
Abstract
The smooth pursuit eye movement system appears to be importantly engaged during the planning and execution of interceptive hand movements. The present study sought to probe the interaction between eye and hand control systems by examining their responses during an interception task that included target speed perturbations. On 2/3 of trials, the target increased or decreased speed at various times, ranging from about 300 ms before to 150 ms after the onset of a finger movement that was triggered by a GO signal and directed to intercept the target. Additionally, the same 2D sum-of-sines target trajectories were followed with the eyes without interception. The smooth pursuit system responded more quickly if the target speed perturbation occurred earlier during the reaction time (i.e., near the time of the GO signal). Similarly, the finger movement began more quickly if target speed was increased earlier during the reaction time. For early perturbation conditions, the initial direction of the finger movement matched the predicted target intercept using the new target speed. For perturbations occurring after finger movement onset, the initial direction of the finger movement did not match the new target intercept; instead, the finger path began to curve toward the perturbed target after about 150-200 ms. The results support the idea of an active process of visual target path extrapolation simultaneously used to guide both the eye and hand.
Affiliation(s)
- Leigh A Mrotek
- Department of Kinesiology, University of Wisconsin Oshkosh, 800 Algoma Boulevard, Oshkosh, WI 54901-8630, USA
19
Vesia M, Crawford JD. Specialization of reach function in human posterior parietal cortex. Exp Brain Res 2012; 221:1-18. DOI: 10.1007/s00221-012-3158-9
20
Abstract
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.
21
Varlet M, Coey CA, Schmidt RC, Richardson MJ. Influence of stimulus amplitude on unintended visuomotor entrainment. Hum Mov Sci 2011; 31:541-52. PMID: 22088490; DOI: 10.1016/j.humov.2011.08.002
Abstract
Rhythmic limb movements have been shown to spontaneously coordinate with rhythmic environmental stimuli. Previous research has demonstrated how such entrainment depends on the difference between the movement periods of the limb and the stimulus, and on the degree to which the actor visually tracks the stimulus. Here we present an experiment that investigated how stimulus amplitude influences unintended visuomotor entrainment. Participants performed rhythmic forearm movements while visually tracking an oscillating stimulus. The amplitude and period of stimulus motion were manipulated. Larger stimulus amplitudes resulted in stronger entrainment irrespective of how participants visually tracked the movements of the stimulus. Visual tracking, however, did result in increased entrainment for large, but not small, stimulus amplitudes. Collectively, the results indicate that the movement amplitude of environmental stimuli plays a significant role in the emergence of unintended visuomotor entrainment.
Affiliation(s)
- Manuel Varlet
- Movement to Health Laboratory, EuroMov, Montpellier-1 University, Montpellier, France
22
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. PMID: 21456958; DOI: 10.1146/annurev-neuro-061010-113749
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3
23
Does the "eyes lead the hand" principle apply to reach-to-grasp movements evoked by unexpected balance perturbations? Hum Mov Sci 2011; 30:368-83. DOI: 10.1016/j.humov.2010.07.005
24
Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. DOI: 10.1016/j.neuropsychologia.2010.10.031
25
King EC, McKay SM, Cheng KC, Maki BE. The use of peripheral vision to guide perturbation-evoked reach-to-grasp balance-recovery reactions. Exp Brain Res 2010; 207:105-18. PMID: 20957351; PMCID: PMC5142842; DOI: 10.1007/s00221-010-2434-9
Abstract
For a reach-to-grasp reaction to prevent a fall, it must be executed very rapidly, but with sufficient accuracy to achieve a functional grip. Recent findings suggest that the CNS may avoid potential time delays associated with saccade-guided arm movements by instead relying on peripheral vision (PV). However, studies of volitional arm movements have shown that reaching is slower and/or less accurate when guided by PV, rather than central vision (CV). The present study investigated how the CNS resolves speed-accuracy trade-offs when forced to use PV to guide perturbation-evoked reach-to-grasp balance-recovery reactions. These reactions were evoked, in 12 healthy young adults, via sudden unpredictable antero-posterior platform translation (barriers deterred stepping reactions). In PV trials, subjects were required to look straight-ahead at a visual target while a small cylindrical handhold (length 25% greater than hand width) moved intermittently and unpredictably along a transverse axis before stopping at a visual angle of 20°, 30°, or 40°. The perturbation was then delivered after a random delay. In CV trials, subjects fixated on the handhold throughout the trial. A concurrent visuo-cognitive task was performed in 50% of PV trials but had little impact on reach-to-grasp timing or accuracy. Forced reliance on PV did not significantly affect response initiation times, but did lead to longer movement times, longer time-after-peak-velocity and less direct trajectories (compared to CV trials) at the larger visual angles. Despite these effects, forced reliance on PV did not compromise ability to achieve a functional grasp and recover equilibrium, for the moderately large perturbations and healthy young adults tested in this initial study.
Affiliation(s)
- Emily C King
- Centre for Studies in Aging, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
26
Specificity of human parietal saccade and reach regions during transcranial magnetic stimulation. J Neurosci 2010; 30:13053-65. PMID: 20881123; DOI: 10.1523/jneurosci.1644-10.2010
Abstract
Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures then were compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans.
27
Abstract
Spatial computations underlying the coordination of the hand and eye present formidable geometric challenges. One way for the nervous system to simplify these computations is to directly encode the relative position of the hand and the center of gaze. Neurons in the dorsal premotor cortex (PMd), which is critical for the guidance of arm-reaching movements, encode the relative position of the hand, gaze, and goal of reaching movements. This suggests that PMd can coordinate reaching movements with eye movements. Here, we examine saccade-related signals in PMd to determine whether they also point to a role for PMd in coordinating visual-motor behavior. We first compared the activity of a population of PMd neurons with a population of parietal reach region (PRR) neurons. During center-out reaching and saccade tasks, PMd neurons responded more strongly before saccades than PRR neurons, and PMd contained a larger proportion of exclusively saccade-tuned cells than PRR. During a saccade relative position-coding task, PMd neurons encoded saccade targets in a relative position code that depended on the relative position of gaze, the hand, and the goal of a saccadic eye movement. This relative position code for saccades is similar to the way that PMd neurons encode reach targets. We propose that eye movement and eye position signals in PMd do not drive eye movements, but rather provide spatial information that links the control of eye and arm movements to support coordinated visual-motor behavior.
28
Interaction between gaze and visual and proprioceptive position judgements. Exp Brain Res 2010; 203:485-98. DOI: 10.1007/s00221-010-2251-1
29
Kwok JC, Hui-Chan CW, Tsang WW. Effects of aging and Tai Chi on finger-pointing toward stationary and moving visual targets. Arch Phys Med Rehabil 2010; 91:149-55. PMID: 20103410; DOI: 10.1016/j.apmr.2009.07.018
Abstract
OBJECTIVE: To examine the effect of aging on the speed and accuracy of finger pointing toward stationary and moving visual targets in young and older healthy subjects, and whether Tai Chi practitioners perform better than healthy older controls in these tasks. DESIGN: Cross-sectional study. SETTING: University-based rehabilitation center. PARTICIPANTS: University students (n=30; aged 24.2+/-3.1y) were compared with healthy older control subjects (n=30; aged 72.3+/-7.2y) and experienced Tai Chi practitioners (n=31; aged 70.3+/-5.9y; mean years of practice 7.1+/-6.5y). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Subjects pointed with the index finger of their dominant hand from a fixed starting position on a desk to a visual signal (1.2cm diameter dot) appearing on a display unit, as quickly and as accurately as possible. Outcome measures included (1) reaction time, the time from the appearance of the dot to the onset of the anterior deltoid electromyographic response; (2) movement time, the time from onset of the electromyographic response to touching of the dot; and (3) accuracy, the absolute deviation of the subject's finger-pointing location from the center of the dot. RESULTS: Young subjects achieved significantly faster reaction and movement times with significantly better accuracy than older control subjects in all finger-pointing tasks. Tai Chi practitioners attained significantly better accuracy than older controls in pointing to stationary visual signals appearing contralaterally and centrally to their pointing hand. They also demonstrated significantly better accuracy when the target was moving. Accuracy in Tai Chi practitioners was similar to young controls. CONCLUSIONS: Eye-hand coordination in finger pointing declines with age in both the time and accuracy domains. However, Tai Chi practitioners attained significantly better accuracy than control subjects similar in age, sex, and physical activity level.
Affiliation(s)
- Jasmine C Kwok
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong (SAR), China
30
McGuire LMM, Sabes PN. Sensory transformations and the use of multiple reference frames for reach planning. Nat Neurosci 2009; 12:1056-61. PMID: 19597495; PMCID: PMC2749235; DOI: 10.1038/nn.2357
Abstract
The sensory signals that drive movement planning arrive in a variety of "reference frames", so integrating or comparing them requires sensory transformations. We propose a model where the statistical properties of sensory signals and their transformations determine how these signals are used. This model captures the patterns of gaze-dependent errors found in our human psychophysics experiment when the sensory signals available for reach planning are varied. These results challenge two widely held ideas: that error patterns directly reflect the reference frame of the underlying neural representation, and that it is preferable to use a single common reference frame for movement planning. We show that gaze-dependent error patterns, often cited as evidence for retinotopic reach planning, can be explained by a transformation bias and are not exclusively linked to retinotopic representations. Further, the presence of multiple reference frames allows for optimal use of available sensory information and explains task-dependent reweighting of sensory signals.
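The reliability-weighted use of multiple signals that this account builds on follows the standard maximum-likelihood (minimum-variance) combination rule for independent Gaussian cues. The numbers below are a toy illustration, not data from the paper:

```python
def mle_combine(estimates, variances):
    """Minimum-variance (maximum-likelihood) fusion of independent
    Gaussian cues: each cue is weighted by its inverse variance, and
    the fused variance is the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Toy example: a visual estimate of target direction (2.0 deg, variance 1.0)
# combined with a proprioceptive estimate (6.0 deg, variance 4.0).
fused, fused_var = mle_combine([2.0, 6.0], [1.0, 4.0])
# fused == 2.8 deg (pulled toward the more reliable visual cue),
# fused_var == 0.8 (smaller than either single-cue variance)
```

Task-dependent reweighting corresponds here to the variances changing with viewing conditions; transformation noise can be modeled by inflating a cue's variance before combination.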
Affiliation(s)
- Leah M M McGuire
- W. M. Keck Center for Integrative Neuroscience, Department of Physiology, and the Neuroscience Graduate Program, University of California, San Francisco, California, USA
31
Hondzinski JM, Kwon T. Pointing control using a moving base of support. Exp Brain Res 2009; 197:81-90. DOI: 10.1007/s00221-009-1893-3
32
Dessing JC, Oostwoud Wijdenes L, Peper CE, Beek PJ. Visuomotor transformation for interception: catching while fixating. Exp Brain Res 2009; 196:511-27. PMID: 19543722; PMCID: PMC2704620; DOI: 10.1007/s00221-009-1882-6
Abstract
Catching a ball involves a dynamic transformation of visual information about ball motion into motor commands for moving the hand to the right place at the right time. We previously formulated a neural model for this transformation to account for the consistent leftward movement biases observed in our catching experiments. According to the model, these biases arise within the representation of target motion as well as within the transformation from a gaze-centered to a body-centered movement command. Here, we examine the validity of the latter aspect of our model in a catching task involving gaze fixation. Gaze fixation should systematically influence biases in catching movements, because in the model movement commands are only generated in the direction perpendicular to the gaze direction. Twelve participants caught balls while gazing at a fixation point positioned either straight ahead or 14° to the right. Four participants were excluded because they could not adequately maintain fixation. We again observed a consistent leftward movement bias, but the catching movements were unaffected by fixation direction. This result refutes our proposal that the leftward bias partly arises within the visuomotor transformation, and suggests instead that the bias predominantly arises within the early representation of target motion, specifically through an imbalance in the represented radial and azimuthal target motion.
Affiliation(s)
- Joost C Dessing
- Research Institute MOVE, Faculty of Human Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT, Amsterdam, The Netherlands.
33
Coordinate transformations for hand-guided saccades. Exp Brain Res 2009; 195:455-65. [DOI: 10.1007/s00221-009-1811-8]
34
Review of models for the generation of multi-joint movements in 3-D. Adv Exp Med Biol 2009; 629:523-50. [PMID: 19227519] [DOI: 10.1007/978-0-387-77064-2_28]
Abstract
Most studies in motor control have focused on movements in two dimensions and only very few studies have systematically investigated movements in three dimensions. As a consequence, the large majority of modeling studies for motor control have tested the predictions of these models using movement data in 2D. As we will explain, movements in 3D cannot be understood from movements in 2D by adding just another dimension. The third dimension adds new and unexpected complexities. In this chapter we will explore the frames of reference, which are used in mapping sensory information about movement targets into motor commands and muscle activation patterns. Moreover, we will make a quantitative comparison between the predictions of various models in the literature with the outcome of 3D movement experiments. Quite surprisingly, none of the existing models is able to explain the data in different movement paradigms.
35
Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2009; 19:1372-93. [PMID: 18842662] [DOI: 10.1093/cercor/bhn177]
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
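The network described above can be caricatured in a few dozen lines. This is a sketch under simplifying assumptions, not the authors' architecture or analyses: 2-D geometry, a pure rotation standing in for the eye-orientation transformation, and a small tanh MLP trained by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D simplification: inputs are the retinal positions of
# target and hand plus the eye-in-head angle; the desired output is the
# body-frame reach vector.  The rotation by eye angle makes the mapping
# nonlinear in the inputs, as in the 3-D visuomotor transformation.
def make_batch(n):
    eye = rng.uniform(-np.pi / 4, np.pi / 4, n)
    tgt_body = rng.uniform(-1, 1, (n, 2))
    hand_body = rng.uniform(-1, 1, (n, 2))
    c, s = np.cos(eye), np.sin(eye)
    rot = np.stack([np.stack([c, -s], -1), np.stack([s, c], -1)], 1)  # (n,2,2)
    tgt_ret = np.einsum('nij,nj->ni', rot, tgt_body)    # body -> retinal
    hand_ret = np.einsum('nij,nj->ni', rot, hand_body)
    x = np.concatenate([tgt_ret, hand_ret, eye[:, None]], 1)
    y = tgt_body - hand_body                            # reach vector
    return x, y

# Feed-forward net: input, two hidden tanh layers, linear output.
sizes = [5, 32, 32, 2]
W = [rng.normal(0, 1 / np.sqrt(m), (m, k)) for m, k in zip(sizes, sizes[1:])]
b = [np.zeros(k) for k in sizes[1:]]

def forward(x):
    a = [x]
    for Wi, bi in zip(W[:-1], b[:-1]):
        a.append(np.tanh(a[-1] @ Wi + bi))
    a.append(a[-1] @ W[-1] + b[-1])
    return a

x, y = make_batch(4096)
losses = []
for step in range(2000):
    a = forward(x)
    err = a[-1] - y
    losses.append(float((err ** 2).mean()))
    g = 2 * err / len(x)                       # mean-squared-error gradient
    for i in range(len(W) - 1, -1, -1):
        gW, gb = a[i].T @ g, g.sum(0)
        if i > 0:
            g = (g @ W[i].T) * (1 - a[i] ** 2)  # backprop through tanh
        W[i] -= 0.1 * gW
        b[i] -= 0.1 * gb

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

Training drives the loss well below the purely linear solution, since the hidden units must learn eye-angle-dependent (gain-field-like) modulation to undo the rotation; probing such trained units is the kind of simulated electrophysiology the paper performs.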
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada
36
Thompson AA, Henriques DYP. Updating visual memory across eye movements for ocular and arm motor control. J Neurophysiol 2008; 100:2507-14. [PMID: 18768640] [DOI: 10.1152/jn.90599.2008]
Abstract
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Affiliation(s)
- Aidan A Thompson
- Centre for Vision Research, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3
37
Sorrento GU, Henriques DYP. Reference frame conversions for repeated arm movements. J Neurophysiol 2008; 99:2968-84. [PMID: 18400956] [DOI: 10.1152/jn.90225.2008]
Abstract
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.
Affiliation(s)
- Gianluca U Sorrento
- York University, School of Kinesiology and Health Science, Bethune College, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada
38
Lemay M, Chouinard S, Richer F, Lesperance P. Huntington's disease affects movement termination. Behav Brain Res 2007; 187:153-8. [PMID: 17980441] [DOI: 10.1016/j.bbr.2007.09.016]
Abstract
Huntington's disease (HD) is a neurodegenerative disease affecting the striatum and associated with deficits in voluntary movement in early stages. The final portion of aiming movements is particularly affected in HD and one hypothesis is that this deficit is linked to attention or terminal control requirements. Sixteen patients with early HD and 16 age-matched controls were examined in aiming movements. Four conditions manipulated movement termination requirements (discrete movements with a complete stop vs. cyclical back-and-forth movements) and the presence of flankers around the target. Reducing movement termination requirements significantly attenuated deficits in the final movement phase in patients. The presence of flankers around the target affected the initial portion of movements but did not affect the two groups differentially. These results indicate that terminal control requirements affect voluntary movements in HD. This suggests that frontostriatal systems are involved in movement termination.
Affiliation(s)
- Martin Lemay
- Centre de Réadaptation Marie-Enfant, Hôpital Ste-Justine, Montréal, QC, Canada.
39
Bernier PM, Gauthier GM, Blouin J. Evidence for distinct, differentially adaptable sensorimotor transformations for reaches to visual and proprioceptive targets. J Neurophysiol 2007; 98:1815-9. [PMID: 17634334] [DOI: 10.1152/jn.00570.2007]
Abstract
Recent evidence suggests that planning a reaching movement entails similar stages and common networks irrespective of whether the target location is defined through visual or proprioceptive cues. Here we test whether the transformations that convert the sensory information regarding target location into the required motor output are common for both types of reaches. To do so, we adaptively modified these sensorimotor transformations through exposure to displacing prisms and hypothesized that if they are common to both types of reaches, the aftereffects observed for reaches to visual targets would generalize to reaches to a proprioceptive target. Subjects (n = 16) were divided into two groups that differed with respect to the sensory modality of the targets (visual or proprioceptive) used in the pre- and posttests. The adaptation phase was identical for both groups and consisted of movements toward visual targets while wearing 10.5° horizontally displacing prisms. We observed large aftereffects consistent with the magnitude of the prism-induced shift when reaching toward visual targets in the posttest, but no significant aftereffects for movements toward the proprioceptive target. These results provide evidence that distinct, differentially adaptable sensorimotor transformations underlie the planning of reaches to visual and proprioceptive targets.
Affiliation(s)
- Pierre-Michel Bernier
- Lab. de Neurobiologie de la Cognition, Centre National de la Recherche Scientifique and Aix Marseille Université, Marseille, France
40
Vaziri S, Diedrichsen J, Shadmehr R. Why does the brain predict sensory consequences of oculomotor commands? Optimal integration of the predicted and the actual sensory feedback. J Neurosci 2006; 26:4188-97. [PMID: 16624939] [PMCID: PMC1473981] [DOI: 10.1523/jneurosci.4747-05.2006]
Abstract
When the brain initiates a saccade, it uses a copy of the oculomotor commands to predict the visual consequences: for example, if one fixates a reach target, a peripheral saccade will produce an internal estimate of the new retinal location of the target, a process called remapping. In natural settings, the target likely remains visible after the saccade. So why should the brain predict the sensory consequence of the saccade when after its completion, the image of the target remains visible? We hypothesized that in the post-saccadic period, the brain integrates target position information from two sources: one based on remapping and another based on the peripheral view of the target. The integration of information from these two sources could produce a less variable target estimate than is possible from either source alone. Here, we show that reaching toward targets that were initially foveated and remapped had significantly less variance than reaches relying on peripheral target information. Furthermore, in a more natural setting where both sources of information were available simultaneously, variance of the reaches was further reduced as predicted by integration. This integration occurred in a statistically optimal manner, as demonstrated by the change in integration weights when we manipulated the uncertainty of the post-saccadic target estimate by varying exposure time. Therefore, the brain predicts the sensory consequences of motor commands because it integrates its prediction with the actual sensory information to produce an estimate of sensory space that is better than possible from either source alone.
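The optimal-integration claim is standard minimum-variance (maximum-likelihood) cue combination, which is easy to verify numerically; the noise levels below are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Illustrative (made-up) noise levels for the two target estimates.
target = 0.0
sigma_remap, sigma_periph = 1.5, 2.0

remap = target + rng.normal(0, sigma_remap, n)    # predicted / remapped estimate
periph = target + rng.normal(0, sigma_periph, n)  # peripheral visual estimate

# Minimum-variance (inverse-variance) weight on the remapped estimate.
w = sigma_periph**2 / (sigma_remap**2 + sigma_periph**2)
combined = w * remap + (1 - w) * periph

# Predicted combined variance: 1 / (1/s1^2 + 1/s2^2).
pred = 1 / (1 / sigma_remap**2 + 1 / sigma_periph**2)
print(remap.var(), periph.var(), combined.var(), pred)
```

The combined variance 1/(1/s1^2 + 1/s2^2) is below either single-cue variance, matching the reach-variance reduction the authors report when both remapped and peripheral information were available; varying exposure time in their task effectively changes sigma_periph and thus the weights.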
Affiliation(s)
- Siavash Vaziri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
41
Beurze SM, Van Pelt S, Medendorp WP. Behavioral reference frames for planning human reaching movements. J Neurophysiol 2006; 96:352-62. [PMID: 16571731] [DOI: 10.1152/jn.01362.2005]
Abstract
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
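The gain-element account in the last sentence can be sketched in one dimension (the 5% gain and all positions below are hypothetical): mis-scaling an eye-centered signal produces pointing errors that vary with gaze direction, whereas an error introduced at a body-centered stage would be gaze-invariant:

```python
# Hypothetical 1-D sketch: the reach is planned as a difference vector
# between eye-centered target and hand representations; the 5% gain error
# below stands in for the paper's "simple gain elements".
def plan_reach(target_body, hand_body, gaze, target_gain=1.05, hand_gain=1.0):
    target_eye = target_body - gaze            # body -> eye-centered
    hand_eye = hand_body - gaze
    vec = target_gain * target_eye - hand_gain * hand_eye
    endpoint = hand_body + vec
    return endpoint - target_body              # pointing error

for gaze in (-20.0, 0.0, 20.0):
    print(gaze, round(plan_reach(10.0, 0.0, gaze), 2))   # errors: 1.5, 0.5, -0.5
```

With equal gains on the target and hand signals the gaze terms cancel and the error is gaze-independent, so gaze-dependent errors specifically implicate unequal modulation of the eye-centered signals, which is the diagnostic logic of the reference frame analysis.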
Affiliation(s)
- Sabine M Beurze
- Nijmegen Institute for Cognition and Information, Radboud University of Nijmegen, Nijmegen, The Netherlands.
42
Hondzinski JM, Cui Y. Allocentric cues do not always improve whole body reaching performance. Exp Brain Res 2006; 174:60-73. [PMID: 16565811] [DOI: 10.1007/s00221-006-0421-y]
Abstract
The aim of this investigation was to gain further insight into control strategies used for whole body reaching tasks. Subjects were requested to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK) with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30 degrees to the right or 20 degrees to the left of target location. Control trials, where targets remained in place, and remembered target trials were randomly presented. We recorded movements of the hand, eye and head, while subjects stepped and reached to real or remembered target locations. Lateral, vertical and anterior-posterior (AP) hand errors and eye location, and gaze direction deviations were determined relative to control trials. Final hand location errors varied by target height, lighting condition and gaze eccentricity. Lower reaches in the DARK compared to the LIGHT condition were common, and when matched with a tendency to reach above the low target, help explain more accurate reaches for this target in darkness. Anchoring the gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye locations and gaze directions, which were deemed significant predictors of final reach errors, accounting for 17-47% of final hand error variance. Results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for lateral or vertical directions.
When combined with data regarding hand error predictions, we found that subjects' alterations in body movement in the AP direction were associated with AP adjustments in their reach, but final hand position adjustments were associated with gaze direction alterations for movements in the vertical and horizontal directions. These results support the hypothesis that gaze direction provides a control signal for hand and body movement and that this control signal is used for movement direction and not amplitude.
Affiliation(s)
- Jan M Hondzinski
- Department of Kinesiology, Louisiana State University, 112 Long Fieldhouse, Baton Rouge, LA 70803, USA.
43
Knox JJ, Coppieters MW, Hodges PW. Do you know where your arm is if you think your head has moved? Exp Brain Res 2006; 173:94-101. [PMID: 16565812] [DOI: 10.1007/s00221-006-0368-z]
Abstract
Reproduction of a previously presented elbow position is affected by changes in head position. As movement of the head is associated with local biomechanical changes, the aim of the present study was to determine if illusory changes in head position could induce similar effects on the reproduction of elbow position. Galvanic vestibular stimulation (GVS) was applied to healthy subjects in supine lying. The stimulus was applied during the presentation of an elbow position, which the subject then reproduced without stimulation. In the first study, 13 subjects received 1.5 mA stimuli, which caused postural sway in standing, confirming that the firing of vestibular afferents was affected, but no illusory changes in head position were reported. In the second study, 13 subjects received 2.0-3.0 mA GVS. Six out of 13 subjects reported consistent illusory changes in head position, away from the side of the anode. In these subjects, anode right stimulation induced illusory left lateral flexion and elbow joint position error towards extension (p=0.03), while anode left tended to have the opposite effect (p=0.16). The GVS had no effect on error in subjects who did not experience illusory head movement with either 1.5 mA stimulus (p=0.8) or 2.0-3.0 mA stimulus (p=0.7). This study demonstrates that the accuracy of elbow repositioning is affected by illusory changes in head position. These results support the hypothesis that the perceived position of proximal body segments is used in the planning and performance of accurate upper limb movements.
Affiliation(s)
- Joanna J Knox
- Division of Physiotherapy, School of Health and Rehabilitation Sciences, The University of Queensland, 4072 Brisbane, Qld, Australia
44
Guerraz M, Navarro J, Ferrero F, Cremieux J, Blouin J. Perceived versus actual head-on-trunk orientation during arm movement control. Exp Brain Res 2005; 172:221-9. [PMID: 16369783] [DOI: 10.1007/s00221-005-0316-3]
Abstract
Static roll head tilt induces bias in the trajectory of upper limb voluntary movements. The aim of the experiment was to investigate whether this bias is dependent on the perception of body configuration rather than on its actual configuration. We used the 'return' phenomenon as a method to produce dissociation between perceived and actual head tilt. Static roll head tilt in supine subjects was sustained for 15 min during which subjects were periodically required to estimate verbally the tilt of their head respective to their trunk and draw, with their right index finger, straight lines aligned with their trunk. After 15 min, subjects' heads were realigned with the trunk, and subjects continued to give verbal estimates of head position and perform the motor task. Results showed that the initial angular deviation of the lines in the direction opposite to head tilt gradually diminished. The adaptation was noticeable within the first 3-5 min of tilt and subsequently diminished. Verbal estimates confirmed the return phenomenon, i.e. subjects perceived their head as slowly returning towards its neutral position after a few minutes of sustained tilt. When realigned with the trunk, subjects experienced the illusion that their head was tilted in the opposite direction to the initial head tilt, and a line deviation in the opposite direction to those made on initial exposure was observed (after-effect). These results indicate that the angular deviations in motor production observed under static head tilt were largely related to the perceived body configuration and therefore favour the hypothesis that the conscious perception of body configuration plays a key role in organising sensorimotor tasks.
Affiliation(s)
- Michel Guerraz
- Laboratoire de Psychologie et Neurocognition, CNRS UMR 5105, Université de Savoie, 73376 Le Bourget du lac, Chambéry, France.
45
Guillaud E, Gauthier G, Vercher JL, Blouin J. Fusion of visuo-ocular and vestibular signals in arm motor control. J Neurophysiol 2005; 95:1134-46. [PMID: 16221749] [DOI: 10.1152/jn.00453.2005]
Abstract
Keeping the finger pointing at an Earth-fixed object during body displacements can be achieved if compensatory arm movements counteract the effect of the rotation on the hand's position in space. Here we investigated the fusion of signals that originated from systems having different neurophysiological properties (i.e., the visuo-oculomotor and vestibular systems) in the production of such compensatory arm movements. To this end, we analyzed the subjects' performance in three conditions that differed according to the information they provided about relative target-body motion. This information originated either from the vestibular or visuo-oculomotor system, or from a combination of the two. To highlight the integration of visuo-oculomotor and vestibular signals, we compared the arm response to motion frequencies presumed to allow or not to allow optimal vestibular and oculomotor responses. When they could be used in isolation, the ocular signals allowed long-latency but precise kinematics control of the arm movement, whereas vestibular signals allowed accurate motor response early in the rotation but their contribution declined as body rotation developed. Optimal performance was obtained throughout the whole movement and for all rotation frequencies when the visuo-oculomotor and vestibular signals could be used together. This increase in hand-tracking performance could not be explained by a unimodal model or an additive model of vestibular and ocular cues, even when using weighted signals. Rather, the results supported a functional model in which vestibular and visuo-oculomotor signals have different influences on the temporal and spatial aspects of hand movement compensating for body motion.
Affiliation(s)
- Etienne Guillaud
- Unité Mixte de Recherche Mouvement et Perception, Centre National de la Recherche Scientifique et Université de la Méditerranée, Marseille, France
46
Brown LE, Halpert BA, Goodale MA. Peripheral vision for perception and action. Exp Brain Res 2005; 165:97-106. [PMID: 15940498] [DOI: 10.1007/s00221-005-2285-y]
Abstract
Anatomical and physiological evidence suggests that vision-for-perception and vision-for-action may be differently sensitive to increasingly peripheral stimuli, and to stimuli in the upper and lower visual fields (VF). We asked participants to fixate one of 24 randomly presented LEDs arranged radially in eight directions and at three eccentricities around a central target location. One of two (small, large) target objects was presented briefly, and participants responded in two ways. For the action task, they reached for and grasped the target. For the perception task, they estimated target height by adjusting thumb-finger separation. In a final set of trials for each task, participants knew that target size would remain constant. We found that peak aperture increased with eccentricity for grasping, but not for perceptual estimations of size. In addition, peak grip aperture, but not size-estimation aperture, was more variable when targets were viewed in the upper as opposed to the lower VF. A second experiment demonstrated that prior knowledge about object size significantly reduced the variability of perceptual estimates, but had no effect on the variability of grip aperture. Overall, these results support the claim that peripheral VF stimuli are processed differently for perception and action. Moreover, they support the idea that the lower VF is specialized for the control of manual prehension. Finally, the effect of prior knowledge about target size on performance substantiates claims that perception is more tightly linked to memory systems than action.
Affiliation(s)
- Liana E Brown
- Department of Psychology, 6250 Social Sciences Centre, University of Western Ontario, London, ON, Canada, N6A 5C2.
47
Lemay M, Stelmach GE. Multiple frames of reference for pointing to a remembered target. Exp Brain Res 2005; 164:301-10. [PMID: 15782349] [DOI: 10.1007/s00221-005-2249-2]
Abstract
Pointing with an unseen hand to a visual target that disappears prior to movement requires maintaining a memory representation about the target location. The target location can be transformed either into a hand-centered frame of reference during target presentation and remembered under that form, or remembered in terms of retinal and extra-retinal cues and transformed into a body-centered frame of reference before movement initiation. The main goal of the present study was to investigate whether the target is stored in memory in an eye-centered frame, a hand-centered frame or in both frames of reference concomitantly. The task was to locate, memorize, and point to a target in a dark environment. Hand movement was not visible. During the recall delay, participants were asked to move their hand or their eyes in order to disrupt the memory representation of the target. Movement of the eyes during the recall delay was expected to disrupt an eye-centered memory representation whereas movement of the hand was expected to disrupt a hand-centered memory representation by increasing movement variability to the target. Variability of movement amplitude and direction was examined. Results showed that participants were more variable on the directional component of the movement when required to move their hand during the recall delay. In contrast, moving the eyes caused an increase in variability only in the amplitude component of the pointing movement. Taken together, these results suggest that the direction of the movement is coded and remembered in a frame of reference linked to the arm, whereas the amplitude of the movement is remembered in an eye-centered frame of reference.
Affiliation(s)
- Martin Lemay
- Cognitive Neuroscience Center, Université du Québec à Montréal, 8888, Montréal, Québec, Canada, H3C 3P8.
48
van Donkelaar P, Siu KC, Walterschied J. Saccadic Output Is Influenced by Limb Kinetics During Eye-Hand Coordination. J Mot Behav 2004; 36:245-52. [PMID: 15262621 DOI: 10.3200/jmbr.36.3.245-252] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In several recent studies, saccadic eye movements were found to be influenced by concurrent reaching movements. The authors investigated whether that influence originates in limb kinematic or kinetic signals. To dissociate those 2 possibilities, the authors required participants (N = 6) to generate pointing movements with a mass that either resisted or assisted limb motion. With practice, participants were able to generate pointing responses with very similar kinematics but whose kinetics varied in a systematic manner. The results showed that saccadic output was altered by the amount of force required to move the arm, consistent with an influence from limb kinetic signals. Because the interaction occurred before the pointing response began, the authors conclude that a predictive signal related to limb kinetics modulates saccadic output during tasks requiring eye-hand coordination.
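The kinematics/kinetics dissociation at the heart of this design follows directly from Newton's second law: if the movement trajectory (and hence acceleration) is held constant while the moved mass changes, the required force changes in proportion. A minimal sketch with invented values:

```python
def required_force(mass, acceleration):
    """Newton's second law: F = m * a.

    With identical acceleration profiles (matched kinematics), a different
    mass yields a systematically different force (different kinetics).
    """
    return mass * acceleration

peak_acceleration = 4.0  # m/s^2, identical in both conditions (invented)
force_assisted = required_force(0.5, peak_acceleration)  # lighter load
force_resisted = required_force(2.0, peak_acceleration)  # heavier load
# Kinematics match, but the limb kinetic signal differs fourfold.
ratio = force_resisted / force_assisted
```

If saccadic output tracks `force_*` rather than the shared kinematics, the influence on the eyes must originate in kinetic signals, which is the paper's conclusion.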
Affiliation(s)
- Paul van Donkelaar
- Department of Exercise and Movement Science, Institute of Neuroscience, University of Oregon, Eugene, OR, USA.
49
Crawford JD, Medendorp WP, Marotta JJ. Spatial Transformations for Eye-Hand Coordination. J Neurophysiol 2004; 92:10-19.
Abstract
Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye–hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
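The "vector displacement commands, rotated by eye and head orientation" that this review describes can be sketched in 2-D: a gaze-centered target vector is rotated by eye-in-head orientation, then by head-on-shoulder orientation, then translated by the eye's offset from the shoulder to yield a shoulder-centered reach vector. All angles and offsets below are invented for illustration; the actual transformation is 3-D and noncommutative in rotation order.

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def eye_to_shoulder(target_eye, eye_angle, head_angle, eye_offset):
    """Rotate an eye-centered target by eye-in-head and head-on-shoulder
    orientation, then translate by the eye's position relative to the
    shoulder, giving a shoulder-centered reach vector."""
    return rot(head_angle) @ (rot(eye_angle) @ target_eye) + eye_offset

target_eye = np.array([0.30, 0.00])  # 30 cm straight ahead of gaze
eye_offset = np.array([0.00, 0.25])  # eye sits 25 cm above the shoulder
reach_vec = eye_to_shoulder(target_eye,
                            np.deg2rad(20),   # eye-in-head rotation
                            np.deg2rad(10),   # head-on-shoulder rotation
                            eye_offset)
```

The key point the review makes is visible even in this toy: the same retinal vector maps to different shoulder-centered reach commands whenever eye or head orientation changes, so the transformation cannot be a fixed vector addition.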
Affiliation(s)
- J D Crawford
- Canadian Institutes of Health Research Group for Action and Perception, York Centre for Vision Research, Department of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada.
50
Hondzinski JM. Comparing human reaches across three viewing conditions in a step and reach task. Neurosci Lett 2004; 357:25-8. [PMID: 15036605 DOI: 10.1016/j.neulet.2003.12.037] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2003] [Revised: 11/06/2003] [Accepted: 12/03/2003] [Indexed: 11/25/2022]
Abstract
The effects of gaze and step direction on a step-and-reach task were studied to gain insight into possible motor control strategies used in goal-directed whole-body movements. Head, foot, and arm positions were monitored while subjects reached to nine targets in space. In dim light, subjects looked at and reached to actual targets or remembered targets, or reached to remembered targets initially located eccentric to gaze orientation. Final reaching errors were influenced by step and gaze orientations, but gaze direction variables were the strongest predictors of reach errors. Although spatial memory of target location was initially encoded in eye-centered coordinates, memory of eccentric target location was not updated when subjects stepped and reached. Thus, control strategies depended on gaze direction in the dim-light conditions.
Affiliation(s)
- Jan M Hondzinski
- Department of Kinesiology, Louisiana State University, 112 Long Fieldhouse, Baton Rouge, LA 70803, USA.