1. Isenstein EL, Waz T, LoPrete A, Hernandez Y, Knight EJ, Busza A, Tadin D. Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke. PLoS One 2022; 17:e0275220. [PMID: 36174027] [PMCID: PMC9522266] [DOI: 10.1371/journal.pone.0275220]
Abstract
The acquisition of sensory information about the world is a dynamic and interactive experience, yet the majority of sensory research focuses on perception without action and is conducted with participants who are passive observers with very limited control over their environment. This approach allows for highly controlled, repeatable experiments and has led to major advances in our understanding of basic sensory processing. Typical human perceptual experiences, however, are far more complex than conventional action-perception experiments and often involve bi-directional interactions between perception and action. Innovations in virtual reality (VR) technology offer an approach to close this notable disconnect between perceptual experiences and experiments. VR experiments can be conducted with a high level of empirical control while also allowing for movement and agency as well as controlled naturalistic environments. New VR technology also permits tracking of fine hand movements, allowing for seamless empirical integration of perception and action. Here, we used VR to assess how multisensory information and cognitive demands affect hand movements while reaching for virtual targets. First, we manipulated the visibility of the reaching hand to uncouple vision and proprioception in a task measuring accuracy while reaching toward a virtual target (n = 20, healthy young adults). The results, which as expected revealed multisensory facilitation, provided a rapid and a highly sensitive measure of isolated proprioceptive accuracy. In the second experiment, we presented the virtual target only briefly and showed that VR can be used as an efficient and robust measurement of spatial memory (n = 18, healthy young adults). Finally, to assess the feasibility of using VR to study perception and action in populations with physical disabilities, we showed that the results from the visual-proprioceptive task generalize to two patients with recent cerebellar stroke. 
Overall, we show that VR coupled with hand-tracking offers an efficient and adaptable way to study human perception and action.
Affiliation(s)
- E. L. Isenstein
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- T. Waz
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- A. LoPrete
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Center for Neuroscience and Behavior, American University, Washington, DC, United States of America
- Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, United States of America
- Y. Hernandez
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- The City College of New York, CUNY, New York, NY, United States of America
- E. J. Knight
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Division of Developmental and Behavioral Pediatrics, Department of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- A. Busza
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Department of Neurology, University of Rochester Medical Center, Rochester, NY, United States of America
- D. Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Department of Ophthalmology, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
2. Hondzinski JM, Soebbing CM, French AE, Winges SA. Different damping responses explain vertical endpoint error differences between visual conditions. Exp Brain Res 2016; 234:1575-87. [DOI: 10.1007/s00221-015-4546-8]
3.
Abstract
The angular declination of a target with respect to eye level is known to be an important cue to egocentric distance when objects are viewed or can be assumed to be resting on the ground. When targets are fixated, angular declination and the direction of the gaze with respect to eye level have the same objective value. However, any situation that limits the time available to shift gaze could leave to-be-localized objects outside the fovea, and, in these cases, the objective values would differ. Nevertheless, angular declination and gaze declination are often conflated, and the role for retinal eccentricity in egocentric distance judgments is unknown. We report two experiments demonstrating that gaze declination is sufficient to support judgments of distance, even when extraretinal signals are all that are provided by the stimulus and task environment. Additional experiments showed no accuracy costs for extrafoveally viewed targets and no systematic impact of foveal or peripheral biases, although a drop in precision was observed for the most retinally eccentric targets. The results demonstrate the remarkable utility of target direction, relative to eye level, for judging distance (signaled by angular declination and/or gaze declination) and are consonant with the idea that detection of the target is sufficient to capitalize on the angular declination of floor-level targets (regardless of the direction of gaze).
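The ground-plane geometry behind this cue is compact: for a target resting on the floor viewed from eye height h, egocentric distance follows from angular declination δ as d = h / tan(δ). A minimal sketch of that relation (the eye height and angles below are illustrative values, not data from the study):

```python
import math

def distance_from_declination(eye_height_m: float, declination_deg: float) -> float:
    """Egocentric distance to a ground-plane target from its angular
    declination below eye level: d = h / tan(delta)."""
    return eye_height_m / math.tan(math.radians(declination_deg))

# Steeper declinations correspond to nearer floor-level targets.
for delta in (10, 20, 40):
    d = distance_from_declination(1.6, delta)
    print(f"declination {delta:2d} deg -> distance {d:.2f} m")
```

Note that the formula only needs the target's direction relative to eye level, which is consistent with the abstract's point that mere detection of a floor-level target, foveated or not, suffices to exploit this cue.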
4. Falciati L, Gianesini T, Maioli C. Covert oculo-manual coupling induced by visually guided saccades. Front Hum Neurosci 2013; 7:664. [PMID: 24133442] [PMCID: PMC3794306] [DOI: 10.3389/fnhum.2013.00664]
Abstract
Hand pointing to objects under visual guidance is one of the most common motor behaviors in everyday life. In natural conditions, gaze and arm movements are commonly aimed at the same target and the accuracy of both systems is considerably enhanced if eye and hand move together. Evidence supports the viewpoint that gaze and limb control systems are not independent but at least partially share a common neural controller. The aim of the present study was to verify whether a saccade execution induces excitability changes in the upper-limb corticospinal system (CSS), even in the absence of a manual response. This effect would provide evidence for the existence of a common drive for ocular and arm motor systems during fast aiming movements. Single-pulse TMS was applied to the left motor cortex of 19 subjects during a task involving visually guided saccades, and motor evoked potentials (MEPs) induced in hand and wrist muscles of the contralateral relaxed arm were recorded. Subjects had to make visually guided saccades to one of 6 positions along the horizontal meridian (±5°, ±10°, or ±15°). During each trial, TMS was randomly delivered at one of 3 different time delays: shortly after the end of the saccade or 300 or 540 ms after saccade onset. Fast eye movements toward a peripheral target were accompanied by changes in upper-limb CSS excitability. MEP amplitude was highest immediately after the end of the saccade and gradually decreased at longer TMS delays. In addition to the change in overall CSS excitability, MEPs were specifically modulated in different muscles, depending on the target position and the TMS delay. By applying a simple model of a manual pointing movement, we demonstrated that the observed changes in CSS excitability are compatible with the facilitation of an arm motor program for a movement aimed at the same target of the gaze. These results provide evidence in favor of the existence of a common drive for both eye and arm motor systems.
Affiliation(s)
- Luca Falciati
- Department of Clinical and Experimental Sciences and National Institute of Neuroscience, University of Brescia Brescia, Italy
5. Scheidt RA, Ghez C, Asnani S. Patterns of hypermetria and terminal cocontraction during point-to-point movements demonstrate independent action of trajectory and postural controllers. J Neurophysiol 2011; 106:2368-82. [PMID: 21849613] [DOI: 10.1152/jn.00763.2010]
Abstract
We examined elbow muscle activities and movement kinematics to determine how subjects combine elementary control actions in performing movements with one and two trajectory segments. In reaching, subjects made a rapid elbow flexion to a visual target before stabilizing the limb with either a low or a higher level of elbow flexor/extensor coactivity (CoA), which was cued by target diameter. Cursor diameter provided real-time biofeedback of actual muscle CoA. In reversing, the limb was to reverse direction within the target and return to the origin with minimal CoA. We previously reported that subjects overshoot the goal when attempting a reversal after first having learned to reach accurately to the same target. Here we test the hypothesis that this hypermetria results because reversals co-opt the initial feedforward control action from the preceding trained reach, thereby failing to account for task-dependent changes in limb impedance induced by differences in flexor/extensor coactivity as the target is acquired (higher in reaching than reversing). Instructed increases in elbow CoA began mid-reach, thus increasing elbow impedance and reducing transient oscillations present in low CoA movements. Flexor EMG alone increased at movement onset. Test reversals incorporated the initial agonist activity of previous reaches but not the increased coactivity at the target, thus leading to overshoot. Moreover, we observed elevated coactivity in reversals upon returning to the origin even though coactivity in reaching was centered at the goal target. These findings refute the idea that the brain necessarily invokes distinct unitary control actions for reaches and reversals made to the same target. Instead, reaches and reversals share a common control action that initiates trajectories toward their target and another later control action that terminates movement and stabilizes the limb about its final resting posture, which differs in the two tasks.
Affiliation(s)
- Robert A Scheidt
- Dept. of Biomedical Engineering, Olin Engineering Center, 303, PO Box 1881, Marquette Univ., Milwaukee, WI 53201-1881, USA.
6. Visuomotor coordination is different for different directions in three-dimensional space. J Neurosci 2011; 31:7857-66. [PMID: 21613499] [DOI: 10.1523/jneurosci.0486-11.2011]
Abstract
In most visuomotor tasks in which subjects have to reach to visual targets or move the hand along a particular trajectory, eye movements have been shown to lead hand movements. Because the dynamics of vergence eye movements is different from that of smooth pursuit and saccades, we have investigated the lead time of gaze relative to the hand for the depth component (vergence) and in the frontal plane (smooth pursuit and saccades) in a tracking task and in a tracing task in which human subjects were instructed to move the finger along a 3D path. For tracking, gaze leads finger position on average by 28 ± 6 ms (mean ± SE) for the components in the frontal plane but lags finger position by 95 ± 39 ms for the depth dimension. For tracing, gaze leads finger position by 151 ± 36 ms for the depth dimension. For the frontal plane, the mean lead time of gaze relative to the hand is 287 ± 13 ms. However, we found that the lead time in the frontal plane was inversely related to the tangential velocity of finger. This inverse relation for movements in the frontal plane could be explained by assuming that gaze leads the finger by a constant distance of ∼ 2.6 cm (range of 1.5-3.6 cm across subjects).
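The inverse relation reported for the frontal plane follows directly from the constant-distance account: if gaze stays a fixed distance ahead of the finger, the lead time is simply that distance divided by tangential velocity. A small sketch using the ~2.6 cm average from the abstract (the finger speeds below are illustrative, not values from the paper):

```python
def lead_time_s(finger_speed_m_s: float, lead_distance_m: float = 0.026) -> float:
    """Constant-distance account of gaze lead: if gaze stays a fixed
    distance ahead of the finger, lead time = distance / tangential
    velocity, so faster finger movements yield shorter lead times."""
    return lead_distance_m / finger_speed_m_s

# Illustrative tangential velocities (m/s); 2.6 cm is the across-subject
# average lead distance reported in the abstract.
for v in (0.05, 0.10, 0.20):
    print(f"{v:.2f} m/s -> gaze lead {1000 * lead_time_s(v):.0f} ms")
```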
7. Cotti J, Vercher JL, Guillaume A. Hand–eye coordination relies on extra-retinal signals: Evidence from reactive saccade adaptation. Behav Brain Res 2011; 218:248-52. [DOI: 10.1016/j.bbr.2010.12.002]
8. Bédard P, Wu M, Sanes JN. Brain activation related to combinations of gaze position, visual input, and goal-directed hand movements. Cereb Cortex 2010; 21:1273-82. [PMID: 20974688] [DOI: 10.1093/cercor/bhq205]
Abstract
Humans reach to and acquire objects by transforming visual targets into action commands. How the brain integrates goals specified in a visual framework into signals suitable for an action plan requires clarification of whether visual input, per se, interacts with gaze position to formulate action plans. To further evaluate brain control of visual-motor integration, we assessed brain activation using functional magnetic resonance imaging. Humans performed goal-directed movements toward visible or remembered targets while fixating gaze left or right from center. We dissociated movement planning from performance using a delayed-response task and manipulated target visibility by its availability throughout the delay or blanking it 500 ms after onset. We found strong effects of gaze orientation on brain activation during planning and interactive effects of target visibility and gaze orientation on movement-related activation during performance in parietal and premotor cortices (PM), cerebellum, and basal ganglia, with more activation for rightward gaze at a visible target and no gaze modulation for movements directed toward remembered targets. These results demonstrate effects of gaze position on PM and movement-related processes and provide new information about how visual signals interact with gaze position in transforming visual inputs into motor goals.
Affiliation(s)
- Patrick Bédard
- Department of Neuroscience, Alpert Medical School of Brown University, Providence, RI 02912, USA
9. Scheidt RA, Lillis KP, Emerson SJ. Visual, motor and attentional influences on proprioceptive contributions to perception of hand path rectilinearity during reaching. Exp Brain Res 2010; 204:239-54. [PMID: 20532489] [PMCID: PMC2935593] [DOI: 10.1007/s00221-010-2308-1]
Abstract
We examined how proprioceptive contributions to perception of hand path straightness are influenced by visual, motor and attentional sources of performance variability during horizontal planar reaching. Subjects held the handle of a robot that constrained goal-directed movements of the hand to the paths of controlled curvature. Subjects attempted to detect the presence of hand path curvature during both active (subject driven) and passive (robot driven) movements that either required active muscle force production or not. Subjects were less able to discriminate curved from straight paths when actively reaching for a target versus when the robot moved their hand through the same curved paths. This effect was especially evident during robot-driven movements requiring concurrent activation of lengthening but not shortening muscles. Subjects were less likely to report curvature and were more variable in reporting when movements appeared straight in a novel "visual channel" condition previously shown to block adaptive updating of motor commands in response to deviations from a straight-line hand path. Similarly, compromised performance was obtained when subjects simultaneously performed a distracting secondary task (key pressing with the contralateral hand). The effects compounded when these last two treatments were combined. It is concluded that environmental, intrinsic and attentional factors all impact the ability to detect deviations from a rectilinear hand path during goal-directed movement by decreasing proprioceptive contributions to limb state estimation. In contrast, response variability increased only in experimental conditions thought to impose additional attentional demands on the observer. Implications of these results for perception and other sensorimotor behaviors are discussed.
Affiliation(s)
- Robert A Scheidt
- Department of Biomedical Engineering, Marquette University, Olin Engineering Center, 303, P.O. Box 1881, Milwaukee, WI, 53201-1881, USA.
10. Hondzinski JM, Kwon T. Pointing control using a moving base of support. Exp Brain Res 2009; 197:81-90. [DOI: 10.1007/s00221-009-1893-3]
11. Review of models for the generation of multi-joint movements in 3-D. Adv Exp Med Biol 2009; 629:523-50. [PMID: 19227519] [DOI: 10.1007/978-0-387-77064-2_28]
Abstract
Most studies in motor control have focused on movements in two dimensions and only very few studies have systematically investigated movements in three dimensions. As a consequence, the large majority of modeling studies for motor control have tested the predictions of these models using movement data in 2D. As we will explain, movements in 3D cannot be understood from movements in 2D by adding just another dimension. The third dimension adds new and unexpected complexities. In this chapter we will explore the frames of reference, which are used in mapping sensory information about movement targets into motor commands and muscle activation patterns. Moreover, we will make a quantitative comparison between the predictions of various models in the literature with the outcome of 3D movement experiments. Quite surprisingly, none of the existing models is able to explain the data in different movement paradigms.
12. Bridging of Models for Complex Movements in 3D. Adv Exp Med Biol 2009; 629:479-83. [DOI: 10.1007/978-0-387-77064-2_25]
13. Philbeck J, Sargent J, Arthur J, Dopkins S. Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception 2008; 37:511-34. [PMID: 18546661] [DOI: 10.1068/p5839]
Abstract
Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a response in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 degrees and 340 degrees. Non-visual pointing responses exhibited large constant errors (up to -32 degrees) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to -21 degrees). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding +/-5 degrees. Under our testing conditions, these results are not likely to stem from differences in perception-based versus action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
Affiliation(s)
- John Philbeck
- Department of Psychology, George Washington University, Washington, DC 20052, USA.
14. Gielen CCAM, Dijkstra TMH, Roozen IJ, Welten J. Coordination of gaze and hand movements for tracking and tracing in 3D. Cortex 2008; 45:340-55. [PMID: 18718579] [DOI: 10.1016/j.cortex.2008.02.009]
Abstract
In this study we have investigated movements in three-dimensional space. Since most studies have investigated planar movements (like ellipses, cloverleaf shapes and "figure eights") we have compared two generalizations of the two-thirds power law to three dimensions. In particular we have tested whether the two-thirds power law could be best described by tangential velocity and curvature in a plane (compatible with the idea of planar segmentation) or whether tangential velocity and curvature should be calculated in three dimensions. We defined total curvature in three dimensions as the square root of the sum of curvature squared and torsion squared. The results demonstrate that most of the variance is explained by tangential velocity and total curvature. This indicates that all three orthogonal components of movements in 3D are equally important and that movements are truly 3D and do not reflect a concatenation of 2D planar movement segments. In addition, we have studied the coordination of eye and hand movements in 3D by measuring binocular eye movements while subjects move the finger along a curved path. The results show that the directional component and finger position almost superimpose when subjects track a target moving in 3D. However, the vergence component of gaze leads finger position by about 250 msec. For drawing (tracing) the path of a visible 3D shape, the directional component of gaze leads finger position by about 225 msec, and the vergence component leads finger position by about 400 msec. These results are compatible with the idea that gaze leads hand position during drawing movement to assist prediction and planning of hand position in 3D space.
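The geometric quantities in this abstract can be computed directly from a sampled path: curvature kappa = |r' x r''| / |r'|^3, torsion tau = ((r' x r'') . r''') / |r' x r''|^2, and the abstract's total curvature sqrt(kappa^2 + tau^2). A sketch using finite differences, checked against a helix whose kappa and tau are known in closed form (the helix parameters and sampling are arbitrary choices, not from the paper):

```python
import numpy as np

def curvature_torsion(r: np.ndarray, t: np.ndarray):
    """Pointwise curvature and torsion of a sampled 3-D path r (N x 3):
    kappa = |r' x r''| / |r'|^3,  tau = ((r' x r'') . r''') / |r' x r''|^2."""
    d1 = np.gradient(r, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    d3 = np.gradient(d2, t, axis=0)
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    kappa = cross_norm / np.linalg.norm(d1, axis=1) ** 3
    tau = np.einsum('ij,ij->i', cross, d3) / cross_norm ** 2
    return kappa, tau

# Sanity check on a helix, where analytically kappa = a/(a^2 + b^2)
# and tau = b/(a^2 + b^2).
t = np.linspace(0.0, 4.0 * np.pi, 2000)
a, b = 1.0, 0.5
helix = np.stack([a * np.cos(t), a * np.sin(t), b * t], axis=1)
kappa, tau = curvature_torsion(helix, t)
total = np.sqrt(kappa ** 2 + tau ** 2)  # the abstract's "total curvature"
print(kappa[1000], tau[1000], total[1000])  # ~0.8, ~0.4, ~0.894
```

Fitting log tangential velocity against log total curvature of recorded trajectories is then a linear regression whose slope the two-thirds power law predicts to be about -1/3.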
Affiliation(s)
- Constantinus C A M Gielen
- Department of Biophysics, Radboud University Nijmegen, Geert Grooteplein 21, EZ Nijmegen, The Netherlands.
15. Bock O, Schmitz G, Grigorova V. Transfer of adaptation between ocular saccades and arm movements. Hum Mov Sci 2008; 27:383-95. [PMID: 18372070] [DOI: 10.1016/j.humov.2008.01.001]
16. Ghez C, Scheidt R, Heijink H. Different learned coordinate frames for planning trajectories and final positions in reaching. J Neurophysiol 2007; 98:3614-26. [PMID: 17804576] [DOI: 10.1152/jn.00652.2007]
Abstract
We previously reported that the kinematics of reaching movements reflect the superimposition of two separate control mechanisms specifying the hand's spatial trajectory and its final equilibrium position. We now asked whether the brain maintains separate representations of the spatial goals for planning hand trajectory and final position. One group of subjects learned a 30 degrees visuomotor rotation about the hand's starting point while performing a movement reversal task ("slicing") in which they reversed direction at one target and terminated movement at another. This task required accuracy in acquiring a target mid-movement. A second group adapted while moving to -- and stabilizing at -- a single target ("reaching"). This task required accuracy in specifying an intended final position. We examined how learning in the two tasks generalized both to movements made from untrained initial positions and to movements directed toward untrained targets. Shifting initial hand position had differential effects on the location of reversals and final positions: Trajectory directions remained unchanged and reversal locations were displaced in slicing whereas final positions of both reaches and slices were relatively unchanged. Generalization across directions in slicing was consistent with a hand-centered representation of desired reversal point as demonstrated previously for this task whereas the distributions of final positions were consistent with an eye-centered representation as found previously in studies of pointing in three-dimensional space. Our findings indicate that the intended trajectory and final position are represented in different coordinate frames, reconciling previous conflicting claims of hand-centered (vectorial) and eye-centered representations in reach planning.
Affiliation(s)
- Claude Ghez
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA.
17. Schlicht EJ, Schrater PR. Impact of coordinate transformation uncertainty on human sensorimotor control. J Neurophysiol 2007; 97:4203-14. [PMID: 17409174] [DOI: 10.1152/jn.00160.2007]
Abstract
Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
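The storage-noise argument can be sketched numerically: remapping adds coordinate-transformation noise to the stored target estimate, independent Gaussian noise sources add in variance, and a grip-aperture margin proportional to the combined uncertainty therefore grows with gaze eccentricity. All constants below (noise magnitudes, the linear CTU model, the margin scaling k) are illustrative assumptions, not parameters from the paper:

```python
import math

def fused_sd(sigma_visual: float, sigma_ctu: float) -> float:
    """Independent Gaussian noise sources add in variance, so remapping a
    target through a noisy joint-angle estimate inflates its stored sd."""
    return math.sqrt(sigma_visual ** 2 + sigma_ctu ** 2)

def grip_margin(sd: float, k: float = 2.0) -> float:
    """Hypothetical safety margin added to maximum grip aperture,
    proportional to positional uncertainty (k is an assumed constant)."""
    return k * sd

# Assumed linear growth of CTU with gaze eccentricity away from forward
# view (cm of noise per degree); 0.3 cm visual noise is also assumed.
for ecc_deg in (0, 15, 30):
    sd = fused_sd(0.3, 0.02 * ecc_deg)
    print(f"gaze {ecc_deg:2d} deg -> sd {sd:.2f} cm, margin {grip_margin(sd):.2f} cm")
```

The qualitative prediction matches the abstract: with the object occluded, larger eye-in-head rotations mean larger CTU, larger combined uncertainty, and hence a wider maximum grip aperture.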
Affiliation(s)
- Erik J Schlicht
- Department of Psychology, Univ. of Minnesota, N218 Elliott Hall, 75 East River Rd., Minneapolis, MN 55455, USA
18. White BJ, Kerzel D, Gegenfurtner KR. Visually guided movements to color targets. Exp Brain Res 2006; 175:110-26. [PMID: 16733702] [DOI: 10.1007/s00221-006-0532-5]
Abstract
The pathways controlling motor behavior are believed to exhibit little selectivity for color, but there is growing evidence suggesting that color signals can be used to guide actions. We investigated this by having observers make a saccade or a rapid pointing movement to a small, peripherally flashed (100 ms) Gaussian target (SD=0.5 degrees) defined exclusively by luminance (maximum contrast) or color (from cardinal DKL red-green or blue-yellow axes, at maximum saturation). We found no difference in saccadic or pointing accuracy for luminance or color targets. The same was true using shutter goggles during pointing (to minimize the use of external cues), and when the luminance contrast of color targets was varied by up to +/-10%. In terms of response times, both eye and hand latencies increased with target eccentricity for R-G targets only, in a manner consistent with the sensitivity of this channel across eccentricity. We found little difference in response latencies between luminance and color targets once matched in terms of cone contrast. While RTs were longer when coupled with a goal directed pointing movement (versus a simple reaction without pointing), the difference was the same for color or luminance targets, suggesting that the spatial coding for the movements was also the same. In a final experiment we compared the accuracy of pointing to color-naming performance in a 4AFC procedure. The psychometric functions relating pointing accuracy (% correct quadrant) to color-naming (% correct color-name) were identical. Taken together, the results show that human observers can efficiently use pure chromatic signals to guide actions.
Affiliation(s)
- Brian J White
- Justus-Liebig-Universität Giessen, Abteilung Allgemeine Psychologie, Otto-Behaghel-Strasse 10F, 35394, Giessen, Germany.
19
Beurze SM, Van Pelt S, Medendorp WP. Behavioral reference frames for planning human reaching movements. J Neurophysiol 2006; 96:352-62. [PMID: 16571731 DOI: 10.1152/jn.01362.2005] [Citation(s) in RCA: 56] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
Affiliation(s)
- Sabine M Beurze
- Nijmegen Institute for Cognition and Information, Radboud University of Nijmegen, Nijmegen, The Netherlands.
20
Hondzinski JM, Cui Y. Allocentric cues do not always improve whole body reaching performance. Exp Brain Res 2006; 174:60-73. [PMID: 16565811 DOI: 10.1007/s00221-006-0421-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2005] [Accepted: 02/24/2006] [Indexed: 10/24/2022]
Abstract
The aim of this investigation was to gain further insight into control strategies used for whole body reaching tasks. Subjects were requested to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK) with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30 degrees to the right or 20 degrees to the left of target location. Control trials, where targets remained in place, and remembered target trials were randomly presented. We recorded movements of the hand, eye and head, while subjects stepped and reached to real or remembered target locations. Lateral, vertical, and anterior-posterior (AP) hand errors, as well as eye location and gaze direction deviations, were determined relative to control trials. Final hand location errors varied by target height, lighting condition and gaze eccentricity. Lower reaches in the DARK compared to the LIGHT condition were common, and when matched with a tendency to reach above the low target, help explain more accurate reaches for this target in darkness. Anchoring the gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye locations and gaze directions, which were deemed significant predictors of final reach errors, accounting for 17-47% of final hand error variance. Results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for lateral or vertical directions. When combined with data regarding hand error predictions, we found that subjects' alterations in body movement in the AP direction were associated with AP adjustments in their reach, but final hand position adjustments were associated with gaze direction alterations for movements in the vertical and horizontal directions. These results support the hypothesis that gaze direction provides a control signal for hand and body movement and that this control signal is used for movement direction and not amplitude.
Affiliation(s)
- Jan M Hondzinski
- Department of Kinesiology, Louisiana State University, 112 Long Fieldhouse, Baton Rouge, LA 70803, USA.
21
Mrotek LA, Gielen CCAM, Flanders M. Manual tracking in three dimensions. Exp Brain Res 2005; 171:99-115. [PMID: 16308688 DOI: 10.1007/s00221-005-0282-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2005] [Accepted: 10/11/2005] [Indexed: 10/25/2022]
Abstract
Little is known about the manual tracking of targets that move in three dimensions. In the present study, human subjects followed, with the tip of a hand-held pen, a virtual target moving four times (period 5 s) around a novel, unseen path. Two basic types of target paths were used: a peanut-shaped Cassini ellipse and a quasi-spherical shape where four connected semicircles lay in orthogonal planes. The quasi-spherical shape was presented in three different sizes, and the Cassini shape was varied in spatial orientation and by folding it along one of the three bend axes. During the first cycle of Cassini shapes, the hand lagged behind the target by about 150 ms on average, which decreased to 100 ms during the last three cycles. Tracking performance gradually improved during the first 3 s of the first cycle and then stabilized. Tracking was especially good during the smooth, planar sections of the shapes, and time lag was significantly shorter when the tracking of a low-frequency component was compared to performance at a higher frequency (-88 ms at 0.2 Hz vs. -101 ms at 0.6 Hz). Even after the appropriate adjustment of the virtual target path to a virtual shape tracing condition, tracking in depth was poor compared to tracking in the frontal plane, resulting in a flattening of the hand path. In contrast to previous studies where target trajectories were linear or sinusoidal, these complex trajectories may have involved estimation of the overall shape, as well as prediction of target velocity.
Affiliation(s)
- Leigh A Mrotek
- Department of Neuroscience, University of Minnesota, 6-145 Jackson Hall, 312 Church St. S.E., Minneapolis, MN 55455, USA
22
Vindras P, Desmurget M, Viviani P. Error Parsing in Visuomotor Pointing Reveals Independent Processing of Amplitude and Direction. J Neurophysiol 2005; 94:1212-24. [PMID: 15857965 DOI: 10.1152/jn.01295.2004] [Citation(s) in RCA: 75] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
An experiment investigated systematic pointing errors in horizontal movements performed without visual feedback toward 48 targets placed symmetrically around two initial hand positions. Our main goal was to provide evidence in favor of the hypothesis that amplitude and direction of the movements are planned independently on the basis of the hand-target vector (vectorial parametric hypothesis, VP). The analysis was carried out mainly at the individual level. By screening a number of formal models of the potential error components, we found that only models compatible with the VP hypothesis provide an accurate description of the error pattern. A quantitative analysis showed that errors are explained mostly by a bias in the represented initial hand position (46% of the sum of squared errors) and a visuomotor gain bias (26%). Range effect (3%), directional biases (3%), and inertia-dependent amplitude modulations (1%) also provided significant contributions. The error pattern was incompatible with the view that movements are planned by specifying either a final posture or a final position. Instead, the results fully supported the view that, at least in the horizontal plane, amplitude and direction of pointing movements are planned independently in a hand- or target-centered frame of reference.
Affiliation(s)
- Philippe Vindras
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
23
Whitney D, Goodale MA. Visual motion due to eye movements helps guide the hand. Exp Brain Res 2005; 162:394-400. [PMID: 15654592 PMCID: PMC3890259 DOI: 10.1007/s00221-004-2154-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2004] [Accepted: 10/16/2004] [Indexed: 11/30/2022]
Abstract
Movement of the body, head, or eyes with respect to the world creates one of the most common yet complex situations in which the visuomotor system must localize objects. In this situation, vestibular, proprioceptive, and extra-retinal information contribute to accurate visuomotor control. The utility of retinal motion information, on the other hand, is questionable, since a single pattern of retinal motion can be produced by any number of head or eye movements. Here we investigated whether retinal motion during a smooth pursuit eye movement contributes to visuomotor control. When subjects pursued a moving object with their eyes and reached to the remembered location of a separate stationary target, the presence of a moving background significantly altered the endpoints of their reaching movements. A background that moved with the pursuit, creating a retinally stationary image (no retinal slip), caused the endpoints of the reaching movements to deviate in the direction of pursuit, overshooting the target. A physically stationary background pattern, however, producing retinal image motion opposite to the direction of pursuit, caused reaching movements to become more accurate. The results indicate that background retinal motion is used by the visuomotor system in the control of visually guided action.
Affiliation(s)
- David Whitney
- The Department of Psychology & The Center for Mind and Brain, The University of California, Davis, CA 95616, USA.
24
Keijsers NLW, Admiraal MA, Cools AR, Bloem BR, Gielen CCAM. Differential progression of proprioceptive and visual information processing deficits in Parkinson's disease. Eur J Neurosci 2005; 21:239-48. [PMID: 15654861 DOI: 10.1111/j.1460-9568.2004.03840.x] [Citation(s) in RCA: 64] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Indirect evidence suggests that patients with Parkinson's disease (PD) have deficits not only in motor performance, but also in the processing of sensory information. We investigated the role of sensory information processing in PD patients with a broad range of disease severities and in a group of age-matched controls. Subjects were tested in two conditions: pointing to a remembered visual target in complete darkness (DARK) and in the presence of an illuminated frame with a light attached to the index finger (FRAME). Differences in pointing errors in these two conditions reflect the effect of visual feedback on pointing. PD patients showed significantly larger constant and variable errors than controls in the DARK and FRAME condition. The difference of the variable error in the FRAME and DARK condition decreased as a function of the severity of PD. This indicates that deficits in the processing of proprioceptive information are already present when symptoms of PD are very mild, and that deficits in the use of visual feedback develop progressively in later stages of the disease. These results provide a tool for early diagnosis of PD and shed new light on the functional role of the brain structures that are affected in PD.
Affiliation(s)
- N L W Keijsers
- Department of Biophysics, Institute for Neuroscience, BEG 231, Radboud University Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen, Postbus 9101, The Netherlands.
25
Admiraal MA, Keijsers NLW, Gielen CCAM. Gaze Affects Pointing Toward Remembered Visual Targets After a Self-Initiated Step. J Neurophysiol 2004; 92:2380-93. [PMID: 15190097 DOI: 10.1152/jn.01046.2003] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
Affiliation(s)
- M A Admiraal
- Dept. Biophysics, Univ. of Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands.