1. Takahashi M, Veale R. Pathways for Naturalistic Looking Behavior in Primate I: Behavioral Characteristics and Brainstem Circuits. Neuroscience 2023; 532:133-163. [PMID: 37776945] [DOI: 10.1016/j.neuroscience.2023.09.009]
Abstract
Organisms control their visual worlds by moving their eyes, heads, and bodies. This control of "gaze" or "looking" is key to survival and intelligence, but our investigation of the underlying neural mechanisms in natural conditions is hindered by technical limitations. Recent advances have enabled measurement of both brain and behavior in freely moving animals in complex environments, expanding on historical head-fixed laboratory investigations. We juxtapose looking behavior as traditionally measured in the laboratory against looking behavior in naturalistic conditions, finding that behavior changes when animals are free to move or when stimuli have depth or sound. We specifically focus on the brainstem circuits driving gaze shifts and gaze stabilization. The overarching goal of this review is to reconcile historical understanding of the differential neural circuits for different "classes" of gaze shift with two inconvenient truths: (1) "classes" of gaze behavior are artificial, and (2) the neural circuits historically identified to control each "class" of behavior do not operate in isolation during natural behavior. Instead, multiple pathways combine adaptively and non-linearly depending on individual experience. While the neural circuits for reflexive and voluntary gaze behaviors traverse somewhat independent brainstem and spinal cord circuits, both can be modulated by feedback, meaning that most gaze behaviors are learned rather than hardcoded. Despite this flexibility, there are broadly enumerable neural pathways commonly adopted among primate gaze systems. Parallel pathways which carry simultaneous evolutionary and homeostatic drives converge in the superior colliculus, a layered midbrain structure which integrates and relays these volitional signals to brainstem gaze-control circuits.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Japan
- Richard Veale
- Department of Neurobiology, Graduate School of Medicine, Kyoto University, Japan
2. Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. [PMID: 37704829] [PMCID: PMC10499799] [DOI: 10.1038/s42003-023-05291-2]
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, and then we test against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
3. Ventral premotor cortex encodes task relevant features during eye and head movements. Sci Rep 2022; 12:22093. [PMID: 36543870] [PMCID: PMC9772313] [DOI: 10.1038/s41598-022-26479-2]
Abstract
Visual exploration of the environment is achieved through gaze shifts or coordinated movements of the eyes and the head. The kinematics and contributions of each component can be decoupled to fit the context of the required behavior, such as redirecting the visual axis without moving the head or rotating the head without changing the line of sight. A neural controller of these effectors, therefore, must show code relating to multiple muscle groups, and it must also differentiate its code based on context. In this study, we tested whether the ventral premotor cortex (PMv) in monkey exhibits a population code relating to various features of eye and head movements. We constructed three different behavioral tasks or contexts, each with four variables, to explore whether PMv modulates its activity in accordance with these factors. We found that the task-related population code in PMv differentiates between all task-related features, and we conclude that PMv carries information about task-relevant features during eye and head movements. Furthermore, this code represents both lower-level (effector and movement direction) and higher-level (context) information.
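The notion of a population code that separates task features can be illustrated with a toy nearest-centroid decoder. This is entirely synthetic (the context labels, rate means, and noise level are invented for illustration); the study itself analyzed recorded PMv activity with its own methods:

```python
import random

random.seed(0)

# Hypothetical mean firing rates (spikes/s) of a 3-neuron "population"
# under three invented task contexts.
means = {"eye-only": [10, 30, 5], "head-only": [25, 8, 12], "combined": [18, 20, 20]}

def sample(ctx):
    # Simulate one trial: mean rates for the context plus Gaussian noise.
    return [random.gauss(m, 2.0) for m in means[ctx]]

def decode(rates):
    # Nearest-centroid classification by squared Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda c: dist(rates, means[c]))

# If the population code differentiates contexts, decoding accuracy is high.
trials = [(ctx, sample(ctx)) for ctx in means for _ in range(50)]
accuracy = sum(decode(r) == ctx for ctx, r in trials) / len(trials)
print(accuracy)
```

Because the invented centroids are well separated relative to the noise, the decoder recovers the context on essentially every trial; overlapping codes would drive the accuracy toward chance (1/3 here).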
4. Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. [PMID: 33318073] [PMCID: PMC7877461] [DOI: 10.1523/eneuro.0446-20.2020]
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
5. Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. [PMID: 32390052] [DOI: 10.1093/cercor/bhaa090]
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
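The behavioral "shift" quantified above can be illustrated with a toy one-dimensional fit: model gaze endpoints as target position plus a fraction w of the landmark shift, and estimate w by least squares. The numbers below (shift size, noise level, trial count) are assumptions for illustration, not values from the study:

```python
import random

random.seed(1)

targets = [random.uniform(-20, 20) for _ in range(200)]  # target positions (deg)
shift = 8.0      # size of the surreptitious landmark shift (deg), assumed
w_true = 0.37    # fraction of the shift carried into gaze (behavioral value)
gaze = [t + w_true * shift + random.gauss(0, 1) for t in targets]

# Least-squares estimate: with this model, w = mean(gaze - target) / shift.
residuals = [g - t for g, t in zip(gaze, targets)]
w_hat = sum(residuals) / len(residuals) / shift
print(round(w_hat, 2))
```

The same logic, applied to neural response-field fits instead of gaze endpoints, is what lets a weight along an egocentric-to-allocentric continuum be attributed to a cell's code.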
Affiliation(s)
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3
6. Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. [PMID: 31792117] [PMCID: PMC6944480] [DOI: 10.1523/eneuro.0359-18.2019]
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
7. Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015] [DOI: 10.1152/jn.00072.2019]
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target.

NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
8. Wijayasinghe IB, Das SK, Miller HL, Bugnariu NL, Popa DO. Head-Eye Coordination of Humanoid Robot with Potential Controller. J INTELL ROBOT SYST 2018. [DOI: 10.1007/s10846-018-0948-8]
9. Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. The Influence of a Memory Delay on Spatial Coding in the Superior Colliculus: Is Visual Always Visual and Motor Always Motor? Front Neural Circuits 2018; 12:74. [PMID: 30405361] [PMCID: PMC6204359] [DOI: 10.3389/fncir.2018.00074]
Abstract
The memory-delay saccade task is often used to separate visual and motor responses in oculomotor structures such as the superior colliculus (SC), with the assumption that these same responses would sum with a short delay during immediate "reactive" saccades to visual stimuli. However, it is also possible that additional signals (suppression, delay) alter visual and/or motor responses in the memory-delay task. Here, we compared the spatiotemporal properties of visual and motor responses of the same SC neurons recorded during both the reactive and memory-delay tasks in two head-unrestrained monkeys. Comparing tasks, visual (aligned with target onset) and motor (aligned on saccade onset) responses were highly correlated across neurons, but the peak response of visual neurons and peak motor responses (of both visuomotor (VM) and motor neurons) were significantly higher in the reactive task. Receptive field organization was generally similar in both tasks. Spatial coding (along a Target-Gaze (TG) continuum) was also similar, with the exception that pure motor cells showed a stronger tendency to code future gaze location in the memory-delay task, suggesting a more complete transformation. These results suggest that the introduction of a trained memory delay alters both the vigor and spatial coding of SC visual and motor responses, likely due to a combination of saccade suppression signals and greater signal noise accumulation during the delay in the memory-delay task.
Collapse
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
10. Wilson JJ, Alexandre N, Trentin C, Tripodi M. Three-Dimensional Representation of Motor Space in the Mouse Superior Colliculus. Curr Biol 2018; 28:1744-1755.e12. [PMID: 29779875] [PMCID: PMC5988568] [DOI: 10.1016/j.cub.2018.04.021]
Abstract
From the act of exploring an environment to that of grasping a cup of tea, animals must put in register their motor acts with their surrounding space. In the motor domain, this is likely to be defined by a register of three-dimensional (3D) displacement vectors, whose recruitment allows motion in the direction of a target. One such spatially targeted action is seen in the head reorientation behavior of mice, yet the neural mechanisms underlying these 3D behaviors remain unknown. Here, by developing a head-mounted inertial sensor for studying 3D head rotations and combining it with electrophysiological recordings, we show that neurons in the mouse superior colliculus are either individually or conjunctively tuned to the three Eulerian components of head rotation. The average displacement vectors associated with motor-tuned colliculus neurons remain stable over time and are unaffected by changes in firing rate or the duration of spike trains. Finally, we show that the motor tuning of collicular neurons is largely independent from visual or landmark cues. By describing the 3D nature of motor tuning in the superior colliculus, we contribute to long-standing debate on the dimensionality of collicular motor decoding; furthermore, by providing an experimental paradigm for the study of the metric of motor tuning in mice, this study also paves the way to the genetic dissection of the circuits underlying spatially targeted motion.

Highlights:
- Development of inertial sensor system for monitoring 3D head movements in real time
- Neurons in the superior colliculus code for the full dimensionality of head rotations
- Firing rate correlates with velocity, but not head displacement angle
- The spatial tuning of collicular units is largely independent of visual or landmark cues
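The three Eulerian components of head rotation mentioned above can be recovered from a head-orientation matrix with standard conversions. This is a generic yaw-pitch-roll (ZYX) sketch, not the paper's sensor-fusion code:

```python
import math

def euler_to_matrix(yaw, pitch, roll):
    """ZYX (yaw-pitch-roll) Euler angles, in radians, to a 3x3 rotation matrix."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def matrix_to_euler(R):
    """Recover ZYX Euler angles from a rotation matrix (no gimbal-lock handling)."""
    pitch = -math.asin(R[2][0])
    yaw = math.atan2(R[1][0], R[0][0])
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll

# Round-trip check: a head orientation decomposes back into its three components.
R = euler_to_matrix(0.5, -0.2, 0.1)
print([round(a, 3) for a in matrix_to_euler(R)])  # → [0.5, -0.2, 0.1]
```

Each of the three recovered angles corresponds to one Eulerian component to which a collicular unit could be individually or conjunctively tuned.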
11. Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. [PMID: 26448341] [DOI: 10.1111/ejn.13093]
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Collapse
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
12. Muhammad W, Spratling MW. A Neural Model of Coordinated Head and Eye Movement Control. J INTELL ROBOT SYST 2016. [DOI: 10.1007/s10846-016-0410-8]
13. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation. eNeuro 2016; 3:eN-TNWR-0040-16. [PMID: 27092335] [PMCID: PMC4829728] [DOI: 10.1523/eneuro.0040-16.2016]
Abstract
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
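The T–G continuum analysis described above can be sketched with a toy one-dimensional fit: for simulated responses at two task steps, grid-search for the point between the target (T) and final-gaze (G) models that best predicts the data. All numbers are invented for illustration; the study fit full response fields of recorded FEF neurons:

```python
import random

random.seed(2)

# Simulated trials: target positions T and final gaze positions G = T + error.
T = [random.uniform(-15, 15) for _ in range(100)]
G = [t + random.gauss(0, 3) for t in T]

def simulate_response(alpha):
    # A unit coding the intermediate position (1-alpha)*T + alpha*G, plus noise.
    return [(1 - alpha) * t + alpha * g + random.gauss(0, 0.5)
            for t, g in zip(T, G)]

def fit_alpha(resp, steps=101):
    # Grid search over the T-G continuum for the best-fitting model (alpha in [0, 1]).
    def sse(alpha):
        return sum((r - ((1 - alpha) * t + alpha * g)) ** 2
                   for r, t, g in zip(resp, T, G))
    return min((k / (steps - 1) for k in range(steps)), key=sse)

# A unit early in the delay (near T-coding) vs late in the delay (near G-coding).
early, late = simulate_response(0.1), simulate_response(0.8)
print(fit_alpha(early), fit_alpha(late))
```

Tracking the fitted alpha across task epochs is the sense in which a spatial code can "progressively transition" from T through intermediate T–G codes toward G.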
14. Kress D, van Bokhorst E, Lentink D. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization. PLoS One 2015; 10:e0129287. [PMID: 26107413] [PMCID: PMC4481315] [DOI: 10.1371/journal.pone.0129287]
Abstract
Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.
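A minimal version of the head-saccade segmentation implied above: flag intervals where angular head velocity exceeds a threshold. The trace is synthetic and the 300 deg/s threshold is an illustrative choice, not the paper's criterion:

```python
def detect_head_saccades(angles_deg, dt, vel_thresh=300.0):
    """Return (start, end) sample-index pairs where |angular velocity| (deg/s)
    exceeds vel_thresh. angles_deg: sampled head orientation; dt: interval (s)."""
    vel = [(b - a) / dt for a, b in zip(angles_deg, angles_deg[1:])]
    fast = [abs(v) > vel_thresh for v in vel]
    intervals, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i
        elif not f and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(fast)))
    return intervals

# Synthetic trace at 1 kHz: fixation, one 60-deg head saccade in 20 ms
# (3000 deg/s, within the super-fast range reported above), then fixation.
dt = 0.001
trace = [0.0] * 50 + [3.0 * k for k in range(1, 21)] + [60.0] * 50
print(detect_head_saccades(trace, dt))  # → [(49, 69)]
```

In between the detected intervals the orientation is constant, which is the "gaze orientation held constant" inter-saccadic behavior the abstract describes.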
Affiliation(s)
- Daniel Kress
- Department of Mechanical Engineering, Stanford University, Stanford, California, United States of America
- Evelien van Bokhorst
- Department of Mechanical Engineering, Stanford University, Stanford, California, United States of America; Department of Mechanical Engineering and Aeronautics, City University London, London, United Kingdom
- David Lentink
- Department of Mechanical Engineering, Stanford University, Stanford, California, United States of America; Experimental Zoology Group, Wageningen University, Wageningen, The Netherlands
15
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015; 9:72. [PMID: 26113816 PMCID: PMC4461827 DOI: 10.3389/fncom.2015.00072] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2014] [Accepted: 05/27/2015] [Indexed: 11/13/2022] Open
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
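One behavioral principle the model must respect, the non-commutativity of 3-D rotations, can be demonstrated numerically in a few lines. The sketch below is not the authors' implementation; the quaternion convention and the 30° angles are assumed for illustration: composing a horizontal and a vertical rotation in the two possible orders yields different final orientations.

```python
import numpy as np

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_mul(q, p):
    """Hamilton product q*p (rotation p applied first, then q)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# 30 deg horizontal rotation (about the vertical z-axis) and
# 30 deg vertical rotation (about the horizontal y-axis)
h = quat_from_axis_angle([0, 0, 1], 30)
v = quat_from_axis_angle([0, 1, 0], 30)

hv = quat_mul(v, h)   # horizontal first, then vertical
vh = quat_mul(h, v)   # vertical first, then horizontal

# 3-D rotations do not commute: the two orders end in different orientations
assert not np.allclose(hv, vh)
```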
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
16
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. [PMID: 25491118 PMCID: PMC4585524 DOI: 10.1093/cercor/bhu279] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology
- Morteza Sadeh
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; School of Kinesiology and Health Sciences
- Gerald P Keith
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- Hongying Wang
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- John Douglas Crawford
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology; School of Kinesiology and Health Sciences; Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
17
18
Merker B. The efference cascade, consciousness, and its self: naturalizing the first person pivot of action control. Front Psychol 2013; 4:501. [PMID: 23950750 PMCID: PMC3738861 DOI: 10.3389/fpsyg.2013.00501] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2013] [Accepted: 07/16/2013] [Indexed: 11/13/2022] Open
Abstract
The 20 billion neurons of the neocortex have a mere hundred thousand motor neurons by which to express cortical contents in overt behavior. Implemented through a staggered cortical "efference cascade" originating in the descending axons of layer five pyramidal cells throughout the neocortical expanse, this steep convergence accomplishes final integration for action of cortical information through a system of interconnected subcortical way stations. Coherent and effective action control requires the inclusion of a continually updated joint "global best estimate" of current sensory, motivational, and motor circumstances in this process. I have previously proposed that this running best estimate is extracted from cortical probabilistic preliminaries by a subcortical neural "reality model" implementing our conscious sensory phenomenology. As such it must exhibit first person perspectival organization, suggested to derive from formatting requirements of the brain's subsystem for gaze control, with the superior colliculus at its base. Gaze movements provide the leading edge of behavior by capturing targets of engagement prior to contact. The rotation-based geometry of directional gaze movements places their implicit origin inside the head, a location recoverable by cortical probabilistic source reconstruction from the rampant primary sensory variance generated by the incessant play of collicularly triggered gaze movements. At the interface between cortex and colliculus lies the dorsal pulvinar. Its unique long-range inhibitory circuitry may precipitate the brain's global best estimate of its momentary circumstances through multiple constraint satisfaction across its afferents from numerous cortical areas and colliculus. As phenomenal content of our sensory awareness, such a global best estimate would exhibit perspectival organization centered on a purely implicit first person origin, inherently incapable of appearing as a phenomenal content of the sensory space it serves.
19
Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 2013; 37:1754-65. [PMID: 23489744 DOI: 10.1111/ejn.12175] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2010] [Revised: 01/14/2013] [Accepted: 01/30/2013] [Indexed: 11/29/2022]
Abstract
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada
20
Reaching the limit of the oculomotor plant: 3D kinematics after abducens nerve stimulation during the torsional vestibulo-ocular reflex. J Neurosci 2012; 32:13237-43. [PMID: 22993439 DOI: 10.1523/jneurosci.2595-12.2012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Accumulating evidence shows that the oculomotor plant is capable of implementing aspects of three-dimensional kinematics such as Listing's law and the half-angle rule. But these studies have only examined the eye under static conditions or with movements that normally obey these rules (e.g., saccades and pursuit). Here we test the capability of the oculomotor plant to rearrange itself as necessary for non-half-angle behavior. Three monkeys (Macaca mulatta) fixated five vertically displaced targets along the midsagittal plane while sitting on a motion platform that rotated sinusoidally about the naso-occipital axis. This activated the torsional, rotational vestibulo-ocular reflex, which exhibits a zero-angle or negative-angle rule (depending on the visual stimulus). On random sinusoidal cycles, we stimulated the abducens nerve and observed the resultant eye movements. If the plant has rearranged itself to implement this non-half-angle behavior, then stimulation should reveal this behavior. On the other hand, if the plant is only capable of half-angle behavior, then stimulation should reveal a half-angle rule. We find the latter to be true and therefore additional neural signals are likely necessary to implement non-half-angle behavior.
21
Farshadmanesh F, Byrne P, Wang H, Corneil BD, Crawford JD. Relationships between neck muscle electromyography and three-dimensional head kinematics during centrally induced torsional head perturbations. J Neurophysiol 2012; 108:2867-83. [PMID: 22956790 DOI: 10.1152/jn.00312.2012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The relationship between neck muscle electromyography (EMG) and torsional head rotation (about the nasooccipital axis) is difficult to assess during normal gaze behaviors with the head upright. Here, we induced acute head tilts similar to cervical dystonia (torticollis) in two monkeys by electrically stimulating 20 interstitial nucleus of Cajal (INC) sites or inactivating 19 INC sites by injection of muscimol. Animals engaged in a simple gaze fixation task while we recorded three-dimensional head kinematics and intramuscular EMG from six bilateral neck muscle pairs. We used a cross-validation-based stepwise regression to quantitatively examine the relationships between neck EMG and torsional head kinematics under three conditions: 1) unilateral INC stimulation (where the head rotated torsionally toward the side of stimulation); 2) corrective poststimulation movements (where the head returned toward upright); and 3) unilateral INC inactivation (where the head tilted toward the opposite side of inactivation). Our cross-validated results of corrective movements were slightly better than those obtained during unperturbed gaze movements and showed many more torsional terms, mostly related to velocity, although some orientation and acceleration terms were retained. In addition, several simplifying principles were identified. First, bilateral muscle pairs showed similar, but opposite EMG-torsional coupling terms, i.e., a change in torsional kinematics was associated with increased muscle activity on one side and decreased activity on the other side. Second, whenever torsional terms were retained in a given muscle, they were independent of the inputs we tested, i.e., INC stimulation vs. corrective motion vs. INC inactivation, and left vs. right INC data.
These findings suggest that, despite the complexity of the head-neck system, the brain can use a single, bilaterally coupled inverse model for torsional head control that is valid across different behaviors and movement directions. Combined with our previous data, these new data provide the terms for a more complete three-dimensional model of EMG: head rotation coupling for the muscles and gaze behaviors that we recorded.
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
22
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035 DOI: 10.1523/jneurosci.0990-11.2011] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 U) or were partially characterized from each of three initial fixation points (31 U). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed around target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
23
Farshadmanesh F, Byrne P, Keith GP, Wang H, Corneil BD, Crawford JD. Cross-validated models of the relationships between neck muscle electromyography and three-dimensional head kinematics during gaze behavior. J Neurophysiol 2011; 107:573-90. [PMID: 21994269 DOI: 10.1152/jn.00315.2011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The object of this study was to model the relationship between neck electromyography (EMG) and three-dimensional (3-D) head kinematics during gaze behavior. In two monkeys, we recorded 3-D gaze, head orientation, and bilateral EMG activity in the sternocleidomastoid, splenius capitis, complexus, biventer cervicis, rectus capitis posterior major, and occipital capitis inferior muscles. Head-unrestrained animals fixated and made gaze saccades between targets within a 60° × 60° grid. We performed a stepwise regression in which polynomial model terms were retained/rejected based on their tendency to increase/decrease a cross-validation-based measure of model generalizability. This revealed several results that could not have been predicted from knowledge of musculoskeletal anatomy. During head holding, EMG activity in most muscles was related to horizontal head orientation, whereas fewer muscles correlated to vertical head orientation and none to small random variations in head torsion. A fourth-order polynomial model, with horizontal head orientation as the only independent variable, generalized nearly as well as higher order models. For head movements, we added time-varying linear and nonlinear perturbations in velocity and acceleration to the previously derived static (head holding) models. The static models still explained most of the EMG variance, but the additional motion terms, which included horizontal, vertical, and torsional contributions, significantly improved the results. Several coordinate systems were used for both static and dynamic analyses, with Fick coordinates showing a marginal (nonsignificant) advantage. Thus, during gaze fixations, recruitment within the neck muscles from which we recorded contributed primarily to position-dependent horizontal orientation terms in our data set, with more complex multidimensional contributions emerging during the head movements that accompany gaze shifts. 
These are crucial components of the late neuromuscular transformations in a complete model of the 3-D head-neck system and should help constrain the study of premotor signals for head control during gaze behaviors.
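The cross-validation-based stepwise regression described above can be sketched compactly. The snippet below is a simplified, hypothetical illustration (synthetic data standing in for the recorded EMG; the term names, coefficients, and fold count are assumptions): candidate polynomial terms are retained only when they reduce a k-fold cross-validated prediction error, which is how terms that do not generalize, such as vertical or torsional terms for most muscles, come to be rejected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: "EMG" driven by horizontal head orientation H
# (linear + quadratic) plus noise; vertical orientation V is irrelevant.
H = rng.uniform(-30, 30, 200)
V = rng.uniform(-30, 30, 200)
emg = 0.5 + 0.03 * H + 0.001 * H**2 + rng.normal(0, 0.05, 200)

candidates = {"H": H, "H^2": H**2, "V": V, "V^2": V**2}

def cv_error(X, y, k=5):
    """k-fold cross-validated mean squared error of a linear fit."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        err += np.mean((X[fold] @ coef - y[fold]) ** 2)
    return err / k

# Forward stepwise selection: keep a term only if it improves generalization.
X = np.ones((len(emg), 1))          # start with intercept only
kept, best = [], cv_error(X, emg)
for name, col in candidates.items():
    Xtry = np.column_stack([X, col])
    e = cv_error(Xtry, emg)
    if e < best:
        X, best, kept = Xtry, e, kept + [name]

print(kept)   # typically only the horizontal terms survive cross-validation
```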
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario
24
Klier EM, Meng H, Angelaki DE. Revealing the kinematics of the oculomotor plant with tertiary eye positions and ocular counterroll. J Neurophysiol 2011; 105:640-9. [PMID: 21106901 PMCID: PMC3059169 DOI: 10.1152/jn.00737.2010] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2010] [Accepted: 11/18/2010] [Indexed: 11/22/2022] Open
Abstract
Retinal information is two-dimensional, whereas eye movements are three-dimensional. The oculomotor system solves this degrees-of-freedom problem by constraining eye positions to zero torsion (Listing's law) and determining how eye velocities change with eye position (half-angle rule). Here we test whether the oculomotor plant, in the absence of well-defined neural commands, can implement these constraints mechanically, not just in a primary position but for all eye and head orientations. We stimulated the abducens nerve at tertiary eye positions and when ocular counterroll was induced at tilted head orientations. Stimulation-induced eye velocities follow the half-angle rule, even for tertiary eye positions, and microstimulation at tilted head orientations elicits eye positions that adhere to torsionally shifted planes, similar to naturally occurring eye movements. These results support the notion that the oculomotor plant can continuously apply these three-dimensional rules correctly and appropriately for all eye and head orientations that obey Listing's law, demonstrating a major role of peripheral biomechanics in motor control.
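The half-angle rule tested here falls directly out of rotation geometry: for eye positions confined to Listing's plane, the instantaneous rotation axis tilts out of that plane by half the gaze eccentricity. Below is a minimal numerical check (an illustrative sketch, not the study's analysis; the axis convention, with x as the torsional axis and quaternion vector parts confined to the y-z plane for Listing positions, is assumed).

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product q*p of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

ecc = np.radians(40.0)                     # horizontal gaze eccentricity
c, s = np.cos(ecc / 2), np.sin(ecc / 2)

q1 = np.array([c, 0.0, 0.0, s])            # 40 deg eccentric, zero torsion
delta = 1e-4                               # tiny vertical step in Listing's plane
q2 = np.array([c, 0.0, delta, s])
q2 /= np.linalg.norm(q2)

dq = quat_mul(q2, quat_conj(q1))           # displacement quaternion
axis = dq[1:] / np.linalg.norm(dq[1:])     # instantaneous rotation axis

# Tilt of the axis out of Listing's plane (x = torsional component):
tilt = np.degrees(np.arcsin(abs(axis[0])))
# Half-angle rule: the tilt is half the 40 deg eccentricity.
assert abs(tilt - 20.0) < 0.1
```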
Affiliation(s)
- Eliana M Klier
- Washington University School of Medicine, Department of Anatomy and Neurobiology, Box 8108, 660 South Euclid Avenue, St. Louis, MO 63110, USA.
25
Monteon JA, Constantin AG, Wang H, Martinez-Trujillo J, Crawford JD. Electrical stimulation of the frontal eye fields in the head-free macaque evokes kinematically normal 3D gaze shifts. J Neurophysiol 2010; 104:3462-75. [PMID: 20881198 DOI: 10.1152/jn.01032.2009] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The frontal eye field (FEF) is a region of the primate prefrontal cortex that is central to eye-movement generation and target selection. It has been shown that neurons in this area encode commands for saccadic eye movements. Furthermore, it has been suggested that the FEF may be involved in the generation of gaze commands for the eye and the head. To test this suggestion, we systematically stimulated (with pulses of 300 Hz frequency, 200 ms duration, 30-100 μA intensity) the FEF of two macaques, with the head unrestrained, while recording three-dimensional (3D) eye and head rotations. In a total of 95 sites, the stimulation consistently elicited gaze-orienting movements ranging in amplitude from 2 to 172°, directed contralateral to the stimulation site, and with variable vertical components. These movements were typically a combination of eye-in-head saccades and head-in-space movements. We then performed a comparison between the stimulation-evoked movements and gaze shifts voluntarily made by the animal. The kinematics of the stimulation-evoked movements (i.e., their spatiotemporal properties, their velocity-amplitude relationships, and the relative contributions of the eye and the head as a function of movement amplitude) were very similar to those of natural gaze shifts. Moreover, they obeyed the same 3D constraints as the natural gaze shifts (i.e., modified Listing's law for eye-in-head movements). As in natural gaze shifts, saccade and vestibuloocular reflex torsion during stimulation-evoked movements were coordinated so that at the end of the head movement the eye-in-head ended up in Listing's plane. In summary, movements evoked by stimulation of the FEF closely resembled those of naturally occurring eye-head gaze shifts. Thus we conclude that the FEF explicitly encodes gaze commands and that the kinematic aspects of eye-head coordination are likely specified by downstream mechanisms.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada, M3J 1P3
26
Eye-head coordination in the guinea pig II. Responses to self-generated (voluntary) head movements. Exp Brain Res 2010; 205:445-54. [PMID: 20697698 DOI: 10.1007/s00221-010-2375-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2010] [Accepted: 07/15/2010] [Indexed: 10/19/2022]
Abstract
Retinal image stability is essential for vision but may be degraded by head movements. The vestibulo-ocular reflex (VOR) compensates for passive perturbations of head position and is usually assumed to be the major neural mechanism for ocular stability. During our recent investigation of vestibular reflexes in guinea pigs free to move their heads (Shanidze et al. in Exp Brain Res, 2010), we observed compensatory eye movements that could not have been initiated either by vestibular or neck proprioceptive reflexes because they occurred with zero or negative latency with respect to head movement. These movements always occurred in association with self-generated (active) head or body movements and thus anticipated a voluntary movement. We found the anticipatory responses to differ from those produced by the VOR in two significant ways. First, anticipatory responses are characterized by temporal synchrony with voluntary head movements (latency approximately 1 versus approximately 7 ms for the VOR). Second, the anticipatory responses have higher gains (0.80 vs. 0.46 for the VOR) and thus more effectively stabilize the retinal image during voluntary head movements. We suggest that anticipatory responses act synergistically with the VOR to stabilize retinal images. Furthermore, they are independent of actual vestibular sensation since they occur in guinea pigs with complete peripheral vestibular lesions. Conceptually, anticipatory responses could be produced by a feed-forward neural controller that transforms efferent motor commands for head movement into estimates of the sensory consequences of those movements.
27
Keith GP, Blohm G, Crawford JD. Influence of saccade efference copy on the spatiotemporal properties of remapping: a neural network study. J Neurophysiol 2009; 103:117-39. [PMID: 19846615 DOI: 10.1152/jn.91191.2008] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade-target in two-dimensional (2-D) topographic coordinates (Vtop), a temporally extended motor burst in 2-D topographic coordinates (Mtop), or a 3-D eye velocity signal in brain stem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and final dynamics of remapping. 
Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating that should be further distinguishable through currently available single- and multiunit recording paradigms.
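The "jumping hill" remapping the network learns can be caricatured in one dimension: a Gaussian hill of activity on a gaze-centered map is shifted by the saccade vector, so the map is again correct for the post-saccadic eye position. A toy sketch (assumed map size, tuning width, and saccade parameters; not the trained network):

```python
import numpy as np

# 1-D gaze-centered map, in degrees, with assumed 1-degree spacing.
positions = np.linspace(-40, 40, 81)

def hill(center, sigma=5.0):
    """Gaussian hill of activity centered at a retinal position."""
    return np.exp(-0.5 * ((positions - center) / sigma) ** 2)

target_retinal = 20.0                  # target 20 deg right of fixation
saccade = 15.0                         # 15 deg rightward saccade

pre = hill(target_retinal)             # activity before the saccade
post = hill(target_retinal - saccade)  # remapped by the saccade vector

# After remapping, the hill peaks at the new retinal position (5 deg).
assert abs(positions[np.argmax(post)] - 5.0) < 1e-9
```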
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research, and Canadian Institute of Health Research Group, York University, 4700 Keele St., Toronto, Ontario, Canada
28
Constantin AG, Wang H, Monteon JA, Martinez-Trujillo JC, Crawford JD. 3-Dimensional eye-head coordination in gaze shifts evoked during stimulation of the lateral intraparietal cortex. Neuroscience 2009; 164:1284-302. [PMID: 19733631 DOI: 10.1016/j.neuroscience.2009.08.066] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2008] [Revised: 08/27/2009] [Accepted: 08/29/2009] [Indexed: 11/28/2022]
Abstract
Coordinated eye-head gaze shifts have been evoked during electrical stimulation of the frontal cortex (supplementary eye field (SEF) and frontal eye field (FEF)) and superior colliculus (SC), but less is known about the role of lateral intraparietal cortex (LIP) in head-unrestrained gaze shifts. To explore this, two monkeys (M1 and M2) were implanted with recording chambers and 3-D eye + head search coils. Tungsten electrodes delivered trains of electrical pulses (usually 200 ms duration) to and around area LIP during head-unrestrained gaze fixations. A current of 200 μA consistently evoked small, short-latency contralateral gaze shifts from 152 sites in M1 and 243 sites in M2 (Constantin et al., 2007). Gaze kinematics were independent of stimulus amplitude and duration, except that subsequent saccades were suppressed. The average amplitude of the evoked gaze shifts was 8.46 degrees for M1 and 8.25 degrees for M2, with average head components of only 0.36 and 0.62 degrees respectively. The head's amplitude contribution to these movements was significantly smaller than in normal gaze shifts, and did not increase with behavioral adaptation. Stimulation-evoked gaze, eye and head movements qualitatively obeyed normal 3-D constraints (Donders' law and Listing's law), but with less precision. As in normal behavior, when the head was restrained LIP stimulation evoked eye-only saccades in Listing's plane, whereas when the head was not restrained, stimulation evoked saccades with position-dependent torsional components (driving the eye out of Listing's plane). In behavioral gaze-shifts, the vestibuloocular reflex (VOR) then drives torsion back into Listing's plane, but in the absence of subsequent head movement the stimulation-induced torsion was "left hanging". This suggests that the position-dependent torsional saccade components are preprogrammed, and that the oculomotor system was expecting a head movement command to follow the saccade.
These data show that, unlike SEF, FEF, and SC stimulation in nearly identical conditions, LIP stimulation fails to produce normally-coordinated eye-head gaze shifts.
Affiliation(s)
- A G Constantin
- Centre for Vision Research, York University, Toronto, ON, Canada M3J 1P3
29
Blum BM, Kremmyda O, Glasauer S, Büttner U. Neural constraints in kinematics of head-free gaze. BMC Neurosci 2009. [DOI: 10.1186/1471-2202-10-s1-p18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
30
Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009; 180:171-84. [PMID: 19427544 DOI: 10.1016/j.jneumeth.2009.03.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2009] [Revised: 03/08/2009] [Accepted: 03/09/2009] [Indexed: 10/21/2022]
Abstract
Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body-parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measure how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
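The bandwidth-and-reference-frame comparison described in this abstract can be illustrated with a toy leave-one-out computation. This is only an illustrative sketch, not Keith et al.'s implementation: the Nadaraya-Watson kernel regressor, the simulated "response field", and the jittered second frame are all assumptions chosen to show why a spatially coherent frame yields a smaller PRESS statistic.

```python
import math

def nw_predict(x, xs, ys, bandwidth, exclude=None):
    """Nadaraya-Watson (Gaussian kernel) estimate at x, optionally
    leaving one sample out (for the leave-one-out PRESS statistic)."""
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        if i == exclude:
            continue
        w = math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
        num += w * yi
        den += w
    return num / den

def press(xs, ys, bandwidth):
    """PREdictive Sum-of-Squares: squared error of each sample when
    predicted from all the *other* samples."""
    return sum((ys[i] - nw_predict(xs[i], xs, ys, bandwidth, exclude=i)) ** 2
               for i in range(len(xs)))

# Toy "response field": smooth Gaussian tuning over position in frame A,
# plus small alternating "noise".  Frame B re-expresses the same samples
# in a misaligned (jittered) frame, which destroys spatial coherence.
pos_a = [-20, -15, -10, -5, 0, 5, 10, 15, 20]
rate = [100 * math.exp(-0.5 * (p / 8.0) ** 2) + (1 if p % 2 else -1)
        for p in pos_a]
pos_b = [p + (3 if i % 2 else -3) for i, p in enumerate(pos_a)]

bandwidths = (2.0, 4.0, 8.0, 16.0)
best_a = min(press(pos_a, rate, bw) for bw in bandwidths)
best_b = min(press(pos_b, rate, bw) for bw in bandwidths)
# The coherent frame (A) yields the smaller best-bandwidth PRESS,
# so it would be categorized as the intrinsic reference frame.
```

As in the paper's method, the frame is selected by minimizing PRESS across candidate bandwidths within each frame, then comparing the minima across frames.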
Affiliation(s)
- Gerald P Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J1P3, Canada
31
Keshner E, Dhaher Y. Characterizing head motion in three planes during combined visual and base of support disturbances in healthy and visually sensitive subjects. Gait Posture 2008; 28:127-34. [PMID: 18162402 PMCID: PMC2577851 DOI: 10.1016/j.gaitpost.2007.11.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2007] [Revised: 10/31/2007] [Accepted: 11/07/2007] [Indexed: 02/02/2023]
Abstract
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.
Affiliation(s)
- E.A. Keshner
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL 60611; Dept. of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Y. Dhaher
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL 60611; Dept. of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208
32
Farshadmanesh F, Chang P, Wang H, Yan X, Corneil BD, Crawford JD. Neck muscle synergies during stimulation and inactivation of the interstitial nucleus of Cajal (INC). J Neurophysiol 2008; 100:1677-85. [PMID: 18579660 DOI: 10.1152/jn.90363.2008] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The interstitial nucleus of Cajal (INC) is thought to control torsional and vertical head posture. Unilateral microstimulation of the INC evokes torsional head rotation to positions that are maintained until stimulation offset. Unilateral INC inactivation evokes head position-holding deficits with the head tilted in the opposite direction. However, the underlying muscle synergies for these opposite behavioral effects are unknown. Here, we examined neck muscle activity in head-unrestrained monkeys before and during stimulation (50 μA, 200 ms, 300 Hz) and inactivation (injection of 0.3 μl of 0.05% muscimol) of the same INC sites. Three-dimensional eye and head movements were recorded simultaneously with electromyographic (EMG) activity in six bilateral neck muscles: sternocleidomastoid (SCM), splenius capitis (SP), rectus capitis posterior major (RCPmaj.), occipital capitis inferior (OCI), complexus (COM), and biventer cervicis (BC). INC stimulation evoked a phasic, short-latency (approximately 5-10 ms) facilitation and later (approximately 100-200 ms) a more tonic facilitation in the activity of ipsi-SCM, ipsi-SP, ipsi-COM, ipsi-BC, contra-RCPmaj., and contra-OCI. Unilateral INC inactivation led to an increase in the activity of contra-SCM, ipsi-SP, ipsi-RCPmaj., and ipsi-OCI and a decrease in the activity of contra-RCPmaj. and contra-OCI. Thus the influence of INC stimulation and inactivation were opposite on some muscles (i.e., contra-OCI and contra-RCPmaj.), but the comparative influences on other neck muscles were more variable. These results show that the relationship between the neck muscle responses during INC stimulation and inactivation is much more complex than the relationship between the overt behaviors.
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Canadian Institutes of Health Research Group for Action and Perception, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
33
Kunin M, Osaki Y, Cohen B, Raphan T. Rotation Axes of the Head During Positioning, Head Shaking, and Locomotion. J Neurophysiol 2007; 98:3095-108. [PMID: 17898142 DOI: 10.1152/jn.00764.2007] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Static head orientations obey Donders’ law and are postulated to be rotations constrained by a Fick gimbal. Head oscillations can be voluntary or generated during natural locomotion. Whether the rotation axes of the voluntary oscillations or during locomotion are constrained by the same gimbal is unknown and is the subject of this study. Head orientation was monitored with an Optotrak (Northern Digital). Human subjects viewed visual targets wearing pin-hole goggles to achieve static head positions with the eyes centered in the orbit. Incremental rotation axes were determined for pitch and yaw by computing the velocity vectors during head oscillation and during locomotion at 1.5 m/s on a treadmill. Static head orientation could be described by a generalization of the Fick gimbal by having the axis of the second rotation rotate by a fraction, k, of the angle of the first rotation without a third rotation. We have designated this as a k-gimbal system. Incremental rotation axes for both pitch and yaw oscillations were functions of the pitch but not the yaw head positions. The pivot point for head oscillations was close to the midpoint of the interaural line. During locomotion, however, the pivot point was considerably lower. These findings are well explained by an implementation of the k-gimbal model, which has a rotation axis superimposed on a Fick-gimbal system. This could be realized physiologically by the head interface with the dens and occipital condyles during head oscillation with a contribution of the lower spine to pitch during locomotion.
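The k-gimbal described in this abstract (the second rotation axis rotates by a fraction k of the first rotation angle) can be written out directly with rotation matrices. A minimal sketch, not the authors' code; the axis conventions (z vertical for yaw, y horizontal for pitch) are assumptions:

```python
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    return [list(row) for row in zip(*m)]

def axis_angle(axis, theta):
    """Rodrigues' formula: rotation by theta about a unit axis."""
    x, y, z = axis
    c, s, t = math.cos(theta), math.sin(theta), 1.0 - math.cos(theta)
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]

Z_AXIS = (0.0, 0.0, 1.0)   # vertical (yaw) axis
Y_AXIS = (0.0, 1.0, 0.0)   # horizontal (pitch) axis

def k_gimbal(yaw, pitch, k):
    """Head orientation under the k-gimbal rule: yaw about the vertical
    axis, then pitch about a horizontal axis that has itself been yawed
    by k * yaw.  k = 1 reduces to a Fick gimbal (head-fixed pitch axis);
    k = 0 keeps the pitch axis fixed in space."""
    p = axis_angle(Z_AXIS, k * yaw)        # rotates the pitch axis
    r_pitch = mat_mul(mat_mul(p, axis_angle(Y_AXIS, pitch)), transpose(p))
    return mat_mul(r_pitch, axis_angle(Z_AXIS, yaw))
```

With k = 1 this reproduces the classical Fick composition Rz(yaw)·Ry(pitch), since rotating about a rotated axis is P·Ry(pitch)·Pᵀ with P = Rz(yaw).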
Affiliation(s)
- Mikhail Kunin
- Institute for Neural and Intelligent Systems, Department of Computer and Information Science, Brooklyn College of the City University of New York, New York 11210, USA
34
Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol 2007; 98:696-709. [PMID: 17553952 DOI: 10.1152/jn.00206.2007] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 μA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.
Affiliation(s)
- A G Constantin
- Center for Vision Research, York University, Toronto, Ontario, Canada
35
Farshadmanesh F, Klier EM, Chang P, Wang H, Crawford JD. Three-Dimensional Eye–Head Coordination After Injection of Muscimol Into the Interstitial Nucleus of Cajal (INC). J Neurophysiol 2007; 97:2322-38. [PMID: 17229829 DOI: 10.1152/jn.00752.2006] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The interstitial nucleus of Cajal (INC) is thought to be the “neural integrator” for torsional/vertical eye position and head posture. Here, we investigated the coordination of eye and head movements after reversible INC inactivation. Three-dimensional (3-D) eye–head movements were recorded in three head-unrestrained monkeys using search coils. INC sites were identified by unit recording/electrical stimulation and then reversibly inactivated by 0.3 μl of 0.05% muscimol injection into 26 INC sites. After muscimol injection, the eye and head 1) began to drift (an inability to maintain stable fixation) torsionally: clockwise (CW)/counterclockwise (CCW) after left/right INC inactivation respectively. 2) The eye and head tilted torsionally CW/CCW after left/right INC inactivation, respectively. Horizontal gaze/head drifts were inconsistently present and did not result in considerable position offsets. Vertical eye drift was dependent on both vertical eye position and the magnitude of the previous vertical saccade, as in head-fixed condition. This correlation was smaller for gaze and head drift, suggesting that the gaze and head deficits could not be explained by a first-order integrator model. Ocular counterroll (OC) was completely disrupted. The gain of torsional vestibuloocular reflex (VOR) during spontaneous eye and head movements was reduced by 22% in both CW/CCW directions after either left or right INC inactivation. Our results suggest a complex interdependence of eye and head deficits after INC inactivation during fixation, gaze shifts, and VOR. Some of our results resemble the symptoms of spasmodic torticollis (ST).
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Canadian Institutes of Health Research Group for Action and Perception, Departments of Psychology, Biology, and Kinesiology and Health Sciences York University, Toronto, Ontario, Canada
36
Tian J, Zee DS, Walker MF. Rotational and translational optokinetic nystagmus have different kinematics. Vision Res 2007; 47:1003-10. [PMID: 17320142 PMCID: PMC1862819 DOI: 10.1016/j.visres.2006.12.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2006] [Revised: 12/08/2006] [Accepted: 12/11/2006] [Indexed: 10/23/2022]
Abstract
We studied the dependence of ocular torsion on eye position during horizontal optokinetic nystagmus (OKN) elicited by random-dot translational motion (tOKN) and prolonged rotation in the light (rOKN). For slow and quick phases, we fit the eye-velocity axis to vertical eye position to determine the tilt angle slope (TAS). The TAS for tOKN was 0.48 for both slow and quick phases, close to what is found during translational motion of the head. The TAS for rOKN was less for both slow (0.11) and quick phases (0.26), close to what is found during rotational motion of the head. Our findings are consistent with the notion that translational and rotational optic flow are processed differently by the brain and that they produce different 3-D eye movement commands that are comparable to the different commands generated in response to vestibular signals when the head is actually translating or rotating.
Affiliation(s)
- Jing Tian
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
37
Ronsse R, White O, Lefèvre P. Computation of gaze orientation under unrestrained head movements. J Neurosci Methods 2007; 159:158-69. [PMID: 16890993 DOI: 10.1016/j.jneumeth.2006.06.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2006] [Revised: 06/07/2006] [Accepted: 06/22/2006] [Indexed: 10/24/2022]
Abstract
Given the high relevance of visual input to human behavior, it is often important to precisely monitor the spatial orientation of the visual axis. One popular and accurate technique for measuring gaze orientation is based on the dual search coil. This technique does not allow for very large displacements of the subject, however, and is not robust with respect to translations of the head. More recently, less invasive procedures have been developed that record eye movements with camera-based systems attached to a helmet worn by the subject. Computational algorithms have also been developed that can calibrate eye orientation when the head's position is fixed. Given that camera-based systems measure the eye's position in its orbit, however, the reconstruction of gaze orientation is not as straightforward when the head is allowed to move. In this paper, we propose a new algorithm and calibration method to compute gaze orientation under unrestrained head conditions. Our method requires only the accurate measurement of orbital eye position (for instance, with a camera-based system), and the position of three points on the head. The calculations are expressed in terms of linear algebra, so can easily be interpreted and related to the geometry of the human body. Our calibration method has been tested experimentally and validated against independent data, proving that it is robust even under large translations, rotations, and torsions of the head.
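The geometric core of such a reconstruction (a head-fixed frame built from three tracked head points, then the eye-in-head direction rotated into space) can be sketched as follows. This is an illustrative reduction, not Ronsse et al.'s actual algorithm or calibration procedure; the marker placement (nasion plus two ear points) and frame conventions are assumptions:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def head_frame(nasion, left_ear, right_ear):
    """Orthonormal head-fixed frame from three tracked head points.
    Rows are the head's forward, left, and up axes in space coords."""
    mid = tuple((l + r) / 2.0 for l, r in zip(left_ear, right_ear))
    fwd = norm(sub(nasion, mid))                    # forward axis
    up = norm(cross(fwd, sub(left_ear, right_ear)))  # forward x leftward = up
    left = cross(up, fwd)                           # completes right-handed frame
    return [fwd, left, up]

def gaze_in_space(eye_dir_head, frame):
    """Rotate an eye-in-head gaze direction into space coordinates."""
    return tuple(sum(frame[j][i] * eye_dir_head[j] for j in range(3))
                 for i in range(3))
```

Because the frame is rebuilt from marker positions on every sample, the result is insensitive to pure head translation, which is the property the abstract emphasizes over coil-based measurement.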
Affiliation(s)
- Renaud Ronsse
- Department of Electrical Engineering and Computer Science (Montefiore Institute), Université de Liège, Grande Traverse 10 (B28), B-4000 Liège, Belgium.
38
Klier EM, Wang H, Crawford JD. Interstitial Nucleus of Cajal Encodes Three-Dimensional Head Orientations in Fick-Like Coordinates. J Neurophysiol 2007; 97:604-17. [PMID: 17079347 DOI: 10.1152/jn.00379.2006] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Two central, related questions in motor control are 1) how the brain represents movement directions of various effectors like the eyes and head and 2) how it constrains their redundant degrees of freedom. The interstitial nucleus of Cajal (INC) integrates velocity commands from the gaze control system into position signals for three-dimensional eye and head posture. It has been shown that the right INC encodes clockwise (CW)-up and CW-down eye and head components, whereas the left INC encodes counterclockwise (CCW)-up and CCW-down components, similar to the sensitivity directions of the vertical semicircular canals. For the eyes, these canal-like coordinates align with Listing’s plane (a behavioral strategy limiting torsion about the gaze axis). By analogy, we predicted that the INC also encodes head orientation in canal-like coordinates, but instead, aligned with the coordinate axes for the Fick strategy (which constrains head torsion). Unilateral stimulation (50 μA, 300 Hz, 200 ms) evoked CW head rotations from the right INC and CCW rotations from the left INC, with variable vertical components. The observed axes of head rotation were consistent with a canal-like coordinate system. Moreover, as predicted, these axes remained fixed in the head, rotating with initial head orientation like the horizontal and torsional axes of a Fick coordinate system. This suggests that the head is ordinarily constrained to zero torsion in Fick coordinates by equally activating CW/CCW populations of neurons in the right/left INC. These data support a simple mechanism for controlling head orientation through the alignment of brain stem neural coordinates with natural behavioral constraints.
Affiliation(s)
- Eliana M Klier
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, MO 63110, USA.
39
Bremen P, Van der Willigen RF, Van Opstal AJ. Using double-magnetic induction to measure head-unrestrained gaze shifts. I. Theory and validation. J Neurosci Methods 2006; 160:75-84. [PMID: 16997380 DOI: 10.1016/j.jneumeth.2006.08.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2006] [Revised: 08/23/2006] [Accepted: 08/24/2006] [Indexed: 10/24/2022]
Abstract
So far, the double-magnetic induction (DMI) method has been successfully applied to record eye movements from head-restrained humans, monkeys and cats. An advantage of the DMI method, compared to the more widely used scleral search coil technique, is the absence of vulnerable lead wires on the eye. A disadvantage, however, is that the relationship between the eye-in-head orientation and the secondary induction signal is highly non-linear and non-monotonic. This limits the effective measuring range to maximum eye orientations of about ±30 degrees. Here, we analyze and test two extensions required to record the full eye-head orienting range, well exceeding 90 degrees from straight-ahead in all directions: (1) the use of mutually perpendicular magnetic fields, which allows for the disambiguation of the non-monotonic signal from the ring; and (2) the application of an artificial neural network for offline calibration of the signals. The theoretical predictions are tested for horizontal rotations with a gimbal system. Our results show that the method is a promising alternative to the search coil technique.
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Institute of Neuroscience, Radboud University Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands
40
Abstract
Motor systems often require that superfluous degrees of freedom be constrained. For the oculomotor system, a redundancy in the degrees of freedom occurs during visually guided eye movements and is solved by implementing Listing's law and the half-angle rule, kinematic constraints that limit the range of eye positions and angular velocities used by the eyes. These constraints have been attributed either to neurally generated commands or to the physical mechanics of the eye and its surrounding muscles and tissues (i.e., the ocular plant). To directly test whether the ocular plant implements the half-angle rule, critical to the maintenance of Listing's law, we microstimulated the abducens nerve with the eye at different initial vertical eye positions. We report that the electrically evoked eye velocity exhibits the same eye position dependence as seen in visually guided smooth-pursuit eye movements. These results support an important role for the ocular plant in providing a solution to the degrees-of-freedom problem during eye movements.
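The half-angle rule invoked in this abstract can be checked numerically from quaternion kinematics: for a movement that keeps the eye in Listing's plane at eccentricity φ, the angular-velocity axis tilts torsionally out of that plane by φ/2. A sketch under the usual conventions (torsion carried on the quaternion x component), not taken from the paper:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def listing_quat(qy, qz):
    """Eye position with zero torsional (x) quaternion component,
    i.e. an orientation lying in Listing's plane."""
    return (math.sqrt(1.0 - qy*qy - qz*qz), 0.0, qy, qz)

phi = math.radians(30)          # eye 30 deg eccentric (vertical)
a = math.sin(phi / 2)           # quaternion component for that eccentricity
db = 1e-6                       # small horizontal step, still in Listing's plane
q1 = listing_quat(a, 0.0)
q2 = listing_quat(a, db)

# angular velocity: omega = 2 * (dq/dt) * conj(q)
dq = tuple((x2 - x1) / db for x1, x2 in zip(q1, q2))
omega = tuple(2 * x for x in qmul(dq, qconj(q1))[1:])
tilt = math.degrees(math.atan2(omega[0], omega[2]))  # torsional tilt of the axis
# tilt comes out at ~15 deg: half of the 30 deg eccentricity
```

The point of the abstract is that this eye-position dependence of the velocity axis can arise from the ocular plant itself, which is why abducens-nerve stimulation reproduces it.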
Affiliation(s)
- Eliana M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110, USA.
41
Abstract
The neural commands for gaze control include not only signals that drive the eyes and head from one point to the next, but also those that hold the eyes and head steady at the end of each movement. Studies using microstimulation and chemical inactivation techniques, in head-fixed and head-free macaques, were used to investigate the role of the interstitial nucleus of Cajal (INC) in the production of the latter, tonic signals. The right INC was found to control clockwise-up and clockwise-down components of both eye and head orientation, whereas the left INC was found to control the counterclockwise-up and counterclockwise-down components. Temporary inactivation of the INC left the eyes and head unable to hold their final torsional and vertical positions after each gaze shift. Thus, the INC is strongly implicated in the production of the tonic, step-like commands that maintain eye and head orientations between gaze shifts. In addition, these studies also found that the INC represents the torsional and vertical commands for eye and head orientation using different coordinate coding strategies, optimally matched to the different three-dimensional postural constraints observed in the eye and head.
Affiliation(s)
- Eliana M Klier
- Canadian Institute of Health Research Group for Action and Perception, York Centre for Vision Research and Departments of Psychology, Biology and Kinesiology Health Sciences, York University, Toronto, Ontario, Canada
42
Monteon JA, Martinez-Trujillo JC, Wang H, Crawford JD. Cross-coupled adaptation of eye and head position commands in the primate gaze control system. Neuroreport 2005; 16:1189-92. [PMID: 16012346 DOI: 10.1097/00001756-200508010-00011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Primates orient visual gaze using different eye-head coordination strategies. To test how these strategies are formed, we trained a macaque monkey to perform 'head-only' gaze shifts looking through a 10 degrees head-fixed aperture. When we suddenly relocated this aperture 15 degrees downward, the animal could orient initial eye position toward the new aperture, but during large gaze saccades the eye was mistakenly driven back to the original (now occluded) aperture. More importantly, this was accompanied by an opposite head movement, such that gaze (although blocked) pointed correctly. We conclude that the gaze control system acquires new strategies through separate but interdependent eye-head controllers, designed primarily to ensure that gaze is placed in the correct direction.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research Department of Biology, York University, Toronto, Ontario, Canada
43
Constantin AG, Wang H, Crawford JD. Role of Superior Colliculus in Adaptive Eye–Head Coordination During Gaze Shifts. J Neurophysiol 2004; 92:2168-84. [PMID: 15190087 DOI: 10.1152/jn.00103.2004] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The goal of this study was to determine which aspects of adaptive eye–head coordination are implemented upstream or downstream from the motor output layers of the superior colliculus (SC). Two monkeys were trained to perform head-free gaze shifts while looking through a 10° aperture in opaque, head-fixed goggles. This training produced context-dependent alterations in eye–head coordination, including a coordinated pattern of saccade–vestibuloocular reflex (VOR) eye movements that caused eye position to converge toward the aperture, and an increased contribution of head movement to the gaze shift. One would expect the adaptations that were implemented downstream from the SC to be preserved in gaze shifts evoked by SC stimulation. To test this, we analyzed gaze shifts evoked from 19 SC sites in monkey 1 and 38 sites in monkey 2, both with and without goggles. We found no evidence that the goggle paradigm altered the basic gaze position–dependent spatial coding of the evoked movements (i.e., gaze was still coded in an eye-centered frame). However, several aspects of the context-dependent coordination strategy were preserved during stimulation, including the adaptive convergence of final eye position toward the goggles aperture, and the position-dependent patterns of eye and head movement required to achieve this. For example, when initial eye position was offset from the learned aperture location at the time of stimulation, a coordinated saccade–VOR eye movement drove it back to the original aperture, and the head compensated to preserve gaze kinematics. Some adapted amplitude–velocity relationships in eye, gaze, and head movement also may have been preserved. In contrast, context-dependent changes in overall eye and head contribution to gaze amplitude were not preserved during SC stimulation. 
We conclude that 1) the motor output command from the SC to the brain stem can be adapted to produce different position-dependent coordination strategies for different behavioral contexts, particularly for eye-in-head position, but 2) these brain stem coordination mechanisms implement only the default (normal) level of head amplitude contribution to the gaze shift. We propose that a parallel cortical drive, absent during SC stimulation, is required to adjust the overall head contribution for different behavioral contexts.
Affiliation(s)
- Alina G Constantin
- Center for Vision Research, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada.
44
Crawford JD, Tweed DB, Vilis T. Static ocular counterroll is implemented through the 3-D neural integrator. J Neurophysiol 2004; 90:2777-84. [PMID: 14534281 DOI: 10.1152/jn.00231.2003] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Static head roll about the naso-occipital axis is known to produce an opposite ocular counterroll with a gain of approximately 10%, but the purpose and neural mechanism of this response remain obscure. In theory counterroll could be maintained either by direct tonic vestibular inputs to motoneurons, or by a neurally integrated pulse, as observed in the saccade generator and vestibulo-ocular reflex. When simulated together with ocular drift related to torsional integrator failure, the direct tonic input model predicted that the pattern of drift would shift torsionally as in ordinary counterroll, but the integrated pulse model predicted that the equilibrium position of torsional drift would be unaffected by head roll. This was tested experimentally by measuring ocular counterroll in 2 monkeys after injection of muscimol into the mesencephalic interstitial nucleus of Cajal. Whereas 90 degrees head roll produced a mean ocular counterroll of 8.5 degrees (+/-0.7 degrees SE) in control experiments, the torsional equilibrium position observed during integrator failure failed to counterroll, showing a torsional shift of only 0.3 degrees (+/-0.6 degrees SE). This result contradicted the direct tonic input model, but was consistent with models that implement counterroll by a neurally integrated pulse.
Affiliation(s)
- J Douglas Crawford
- Canadian Institutes of Health Research Group for Action and Perception, York University, Toronto, Ontario M3J 1P3, Canada.
45
Martinez-Trujillo JC, Klier EM, Wang H, Crawford JD. Contribution of head movement to gaze command coding in monkey frontal cortex and superior colliculus. J Neurophysiol 2003; 90:2770-6. [PMID: 14534280] [DOI: 10.1152/jn.00330.2003] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.4]
Abstract
Most of what we know about the neural control of gaze comes from experiments in head-fixed animals, but several "head-free" studies have suggested that fixing the head dramatically alters the apparent gaze command. We directly investigated this issue by quantitatively comparing head-fixed and head-free gaze trajectories evoked by electrically stimulating 52 sites in the superior colliculus (SC) of two monkeys and 23 sites in the supplementary eye fields (SEF) of two other monkeys. We found that head movements made a significant contribution to gaze shifts evoked from both neural structures. In the majority of the stimulated sites, average gaze amplitude was significantly larger and individual gaze trajectories were significantly less convergent in space with the head free to move. Our results are consistent with the hypothesis that head-fixed stimulation only reveals the oculomotor component of the gaze shift, not the true, planned goal of the movement. One implication of this finding is that when comparing stimulation data against popular gaze control models, freeing the head shifts the apparent coding of gaze away from a "spatial code" toward a simpler visual model in the SC and toward an eye-centered or fixed-vector model representation in the SEF.
46
Marotta JJ, Medendorp WP, Crawford JD. Kinematic rules for upper and lower arm contributions to grasp orientation. J Neurophysiol 2003; 90:3816-27. [PMID: 12930815] [DOI: 10.1152/jn.00418.2003] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.5]
Abstract
The purpose of the current study was to investigate the contribution of upper and lower arm torsion to grasp orientation during a reaching and grasping movement. In particular, we examined how the visuomotor system deals with the conflicting demands of coordinating upper and lower arm torsion and maintaining Donders' Law of the upper arm (a behavioral restriction of the axes of arm rotation to a two-dimensional "surface"). In experiment 1, subjects reached out and grasped a target block presented in one of 19 orientations (5 degree clockwise increments from horizontal to vertical) at one position in a vertical presentation board. In experiment 2, target blocks were presented in one of three orientations (horizontal, three-quarter, and vertical) at nine different positions in the presentation board. If reach and grasp commands control the proximal and distal arm separately, then one would expect only the lower arm to contribute to grasp orientation, and Donders' Law to hold for the upper arm independent of grasp orientation. Instead, as the required grasp orientation increased from horizontal to vertical, there was a significant clockwise torsional rotation in both the upper arm, which accounted for 9% of the final vertical grasp orientation, and the lower arm, which accounted for 42%. A linear relationship existed between the torsional rotations of the upper and lower arm, indicating that the components of the arm rotate in coordination with one another. The location-dependent aspects of upper and lower arm torsion remained invariant, however, yielding consistently shaped Donders' "surfaces" (with different torsional offsets) for different grasp orientations. These observations suggest that the entire arm-hand system contributes to grasp orientation, and therefore that the reach/grasp distinction is not directly reflected in proximal-distal kinematics, but is better reflected in the distinction between these coordinated orienting rules and the location-dependent kinematic rules for the upper arm that result in Donders' Law for any one grasp orientation.
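The reported torsional contributions can be turned into a small worked example. A sketch under stated assumptions: the 9% (upper arm) and 42% (lower arm) fractions come from the abstract, while attributing the remainder to more distal joints (wrist/hand) is our assumption, since the paper does not partition it:

```python
def torsion_split(grasp_deg: float,
                  upper_frac: float = 0.09,
                  lower_frac: float = 0.42) -> dict:
    """Decompose a required grasp orientation into torsional contributions.

    upper_frac and lower_frac are the fractions reported in the abstract
    (~9% upper arm, ~42% lower arm); the remainder is assumed here to come
    from more distal joints, which the study did not break down further.
    """
    upper = upper_frac * grasp_deg
    lower = lower_frac * grasp_deg
    return {"upper": upper,
            "lower": lower,
            "distal_remainder": grasp_deg - upper - lower}

# A fully vertical (90 deg) grasp:
print(torsion_split(90.0))  # upper ~8.1 deg, lower ~37.8 deg, remainder ~44.1 deg
```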
Affiliation(s)
- J J Marotta
- Canadian Institutes of Health Research Group for Action and Perception, Department of Psychology, Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada.
47
Hess BJM, Angelaki DE. Gravity modulates Listing's plane orientation during both pursuit and saccades. J Neurophysiol 2003; 90:1340-5. [PMID: 12904513] [DOI: 10.1152/jn.00167.2003] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1]
Abstract
Previous studies have shown that the spatial organization of all eye orientations during visually guided saccadic eye movements (Listing's plane) varies systematically as a function of static and dynamic head orientation in space. Here we tested whether a similar organization also applies to the spatial orientation of eye positions during smooth pursuit eye movements. Specifically, we characterized the three-dimensional distribution of eye positions during horizontal and vertical pursuit (0.1 Hz, +/-15 degrees and 0.5 Hz, +/-8 degrees) at different eccentricities and elevations while rhesus monkeys sat upright or were statically tilted in different roll and pitch positions. We found that the spatial organization of eye positions during smooth pursuit depends on static head orientation in space, much as it does during visually guided saccades and fixations. In support of recent modeling studies, these results are consistent with a role of gravity in defining the parameters of Listing's law.
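The two pursuit conditions imply quite different target speeds. For sinusoidal target motion x(t) = A sin(2πft), the peak velocity is 2πfA; this arithmetic is ours, applied to the stimulus parameters given in the abstract:

```python
import math

def peak_velocity_deg_s(freq_hz: float, amplitude_deg: float) -> float:
    """Peak velocity of sinusoidal target motion x(t) = A*sin(2*pi*f*t):
    v(t) = 2*pi*f*A*cos(2*pi*f*t), so |v|_max = 2*pi*f*A."""
    return 2.0 * math.pi * freq_hz * amplitude_deg

# The two conditions used by Hess & Angelaki:
print(round(peak_velocity_deg_s(0.1, 15.0), 1))  # slow condition, ~9.4 deg/s
print(round(peak_velocity_deg_s(0.5, 8.0), 1))   # fast condition, ~25.1 deg/s
```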
Affiliation(s)
- Bernhard J M Hess
- Department of Neurology, University Hospital Zürich, CH-8091 Zürich, Switzerland.
48
Martinez-Trujillo JC, Wang H, Crawford JD. Electrical stimulation of the supplementary eye fields in the head-free macaque evokes kinematically normal gaze shifts. J Neurophysiol 2003; 89:2961-74. [PMID: 12611991] [DOI: 10.1152/jn.01065.2002] [Citation(s) in RCA: 48] [Impact Index Per Article: 2.3]
Abstract
The supplementary eye fields (SEFs), located on the dorsomedial surface of the frontal cortex, are involved in high-level aspects of saccade generation. Some reports suggest that the same area could also be involved in the generation of motor commands for the head. If so, it is important to establish whether this structure encodes eye and head commands separately or gaze commands that give rise to coordinated eye-head movements. Here we systematically stimulated (50 microA, 300 Hz, 200 ms) the SEF of two head-free (head unrestrained) macaques while recording three-dimensional eye and head rotations. A total of 55 sites were found to consistently elicit saccade-like gaze movements, always in the contralateral direction with variable vertical components, and ranging in average amplitude from 5 to 60 degrees. These movements were always a combination of eye-in-head saccades and head-in-space movements. We then compared these movements with natural gaze shifts. The kinematics of the elicited movements (i.e., their temporal structure, their velocity-amplitude relationships, and the relative contributions of the eye and the head as a function of movement amplitude) were indistinguishable from those of natural gaze shifts. Additionally, they obeyed the same three-dimensional constraints as natural gaze shifts (i.e., eye-in-head movements obeyed Listing's law, whereas head- and eye-in-space movements obeyed Donders' law). In summary, gaze movements evoked by stimulating the SEF were indistinguishable from natural coordinated eye-head gaze shifts. Based on this, we conclude that the SEF explicitly encodes gaze and that the kinematic aspects of eye-head coordination are implicitly specified by mechanisms downstream from the SEF.
Affiliation(s)
- Julio C Martinez-Trujillo
- Centre for Vision Research and Canadian Institutes of Health Research Group for Action and Perception, Department of Psychology, York University, Toronto, Ontario M3J 1P3, Canada.
49
Klier EM, Martinez-Trujillo JC, Medendorp WP, Smith MA, Crawford JD. Neural control of 3-D gaze shifts in the primate. Prog Brain Res 2003; 142:109-24. [PMID: 12693257] [DOI: 10.1016/s0079-6123(03)42009-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7]
Abstract
The neural mechanisms that specify target locations for gaze shifts and then convert these into desired patterns of coordinated eye and head movements are complex. Much of this complexity is only revealed when one takes a realistic three-dimensional (3-D) view of these processes, where fundamental computational problems such as kinematic redundancy, reference-frame transformations, and non-commutativity emerge. Here we review the underlying mechanisms and solutions for these problems, starting with a consideration of the kinematics of 3-D gaze shifts in human and non-human primates. We then consider the neural mechanisms, including cortical representation of gaze targets, the nature of the gaze motor command used by the superior colliculus, and how these gaze commands are decomposed into brainstem motor commands for the eyes and head. A general conclusion is that fairly simple coding mechanisms may be used to represent gaze at the cortical and collicular level, but this then necessitates complexity for the spatial updating of these representations and in the brainstem sensorimotor transformations that convert these signals into eye and head movements.
Affiliation(s)
- Eliana M Klier
- CIHR Group for Action and Perception, Centre for Vision Research, Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada.
50
Klier EM, Wang H, Crawford JD. Three-dimensional eye-head coordination is implemented downstream from the superior colliculus. J Neurophysiol 2003; 89:2839-53. [PMID: 12740415] [DOI: 10.1152/jn.00763.2002] [Citation(s) in RCA: 56] [Impact Index Per Article: 2.7]
Abstract
How the brain transforms two-dimensional visual signals into multi-dimensional motor commands, and subsequently how it constrains the redundant degrees of freedom, are fundamental problems in sensorimotor control. During fixations between gaze shifts, the redundant torsional degree of freedom is determined by various neural constraints. For example, the eye- and head-in-space are constrained by Donders' law, whereas the eye-in-head obeys Listing's law. However, where and how the brain implements these laws is not yet known. In this study, we show that eye and head movements, elicited by unilateral microstimulations of the superior colliculus (SC) in head-free monkeys, obey the same Donders' strategies observed in normal behavior (i.e., Listing's law for final eye positions and the Fick strategy for the head). Moreover, these evoked movements showed a pattern of three-dimensional eye-head coordination, consistent with normal behavior, where the eye is driven purposely out of Listing's plane during the saccade portion of the gaze shift in opposition to a subsequent torsional vestibuloocular reflex slow phase, such that the final net torsion at the end of each head-free gaze shift is zero. The required amount of saccade-related torsion was highly variable, depending on the initial position of the eye and head prior to a gaze shift and the size of the gaze shift, pointing to a neural basis of torsional control. Because these variable, context-appropriate torsional saccades were correctly elicited by fixed SC commands during head-free stimulations, this shows that the SC only encodes the horizontal and vertical components of gaze, leaving the complexity of torsional organization to downstream control systems. Thus we conclude that Listing's and Donders' laws of the eyes and head, and their three-dimensional coordination mechanisms, must be implemented after the SC.
Affiliation(s)
- Eliana M Klier
- Canadian Institutes of Health Research Group for Action and Perception, Toronto, Ontario M3J 1P3, Canada.