1
Cisek P, Green AM. Toward a neuroscience of natural behavior. Curr Opin Neurobiol 2024; 86:102859. PMID: 38583263. DOI: 10.1016/j.conb.2024.102859.
Abstract
One of the most exciting new developments in systems neuroscience is the progress being made toward neurophysiological experiments that move beyond simplified laboratory settings and address the richness of natural behavior. This is enabled by technological advances such as wireless recording in freely moving animals, automated quantification of behavior, and new methods for analyzing large data sets. Beyond new empirical methods and data, however, there is also a need for new theories and concepts to interpret that data. Such theories need to address the particular challenges of natural behavior, which often differ significantly from the scenarios studied in traditional laboratory settings. Here, we discuss some strategies for developing such novel theories and concepts and some example hypotheses being proposed.
Affiliation(s)
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada.
- Andrea M Green
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
2
Ugolini G, Graf W. Pathways from the superior colliculus and the nucleus of the optic tract to the posterior parietal cortex in macaque monkeys: Functional frameworks for representation updating and online movement guidance. Eur J Neurosci 2024; 59:2792-2825. PMID: 38544445. DOI: 10.1111/ejn.16314.
Abstract
The posterior parietal cortex (PPC) integrates multisensory and motor-related information for generating and updating body representations and movement plans. We used retrograde transneuronal transfer of rabies virus combined with a conventional tracer in macaque monkeys to identify direct and disynaptic pathways to the arm-related rostral medial intraparietal area (MIP), the ventral lateral intraparietal area (LIPv), belonging to the parietal eye field, and the pursuit-related lateral subdivision of the medial superior temporal area (MSTl). We found that these areas receive major disynaptic pathways via the thalamus from the nucleus of the optic tract (NOT) and the superior colliculus (SC), mainly ipsilaterally. NOT pathways, targeting MSTl most prominently, serve to process the sensory consequences of slow eye movements for which the NOT is the key sensorimotor interface. They potentially contribute to the directional asymmetry of the pursuit and optokinetic systems. MSTl and LIPv receive feedforward inputs from SC visual layers, which are potential correlates for fast detection of motion, perceptual saccadic suppression and visual spatial attention. MSTl is the target of efference copy pathways from saccade- and head-related compartments of SC motor layers and head-related reticulospinal neurons. They are potential sources of extraretinal signals related to eye and head movement in MSTl visual-tracking neurons. LIPv and rostral MIP receive efference copy pathways from all SC motor layers, providing online estimates of eye, head and arm movements. Our findings have important implications for understanding the role of the PPC in representation updating, internal models for online movement guidance, eye-hand coordination and optic ataxia.
Affiliation(s)
- Gabriella Ugolini
- Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR9197 CNRS - Université Paris-Saclay, Campus CEA Saclay, Saclay, France
- Werner Graf
- Department of Physiology and Biophysics, Howard University, Washington, DC, USA
3
van Opstal AJ. Neural encoding of instantaneous kinematics of eye-head gaze shifts in monkey superior colliculus. Commun Biol 2023; 6:927. PMID: 37689726. PMCID: PMC10492853. DOI: 10.1038/s42003-023-05305-z.
Abstract
The midbrain superior colliculus is a crucial sensorimotor stage for programming and generating saccadic eye-head gaze shifts. Although it is well established that superior colliculus cells encode a neural command that specifies the amplitude and direction of the upcoming gaze-shift vector, there is controversy about the role of the firing-rate dynamics of these neurons during saccades. In our earlier work, we proposed a simple quantitative model that explains how the recruited superior colliculus population may specify the detailed kinematics (trajectories and velocity profiles) of head-restrained saccadic eye movements. Here we show that the same principles may apply to a wide range of saccadic eye-head gaze shifts with strongly varying kinematics, despite the substantial nonlinearities and redundancy involved in programming and executing rapid goal-directed eye-head gaze shifts to peripheral targets. Our findings could provide additional evidence for an important role of the superior colliculus in the optimal control of saccades.
Affiliation(s)
- A John van Opstal
- Section Neurophysics, Donders Centre for Neuroscience, Radboud University, Nijmegen, The Netherlands.
4
Ventral premotor cortex encodes task relevant features during eye and head movements. Sci Rep 2022; 12:22093. PMID: 36543870. PMCID: PMC9772313. DOI: 10.1038/s41598-022-26479-2.
Abstract
Visual exploration of the environment is achieved through gaze shifts, or coordinated movements of the eyes and the head. The kinematics and contributions of each component can be decoupled to fit the context of the required behavior, such as redirecting the visual axis without moving the head or rotating the head without changing the line of sight. A neural controller of these effectors must therefore carry a code relating to multiple muscle groups, and it must also differentiate its code based on context. In this study we tested whether the ventral premotor cortex (PMv) in the monkey exhibits a population code relating to various features of eye and head movements. We constructed three different behavioral tasks, or contexts, each with four variables, to explore whether PMv modulates its activity in accordance with these factors. We found that the task-related population code in PMv differentiates between all task-related features, and we conclude that PMv carries information about task-relevant features during eye and head movements. Furthermore, this code represents both lower-level (effector and movement direction) and higher-level (context) information.
5
Choi V, Priebe NJ. Interocular velocity cues elicit vergence eye movements in mice. J Neurophysiol 2020; 124:623-633. PMID: 32727261. DOI: 10.1152/jn.00697.2019.
Abstract
We stabilize the dynamic visual world on our retina by moving our eyes in response to motion signals. Coordinated movements between the two eyes are characterized as version when both eyes move in the same direction and vergence when the two eyes move in opposite directions. Vergence eye movements are necessary to track objects in three dimensions. In primates they can be elicited by interocular differences in either spatial signals (disparity) or velocity, requiring the integration of left and right eye inputs. Whether mice are capable of similar behaviors is not known. To address this issue, we measured vergence eye movements in mice using a stereoscopic stimulus known to elicit vergence eye movements in primates. We found that mice also exhibit vergence eye movements, although at a low gain, and that the primary driver of these vergence eye movements is interocular motion. Spatial disparity cues alone are ineffective. We also found that the vergence eye movements we observed in mice were robust to silencing visual cortex and to manipulations that disrupt the normal development of binocularity in visual cortex. A sublinear combination of motor commands driven by monocular signals is sufficient to account for our results. NEW & NOTEWORTHY The visual system integrates signals from the left and right eye to generate a representation of the world in depth. The binocular integration of signals may be observed from the coordinated vergence eye movements elicited by object motion in depth. We explored the circuits and signals responsible for these vergence eye movements in rodents and found that these vergence eye movements are generated by a comparison of motion signals rather than spatial visual signals.
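The sublinear-combination account summarized above can be illustrated with a toy model (a hypothetical sketch for this listing, not the authors' implementation): each eye contributes a monocular motion-driven drive, the drives are pooled, and a saturating nonlinearity makes the binocular response smaller than the sum of the two monocular responses.

```python
def saturate(drive, half_sat=1.0):
    """Saturating (sublinear) response: grows with drive but compresses large inputs."""
    return drive / (half_sat + drive)

def vergence_command(left_motion, right_motion):
    """Toy model: the interocular velocity difference (motion-in-depth cue)
    is pooled across eyes, then compressed sublinearly."""
    pooled = abs(left_motion - right_motion)
    return saturate(pooled)

# Monocular responses (one eye stimulated at a time)
mono = vergence_command(1.0, 0.0) + vergence_command(0.0, 1.0)
# Binocular response (both eyes stimulated together, opposite directions)
bino = vergence_command(1.0, -1.0)
# Sublinearity: the binocular response falls short of the monocular sum
assert bino < mono
```

Any saturating function would do here; the only property the abstract's argument needs is that the combined output is less than the sum of its monocular parts.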
Affiliation(s)
- Veronica Choi
- Center for Perceptual Systems, The University of Texas, Austin, Texas; Center for Learning and Memory, The University of Texas, Austin, Texas; Department of Neuroscience, The University of Texas, Austin, Texas
- Nicholas J Priebe
- Center for Learning and Memory, The University of Texas, Austin, Texas; Department of Neuroscience, The University of Texas, Austin, Texas
6
Lev-Ari T, Zahar Y, Agarwal A, Gutfreund Y. Behavioral and neuronal study of inhibition of return in barn owls. Sci Rep 2020; 10:7267. PMID: 32350332. PMCID: PMC7190666. DOI: 10.1038/s41598-020-64197-9.
Abstract
Inhibition of return (IOR) is the reduction of detection speed and/or detection accuracy of a target in a recently attended location. This phenomenon, which has been discovered and studied thoroughly in humans, is believed to reflect a brain mechanism for controlling the allocation of spatial attention in a manner that enhances efficient search. Findings showing that IOR is robust, apparent at a very early age and seemingly dependent on midbrain activity suggest that IOR is a universal attentional mechanism in vertebrates. However, studies in non-mammalian species are scarce. To explore this hypothesis comparatively, we tested for IOR in barn owls (Tyto alba) using the classical Posner cueing paradigm. Two barn owls were trained to initiate a trial by fixating on the center of a computer screen and then turning their gaze to the location of a target. A short, non-informative cue appeared before the target, either at a location predicting the target (valid) or a location not predicting the target (invalid). In one barn owl, the response times (RT) to the valid targets compared to the invalid targets shifted from facilitation (lower RTs) to inhibition (higher RTs) as the time lag between the cue and the target increased. The second owl mostly failed to maintain fixation and responded to the cue before the target onset. However, when including in the analysis only the trials in which the owl maintained fixation, an inhibition in the valid trials could be detected. To search for the neural correlates of IOR, we recorded multiunit responses in the optic tectum (OT) of four head-fixed owls passively viewing a cueing paradigm as in the behavioral experiments. At short cue-to-target lags (<100 ms), neural responses to the target in the receptive field (RF) were usually enhanced if the cue appeared earlier inside the RF (valid) and were suppressed if the cue appeared earlier outside the RF (invalid). This was reversed at longer lags: neural responses were suppressed in the valid conditions and were unaffected in the invalid conditions. The findings support the notion that IOR is a basic mechanism in the evolution of vertebrate behavior and suggest that the effect arises from the interaction between lateral and forward inhibition in the tectal circuitry.
Affiliation(s)
- Tidhar Lev-Ari
- Department of Neuroscience, Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, Technion - Israel Institute of Technology, Haifa, 31096, Israel
- Yael Zahar
- Department of Neuroscience, Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, Technion - Israel Institute of Technology, Haifa, 31096, Israel
- Arpit Agarwal
- Department of Neuroscience, Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, Technion - Israel Institute of Technology, Haifa, 31096, Israel
- Yoram Gutfreund
- Department of Neuroscience, Ruth and Bruce Rappaport Faculty of Medicine and Research Institute, Technion - Israel Institute of Technology, Haifa, 31096, Israel
7
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. PMID: 31792117. PMCID: PMC6944480. DOI: 10.1523/eneuro.0359-18.2019.
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
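The abstract's "quantitative measure of the intermediate codes between T and G" can be sketched as a fit along a Target-to-Gaze continuum: each trial's coded location is modeled as T + alpha*(G - T), and the best-fitting alpha places a neuron between a pure target code (alpha near 0) and a pure gaze code (alpha near 1). A minimal illustration with hypothetical trial data (this is not the study's actual fitting procedure):

```python
def best_alpha(targets, gazes, preferred, steps=101):
    """Grid-search the T-G continuum: model each trial's coded location as
    T + alpha*(G - T); return the alpha minimizing squared error."""
    best = (float("inf"), 0.0)
    for i in range(steps):
        alpha = i / (steps - 1)
        err = sum((t + alpha * (g - t) - p) ** 2
                  for t, g, p in zip(targets, gazes, preferred))
        best = min(best, (err, alpha))
    return best[1]

# Hypothetical trials: gaze endpoints scatter around the target (variable error),
# and the cell's coded location sits 70% of the way from T to G.
T = [10.0, 12.0, 8.0, 11.0]
G = [11.5, 10.8, 9.0, 12.2]
coded = [t + 0.7 * (g - t) for t, g in zip(T, G)]
assert abs(best_alpha(T, G, coded) - 0.7) < 0.01
```

The variable errors in gaze endpoints are what make the fit identifiable: without trial-to-trial scatter between T and G, every alpha would predict the same locations.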
8
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. PMID: 31533015. DOI: 10.1152/jn.00072.2019.
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination (eye-head-hand coordination) has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates, but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
9
Churan J, Braun DI, Gegenfurtner KR, Bremmer F. Comparison of the precision of smooth pursuit in humans and head unrestrained monkeys. J Eye Mov Res 2018; 11. PMID: 33828708. PMCID: PMC7904314. DOI: 10.16910/jemr.11.4.6.
Abstract
Direct comparison of results between humans and monkeys is often complicated by differences in experimental conditions. We replicated in head-unrestrained macaques the experiments of a recent study comparing human directional precision during smooth pursuit eye movements (SPEM) and saccades to moving targets (Braun & Gegenfurtner, 2016). Directional precision of human SPEM follows an exponential decay function reaching optimal values of 1.5°-3° within 300 ms after target motion onset, whereas the precision of initial saccades to moving targets is slightly better. As in humans, we found general agreement in the development of directional precision of SPEM over time and in the differences between directional precision of initial saccades and SPEM initiation. However, monkeys showed overall lower precision in SPEM compared to humans. This was most likely due to differences in experimental conditions, such as head stabilization: the head was supported by a chin and head rest in human subjects but unrestrained in monkeys.
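The exponential-decay description of directional precision above has the form sigma(t) = sigma_inf + (sigma_0 - sigma_inf) * exp(-t/tau), with scatter approaching its asymptote (roughly 1.5°-3° in humans) within about 300 ms. A numerical sketch with illustrative parameters (these are assumed values, not the study's fits):

```python
import math

def directional_sd(t_ms, sd0=20.0, sd_inf=2.0, tau_ms=80.0):
    """Exponential decay of directional scatter (deg) after target motion onset.
    sd0, sd_inf, and tau_ms are illustrative, not fitted values from the study."""
    return sd_inf + (sd0 - sd_inf) * math.exp(-t_ms / tau_ms)

# Scatter is large at pursuit onset and near-asymptotic by ~300 ms
assert directional_sd(0) == 20.0
assert directional_sd(300) < 2.5
```

With these parameters the curve has covered more than 97% of the distance to its asymptote by 300 ms, matching the qualitative time course described in the abstract.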
Affiliation(s)
- Jan Churan
- University of Marburg & CMBB, Marburg, Germany
10
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. The Influence of a Memory Delay on Spatial Coding in the Superior Colliculus: Is Visual Always Visual and Motor Always Motor? Front Neural Circuits 2018; 12:74. PMID: 30405361. PMCID: PMC6204359. DOI: 10.3389/fncir.2018.00074.
Abstract
The memory-delay saccade task is often used to separate visual and motor responses in oculomotor structures such as the superior colliculus (SC), with the assumption that these same responses would sum with a short delay during immediate "reactive" saccades to visual stimuli. However, it is also possible that additional signals (suppression, delay) alter visual and/or motor response in the memory delay task. Here, we compared the spatiotemporal properties of visual and motor responses of the same SC neurons recorded during both the reactive and memory-delay tasks in two head-unrestrained monkeys. Comparing tasks, visual (aligned with target onset) and motor (aligned on saccade onset) responses were highly correlated across neurons, but the peak response of visual neurons and peak motor responses (of both visuomotor (VM) and motor neurons) were significantly higher in the reactive task. Receptive field organization was generally similar in both tasks. Spatial coding (along a Target-Gaze (TG) continuum) was also similar, with the exception that pure motor cells showed a stronger tendency to code future gaze location in the memory delay task, suggesting a more complete transformation. These results suggest that the introduction of a trained memory delay alters both the vigor and spatial coding of SC visual and motor responses, likely due to a combination of saccade suppression signals and greater signal noise accumulation during the delay in the memory delay task.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
11
Tao G, Khan AZ, Blohm G. Corrective response times in a coordinated eye-head-arm countermanding task. J Neurophysiol 2018; 119:2036-2051. PMID: 29465326. DOI: 10.1152/jn.00460.2017.
Abstract
Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
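The race architecture the authors extend can be sketched with the standard LATER assumption: a decision signal rises linearly to a fixed threshold at a rate drawn from a Gaussian, so its finish time is threshold/rate; on stop trials a movement escapes inhibition only when the GO process finishes before the STOP process, which starts at the stop-signal delay (SSD). A toy simulation under assumed parameters (not the authors' fitted model, and without their expedited corrective stage):

```python
import random

def finish_time(mu, sigma, threshold=1.0, rng=random):
    """LATER-style unit: linear rise to threshold at a Gaussian-distributed rate."""
    rate = rng.gauss(mu, sigma)
    while rate <= 0:                      # redraw nonpositive rates
        rate = rng.gauss(mu, sigma)
    return threshold / rate

def p_respond(ssd_ms, n=5000):
    """Probability that GO escapes inhibition at a given stop-signal delay."""
    rng = random.Random(0)                # seeded for reproducibility
    escapes = 0
    for _ in range(n):
        go = 1000.0 * finish_time(5.0, 1.0, rng=rng)              # ~200 ms mean RT
        stop = ssd_ms + 1000.0 * finish_time(10.0, 2.0, rng=rng)  # faster stop unit
        escapes += go < stop
    return escapes / n

# The classic countermanding signature: longer delays make stopping harder
assert p_respond(50) < p_respond(150)
```

The SSRT in this framework is the (unobservable) finish time of the STOP unit, estimated from how P(respond) grows with SSD; the authors' best-ranked model adds a second GO unit for the corrective response that launches only after the inhibition process completes.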
Affiliation(s)
- Gordon Tao
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet); Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Aarlenne Z Khan
- Canadian Action and Perception Network (CAPnet); School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet); Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
12
Wilson JJ, Alexandre N, Trentin C, Tripodi M. Three-Dimensional Representation of Motor Space in the Mouse Superior Colliculus. Curr Biol 2018; 28:1744-1755.e12. PMID: 29779875. PMCID: PMC5988568. DOI: 10.1016/j.cub.2018.04.021.
Abstract
From the act of exploring an environment to that of grasping a cup of tea, animals must put their motor acts in register with the surrounding space. In the motor domain, this register is likely defined by a set of three-dimensional (3D) displacement vectors, whose recruitment allows motion in the direction of a target. One such spatially targeted action is the head reorientation behavior of mice, yet the neural mechanisms underlying these 3D behaviors remain unknown. Here, by developing a head-mounted inertial sensor for studying 3D head rotations and combining it with electrophysiological recordings, we show that neurons in the mouse superior colliculus are either individually or conjunctively tuned to the three Eulerian components of head rotation. The average displacement vectors associated with motor-tuned colliculus neurons remain stable over time and are unaffected by changes in firing rate or the duration of spike trains. Finally, we show that the motor tuning of collicular neurons is largely independent of visual or landmark cues. By describing the 3D nature of motor tuning in the superior colliculus, we contribute to a long-standing debate on the dimensionality of collicular motor decoding; furthermore, by providing an experimental paradigm for studying the metric of motor tuning in mice, this study also paves the way to the genetic dissection of the circuits underlying spatially targeted motion.
Highlights:
- Development of an inertial sensor system for monitoring 3D head movements in real time
- Neurons in the superior colliculus code for the full dimensionality of head rotations
- Firing rate correlates with velocity, but not head displacement angle
- The spatial tuning of collicular units is largely independent of visual or landmark cues
13
A rightward saccade to an unexpected stimulus as a marker for lateralised visuospatial attention. Sci Rep 2018; 8:7562. PMID: 29765090. PMCID: PMC5954050. DOI: 10.1038/s41598-018-25890-y.
Abstract
The human brain is lateralised to the right for visuospatial attention, particularly when reorienting attention to unexpected stimuli. However, the developmental characteristics of lateralisation remain unclear. To address this question, we devised a saccade task applicable for both adults and children. To assess the utility of this system, we investigated the correlation between line bisection test performance and the saccade task for 54 healthy adult volunteers. Participants followed a visual target that jumped 10 times, alternating between two fixed positions across the midline with a constant pace. In both the rightward and leftward directions, saccadic reaction time (RT) to the target jump decreased and reached a plateau from the first to the tenth jumps. Furthermore, we obtained the time required for reorienting in the contralateral hemisphere using the corrected value of the first RT. We found that longer corrected RTs in the rightward saccade were associated with greater deviation to the left in the line bisection task. This correlation was not observed for leftward saccades. Thus, corrected RTs in rightward saccades reflected the strength of individual hemispheric lateralisation. In conclusion, the rightward saccade task provides a suitable marker for lateralised visuospatial attention, and for investigating the development of lateralisation.
14
Bourrelly C, Quinet J, Goffart L. The caudal fastigial nucleus and the steering of saccades toward a moving visual target. J Neurophysiol 2018; 120:421-438. PMID: 29641309. DOI: 10.1152/jn.00141.2018.
Abstract
The caudal fastigial nuclei (cFN) are the output nuclei by which the medio-posterior cerebellum influences the production of visual saccades. We investigated their contribution to the generation of interceptive saccades toward a centrifugally moving target in two head-restrained monkeys by analyzing the consequences of a unilateral inactivation (10 injection sessions). We describe here the effects on saccades made toward a centrifugal target that moved along the horizontal meridian at a constant (10, 20, or 40°/s), increasing (from 0 to 40°/s over 600 ms), or decreasing (from 40 to 0°/s over 600 ms) speed. After muscimol injection, the monkeys were unable to foveate the current location of the moving target. Interceptive saccades were hypometric in horizontal amplitude during contralesional target motion and hypermetric during ipsilesional motion. For both contralesional and ipsilesional saccades, the magnitude of dysmetria increased with target speed. However, the use of accelerating and decelerating targets revealed that the dependence of dysmetria on target velocity was due not to the current velocity but to the required saccade amplitude. We discuss these results in the framework of two hypotheses, the so-called "dual drive" and "bilateral" hypotheses. NEW & NOTEWORTHY Unilateral inactivation of the caudal fastigial nucleus impairs the accuracy of saccades toward a moving target. Like saccades toward a static target, interceptive saccades are hypometric when directed toward the contralesional side and hypermetric when ipsilesional. The dysmetria depends on target velocity, but the use of accelerating or decelerating targets reveals that velocity is not the crucial parameter. We extend the bilateral fastigial control of saccades and fixation to the production of interceptive saccades.
Affiliation(s)
- Clara Bourrelly
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France; Laboratoire Psychologie de la Perception, UMR 8242, Centre National de la Recherche Scientifique, Université Paris Descartes, Paris, France
| | - Julie Quinet
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
| | - Laurent Goffart
- Institut de Neurosciences de la Timone, UMR 7289, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
| |
|
15
|
Caruso VC, Pages DS, Sommer MA, Groh JM. Beyond the labeled line: variation in visual reference frames from intraparietal cortex to frontal eye fields and the superior colliculus. J Neurophysiol 2018; 119:1411-1421. [PMID: 29357464 PMCID: PMC5966730 DOI: 10.1152/jn.00584.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 12/16/2017] [Accepted: 12/18/2017] [Indexed: 11/22/2022] Open
Abstract
We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was assessed in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out. NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.
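The reference-frame question above turns on a simple relation: a head-centered target location is the eye-centered (retinal) location shifted by the current eye-in-head position. A minimal numerical sketch (the function name and degree units are illustrative assumptions, not from the study):

```python
import numpy as np

def to_head_centered(target_eye, eye_position):
    """Convert an eye-centered target location to head-centered
    coordinates by adding the current eye-in-head position (degrees).
    An eye-centered cell keeps the same tuning in target_eye across
    fixations; a head-centered cell keeps it in target_eye + eye_position."""
    return np.asarray(target_eye) + np.asarray(eye_position)

# The same retinal location (10 deg right of the fovea) maps to different
# head-centered locations depending on where the eyes point:
print(to_head_centered(10.0, 0.0))    # fixation straight ahead -> 10.0
print(to_head_centered(10.0, -20.0))  # fixation 20 deg left -> -10.0
```

A "hybrid" receptive field, as described for LIP/MIP, is one whose shifts across fixations are consistent with neither of these two predictions.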
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| |
|
16
|
Soares SC, Maior RS, Isbell LA, Tomaz C, Nishijo H. Fast Detector/First Responder: Interactions between the Superior Colliculus-Pulvinar Pathway and Stimuli Relevant to Primates. Front Neurosci 2017; 11:67. [PMID: 28261046 PMCID: PMC5314318 DOI: 10.3389/fnins.2017.00067] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Accepted: 01/30/2017] [Indexed: 12/17/2022] Open
Abstract
Primates are distinguished from other mammals by their heavy reliance on the visual sense, which occurred as a result of natural selection continually favoring those individuals whose visual systems were more responsive to challenges in the natural world. Here we describe two independent but also interrelated visual systems, one cortical and the other subcortical, both of which have been modified and expanded in primates for different functions. Available evidence suggests that while the cortical visual system mainly functions to give primates the ability to assess and adjust to fluid social and ecological environments, the subcortical visual system appears to function as a rapid detector and first responder when time is of the essence, i.e., when survival requires very quick action. We focus here on the subcortical visual system with a review of behavioral and neurophysiological evidence that demonstrates its sensitivity to particular, often emotionally charged, ecological and social stimuli, i.e., snakes and fearful and aggressive facial expressions in conspecifics. We also review the literature on subcortical involvement during another, less emotional, situation that requires rapid detection and response, visually guided reaching and grasping during locomotion, to further emphasize our argument that the subcortical visual system evolved as a rapid detector/first responder, a function that remains in place today. Finally, we argue that investigating deficits in this subcortical system may provide greater understanding of Parkinson's disease and autism spectrum disorders (ASD).
Affiliation(s)
- Sandra C. Soares
- Department of Education and Psychology, CINTESIS.UA, University of Aveiro, Aveiro, Portugal
- Division of Psychology, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- William James Research Center, Instituto Superior de Psicologia Aplicada, Lisbon, Portugal
| | - Rafael S. Maior
- Division of Psychology, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Department of Physiological Sciences, Primate Center, Institute of Biology, University of Brasília, Brasília, Brazil
| | - Lynne A. Isbell
- Department of Anthropology, University of California, Davis, CA, USA
| | - Carlos Tomaz
- Department of Physiological Sciences, Primate Center, Institute of Biology, University of Brasília, Brasília, Brazil
- Ceuma University, Neuroscience Research Coordination, São Luís, Brazil
| | - Hisao Nishijo
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
| |
|
17
|
Hammond‐Kenny A, Bajo VM, King AJ, Nodal FR. Behavioural benefits of multisensory processing in ferrets. Eur J Neurosci 2017; 45:278-289. [PMID: 27740711 PMCID: PMC5298019 DOI: 10.1111/ejn.13440] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2016] [Revised: 09/22/2016] [Accepted: 10/10/2016] [Indexed: 12/29/2022]
Abstract
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour.
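The race model inequality analysis mentioned above tests whether bimodal reaction times are too fast to be explained by probability summation: Miller's bound states that the bimodal CDF may not exceed the sum of the unimodal CDFs. A minimal sketch with toy data (the function name and RT values are illustrative assumptions, not the study's analysis code):

```python
import numpy as np

def race_model_bound(rt_a, rt_v, rt_av, t_grid):
    """Empirical CDFs of unimodal and bimodal reaction times, plus
    Miller's race-model bound F_A(t) + F_V(t). Bimodal CDF values that
    exceed the bound indicate neural integration rather than
    probability summation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    f_a, f_v, f_av = cdf(rt_a, t_grid), cdf(rt_v, t_grid), cdf(rt_av, t_grid)
    bound = np.minimum(f_a + f_v, 1.0)
    return f_av, bound

# Toy RTs (ms): bimodal responses faster than either unimodal condition.
rt_a  = [250, 270, 290, 310]
rt_v  = [260, 280, 300, 320]
rt_av = [200, 220, 240, 260]
t = np.array([230.0])
f_av, bound = race_model_bound(rt_a, rt_v, rt_av, t)
print(f_av[0], bound[0])  # 0.5 0.0: a violation at t = 230 ms
```

Here the bimodal CDF (0.5) exceeds the bound (0.0) at 230 ms, the signature the study interprets as neural integration for approach-to-target responses but not for head orienting.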
Affiliation(s)
- Amy Hammond‐Kenny
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Fernando R. Nodal
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
| |
|
18
|
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. [PMID: 26448341 DOI: 10.1111/ejn.13093] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2014] [Revised: 09/14/2015] [Accepted: 09/30/2015] [Indexed: 11/27/2022]
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms) to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models using a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to the motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
| |
|
19
|
Quinlivan B, Butler JS, Beiser I, Williams L, McGovern E, O'Riordan S, Hutchinson M, Reilly RB. Application of virtual reality head mounted display for investigation of movement: a novel effect of orientation of attention. J Neural Eng 2016; 13:056006. [PMID: 27518212 DOI: 10.1088/1741-2560/13/5/056006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE To date, human kinematics research has relied on video processing, motion capture and magnetic search coil data acquisition techniques. However, the use of head-mounted display virtual reality systems, as a novel research tool, could facilitate novel studies into human movement and movement disorders. These systems have the unique ability of presenting immersive 3D stimuli while also allowing participants to make ecologically valid movement-based responses. APPROACH We employed one such system (Oculus Rift DK2) in this study to present visual stimuli and acquire head-turn data from a cohort of 40 healthy adults. Participants were asked to complete head movements towards eccentrically located visual targets following valid and invalid cues. Such tasks are commonly employed for investigating the effects of orientation of attention and are known as Posner cueing paradigms. Electrooculography was also recorded for a subset of 18 participants. MAIN RESULTS A delay was observed in the onset of head movement and saccade onset during invalid trials, both at the group and single-participant level. We found that participants initiated head turns 57.4 ms earlier during valid trials. A strong relationship between saccade onset and head movement onset was also observed during valid trials. SIGNIFICANCE This work represents the first time that the Posner cueing effect has been observed in the onset of head movement in humans. The results presented here highlight the role of head-mounted display systems as a novel and practical research tool for investigations of normal and abnormal movement patterns.
|
20
|
Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation. eNeuro 2016; 3:eN-TNWR-0040-16. [PMID: 27092335 PMCID: PMC4829728 DOI: 10.1523/eneuro.0040-16.2016] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 03/23/2016] [Indexed: 01/01/2023] Open
Abstract
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
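The T–G continuum analysis described above can be illustrated with a toy simulation: candidate spatial models are built as weighted combinations of target position (T) and final gaze position (G), and the weight whose predicted positions best explain trial-by-trial firing is kept. This is a hypothetical sketch (the study used nonparametric response-field fits; the Gaussian tuning, parameter names and fitting criterion here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
target = rng.uniform(-20, 20, n_trials)     # target position (deg)
gaze = target + rng.normal(0, 3, n_trials)  # gaze lands with variable error

# Simulated delay-period activity tuned to a point partway from T to G.
true_alpha = 0.7
tuned_pos = (1 - true_alpha) * target + true_alpha * gaze
rate = np.exp(-0.5 * ((tuned_pos - 5.0) / 8.0) ** 2) + rng.normal(0, 0.05, n_trials)

def fit_error(alpha):
    """Residual error of a fixed Gaussian tuning fit at one T-G intermediate."""
    pos = (1 - alpha) * target + alpha * gaze
    pred = np.exp(-0.5 * ((pos - 5.0) / 8.0) ** 2)
    return np.mean((rate - pred) ** 2)

alphas = np.linspace(0, 1, 11)
best = alphas[np.argmin([fit_error(a) for a in alphas])]
print(best)  # recovers an intermediate code near the simulated 0.7
```

Because gaze errors dissociate T from G on every trial, the residuals single out the intermediate frame, which is the logic behind the population analysis above.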
|
21
|
Taouali W, Goffart L, Alexandre F, Rougier NP. A parsimonious computational model of visual target position encoding in the superior colliculus. BIOLOGICAL CYBERNETICS 2015; 109:549-559. [PMID: 26342605 DOI: 10.1007/s00422-015-0660-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2014] [Accepted: 08/20/2015] [Indexed: 06/05/2023]
Abstract
The superior colliculus (SC) is a brainstem structure at the crossroad of multiple functional pathways. Several neurophysiological studies suggest that the population of active neurons in the SC encodes the location of a visual target to foveate, pursue or attend to. Although extensive research has been carried out on computational modeling, most of the reported models are based on complex mechanisms and explain a limited number of experimental results. This suggests that a key aspect may have been overlooked in the design of previous computational models. After a careful study of the literature, we hypothesized that the representation of the whole retinal stimulus (not only its center) might play an important role in the dynamics of SC activity. To test this hypothesis, we designed a model of the SC built upon three well-accepted principles: the log-polar representation of the visual field onto the SC, the interplay between a center excitation and a surround inhibition, and a simple neuronal dynamics such as that proposed by dynamic neural field theory. Results show that the retinotopic organization of the collicular activity conveys an implicit computation that deeply impacts the target selection process.
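The ingredients named above, center excitation, surround inhibition, and simple neural-field dynamics, can be sketched in one dimension. This is a minimal illustration in the spirit of the model, not the authors' implementation; all parameter values are assumptions:

```python
import numpy as np

# Minimal 1-D dynamic neural field: a difference-of-Gaussians lateral
# kernel (local excitation, broader inhibition) and leaky rate dynamics
# integrated with forward Euler. Parameters are illustrative.
n, dt, tau = 100, 0.1, 1.0
x = np.linspace(-1.0, 1.0, n)

def gauss(center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

d = x[:, None] - x[None, :]
kernel = 1.5 * np.exp(-0.5 * (d / 0.1) ** 2) - 0.75 * np.exp(-0.5 * (d / 0.5) ** 2)

# Two competing retinotopic stimuli of unequal strength.
stim = 1.0 * gauss(-0.4, 0.1) + 0.8 * gauss(0.4, 0.1)

u = np.zeros(n)
for _ in range(300):
    rate = np.maximum(u, 0.0)                  # rectified firing rate
    u += dt / tau * (-u + stim + kernel @ rate / n)

# Surround inhibition lets the stronger stimulus win the competition:
print(x[np.argmax(u)] < 0)  # True: the peak sits at the stronger (-0.4) input
```

The winner-take-most behavior emerges from the kernel alone, which is the kind of implicit computation the abstract attributes to the retinotopic organization of collicular activity.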
Affiliation(s)
- Wahiba Taouali
- Institut de Neurobiologie de la Méditerranée, INSERM, UMR 901, Aix-Marseille University, Marseille, France.
| | - Laurent Goffart
- Institut de Neurosciences de la Timone, CNRS, UMR 7289, Aix-Marseille University, Marseille, France
| | - Frédéric Alexandre
- Inria Bordeaux Sud-Ouest, Talence, France
- LaBRI, Université de Bordeaux, Bordeaux INP, UMR 5800, Centre National de la Recherche Scientifique, Talence, France
- Institut des Maladies Neurodégénératives, Université de Bordeaux, UMR 5293, Centre National de la Recherche Scientifique, Bordeaux, France
| | - Nicolas P Rougier
- Inria Bordeaux Sud-Ouest, Talence, France
- LaBRI, Université de Bordeaux, Bordeaux INP, UMR 5800, Centre National de la Recherche Scientifique, Talence, France
- Institut des Maladies Neurodégénératives, Université de Bordeaux, UMR 5293, Centre National de la Recherche Scientifique, Bordeaux, France
| |
|
22
|
Godfroy-Cooper M, Sandor PMB, Miller JD, Welch RB. The interaction of vision and audition in two-dimensional space. Front Neurosci 2015; 9:311. [PMID: 26441492 PMCID: PMC4585004 DOI: 10.3389/fnins.2015.00311] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2014] [Accepted: 08/19/2015] [Indexed: 11/29/2022] Open
Abstract
Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional (2D) frontal field. The results are reported in terms of variable error, constant error and local distortion. The results confirmed that auditory and visual maps of the egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy within which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we described how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.
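The MLE prediction tested above has a closed form: under inverse-variance weighting, the bimodal variance is the product of the unimodal variances over their sum, so bimodal precision always matches or beats the best unimodal precision. A short numerical sketch (the precision values are assumptions for illustration, not the study's data):

```python
import numpy as np

def mle_bimodal(sigma_a, sigma_v):
    """Maximum-likelihood (inverse-variance) combination of auditory and
    visual location estimates: returns the predicted bimodal standard
    deviation and the weight given to the visual cue."""
    var_a, var_v = sigma_a ** 2, sigma_v ** 2
    var_av = var_a * var_v / (var_a + var_v)
    w_v = var_a / (var_a + var_v)
    return np.sqrt(var_av), w_v

# Illustrative precisions: vision more precise than audition in azimuth.
sigma_av, w_v = mle_bimodal(sigma_a=4.0, sigma_v=2.0)
print(round(sigma_av, 3), round(w_v, 3))  # -> 1.789 0.8
```

The predicted bimodal scatter (1.79) is smaller than the better unimodal scatter (2.0), with 80% of the weight on the more precise visual cue, which is the signature of optimal integration the study reports.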
Affiliation(s)
- Martine Godfroy-Cooper
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA; San Jose State University Research Foundation, San José, CA, USA
| | - Patrick M B Sandor
- Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle, Brétigny-sur-Orge, France; Aix Marseille Université, Centre National de la Recherche Scientifique, ISM UMR 7287, Marseille, France
| | - Joel D Miller
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA; San Jose State University Research Foundation, San José, CA, USA
| | - Robert B Welch
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
| |
|
23
|
Kardamakis AA, Saitoh K, Grillner S. Tectal microcircuit generating visual selection commands on gaze-controlling neurons. Proc Natl Acad Sci U S A 2015; 112:E1956-65. [PMID: 25825743 PMCID: PMC4403191 DOI: 10.1073/pnas.1504866112] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
The optic tectum (called the superior colliculus in mammals) is critical for eye-head gaze shifts as we navigate in the terrain and need to adapt our movements to the visual scene. The neuronal mechanisms underlying the tectal contribution to stimulus selection and gaze reorientation remain, however, unclear at the microcircuit level. To analyze this complex yet phylogenetically conserved sensorimotor system, we developed a novel in vitro preparation in the lamprey that maintains the eye and midbrain intact and allows for whole-cell recordings from prelabeled tectal gaze-controlling cells in the deep layer while visual stimuli are delivered. We found that receptive field activation of these cells provides monosynaptic retinal excitation followed by local GABAergic inhibition (feedforward). The entire remaining retina, on the other hand, elicits only inhibition (surround inhibition). If two stimuli are delivered simultaneously, one inside and one outside the receptive field, the former excitatory response is suppressed. When local inhibition is pharmacologically blocked, the suppression induced by competing stimuli is canceled. We suggest that this rivalry between visual areas across the tectal map is triggered through long-range inhibitory tectal connections. Selection commands conveyed via gaze-controlling neurons in the optic tectum are thus formed through synaptic integration of local retinotopic excitation and global tectal inhibition. We anticipate that this mechanism not only exists in the lamprey but is also conserved throughout vertebrate evolution.
Affiliation(s)
- Andreas A Kardamakis
- Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet, SE-17177 Stockholm, Sweden
| | - Kazuya Saitoh
- Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet, SE-17177 Stockholm, Sweden; Faculty of Education, Kumamoto University, Kumamoto 860-8556, Japan
| | - Sten Grillner
- Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet, SE-17177 Stockholm, Sweden
| |
|
24
|
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. [PMID: 25491118 PMCID: PMC4585524 DOI: 10.1093/cercor/bhu279] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology
| | - Morteza Sadeh
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; School of Kinesiology and Health Sciences
| | - Gerald P Keith
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
| | - Xiaogang Yan
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
| | - Hongying Wang
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
| | - John Douglas Crawford
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology; School of Kinesiology and Health Sciences; Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
| |
|
25
|
Cerkevich CM, Lyon DC, Balaram P, Kaas JH. Distribution of cortical neurons projecting to the superior colliculus in macaque monkeys. Eye Brain 2014; 2014:121-137. [PMID: 25663799 PMCID: PMC4316385 DOI: 10.2147/eb.s53613] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
To better reveal the pattern of corticotectal projections to the superficial layers of the superior colliculus (SC), we made a total of ten retrograde tracer injections into the SC of three macaque monkeys (Macaca mulatta). The majority of these injections were in the superficial layers of the SC, which process visual information. To isolate inputs to the purely visual layers in the superficial SC from those inputs to the motor and multisensory layers deeper in the SC, two injections were placed to include the intermediate and deep layers of the SC. In another case, an injection was placed in the medial pulvinar, a nucleus not known to be strongly connected with visual cortex, to identify possible projections from tracer spread past the lateral boundary of the SC. Four conclusions are supported by the results: 1) all early visual areas of cortex, including V1, V2, V3, and the middle temporal area, project to the superficial layers of the SC; 2) with the possible exception of the frontal eye field, few areas of cortex outside of the early visual areas project to the superficial SC, although many do project to the intermediate and deep layers of the SC; 3) roughly matching retinotopy is conserved in the projections of visual areas to the SC; and 4) the projections from different visual areas are similarly dense, although projections from early visual areas appear somewhat denser than those of higher order visual areas in macaque cortex.
Affiliation(s)
- Christina M Cerkevich, Department of Neurobiology, University of Pittsburgh School of Medicine, Systems Neuroscience Institute, Pittsburgh, PA, USA
- David C Lyon, Department of Anatomy and Neurobiology, University of California, Irvine, CA, USA
- Pooja Balaram, Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Jon H Kaas, Department of Psychology, Vanderbilt University, Nashville, TN, USA
26
27
Endogenous attention signals evoked by threshold contrast detection in human superior colliculus. J Neurosci 2014; 34:892-900. [PMID: 24431447] [DOI: 10.1523/jneurosci.3026-13.2014]
Abstract
Human superior colliculus (SC) responds in a retinotopically selective manner when attention is deployed on a high-contrast visual stimulus using a discrimination task. To further elucidate the role of SC in endogenous visual attention, high-resolution fMRI was used to demonstrate that SC also exhibits a retinotopically selective response for covert attention in the absence of significant visual stimulation using a threshold-contrast detection task. SC neurons have a laminar organization according to their function, with visually responsive neurons present in the superficial layers and visuomotor neurons in the intermediate layers. The results show that the response evoked by the threshold-contrast detection task is significantly deeper than the response evoked by the high-contrast speed discrimination task, reflecting a functional dissociation of the attentional enhancement of visuomotor and visual neurons, respectively. Such a functional dissociation of attention within SC laminae provides a subcortical basis for the oculomotor theory of attention.
28
Takahashi M, Sugiuchi Y, Shinoda Y. Convergent synaptic inputs from the caudal fastigial nucleus and the superior colliculus onto pontine and pontomedullary reticulospinal neurons. J Neurophysiol 2013; 111:849-67. [PMID: 24285869] [DOI: 10.1152/jn.00634.2013]
Abstract
The caudal fastigial nucleus (FN) is known to be related to the control of eye movements and projects mainly to the contralateral reticular nuclei, where excitatory and inhibitory burst neurons for saccades exist [the caudal portion of the nucleus reticularis pontis caudalis (NRPc) and the rostral portion of the nucleus reticularis gigantocellularis (NRG), respectively]. However, the exact reticular neurons targeted by caudal fastigioreticular cells remain unknown. We tried to determine the target reticular neurons of the caudal FN and superior colliculus (SC) by recording intracellular potentials from neurons in the NRPc and NRG of anesthetized cats. Neurons in the rostral NRG received bilateral, monosynaptic excitation from the caudal FNs, with contralateral predominance. They also received strong monosynaptic excitation from the rostral and caudal contralateral SC, and disynaptic excitation from the rostral ipsilateral SC. These reticular neurons with caudal fastigial monosynaptic excitation were not activated antidromically from the contralateral abducens nucleus, but most of them were reticulospinal neurons (RSNs) that were activated antidromically from the cervical cord. RSNs in the caudal NRPc received very weak monosynaptic excitation from only the contralateral caudal FN, and received either monosynaptic excitation only from the contralateral caudal SC, or monosynaptic and disynaptic excitation from the contralateral caudal and ipsilateral rostral SC, respectively. These results suggest that the caudal FN also helps to control head movements via RSNs targeted by the SC, and that these RSNs with topographic SC input play different functional roles in head movements.
Affiliation(s)
- Mayu Takahashi, Department of Systems Neurophysiology, Graduate School of Medicine, Tokyo Medical and Dental University, Tokyo, Japan
29
Abstract
The mechanisms by which the human brain controls eye movements are reasonably well understood, but those for the head less so. Here, we show that the mechanisms for keeping the head aimed at a stationary target follow strategies similar to those for holding the eyes steady on stationary targets. Specifically, we applied the neural integrator hypothesis that originally was developed for holding the eyes still in eccentric gaze positions to describe how the head is held still when turned toward an eccentric target. We found that normal humans make head movements consistent with the neural integrator hypothesis, except that additional sensory feedback is needed, from proprioceptors in the neck, to keep the head on target. We also show that the complicated patterns of head movements in patients with cervical dystonia can be predicted by deficits in a neural integrator for head motor control. These results support ideas originally developed from animal studies that suggest fundamental similarities between oculomotor and cephalomotor control, as well as a conceptual framework for cervical dystonia that departs considerably from current clinical views.
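The neural integrator account summarized in this abstract lends itself to a very small simulation. The sketch below is illustrative only: the time constant, feedback gain, and target eccentricity are arbitrary placeholders, not values estimated in the study. It shows how a leaky integrator alone lets the head drift back from an eccentric target, and how a neck-proprioceptive feedback term holds the head near the target.

```python
def hold_head(t_end=5.0, dt=0.001, target=30.0, tau_leak=1.0, k_fb=0.0):
    """Hold the head at an eccentric target with a leaky neural integrator.

    tau_leak: integrator time constant (s); k_fb: gain of the neck
    proprioceptive feedback that corrects drift. All values are
    illustrative placeholders, not estimates from the study.
    Returns the final head position (deg) after t_end seconds.
    """
    pos = target                            # head starts on target
    for _ in range(int(t_end / dt)):
        drift = -pos / tau_leak             # leaky integrator decays to center
        correction = k_fb * (target - pos)  # proprioceptive error signal
        pos += dt * (drift + correction)
    return pos

no_fb = hold_head(k_fb=0.0)    # head drifts back toward center
with_fb = hold_head(k_fb=5.0)  # feedback holds the head near the target
```

Without feedback the head decays to within a fraction of a degree of center; with feedback it settles near the leak/feedback equilibrium (25 deg with these placeholder gains), illustrating the abstract's point that an imperfect integrator needs proprioceptive feedback to keep the head on target.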
30
Merker B. The efference cascade, consciousness, and its self: naturalizing the first person pivot of action control. Front Psychol 2013; 4:501. [PMID: 23950750] [PMCID: PMC3738861] [DOI: 10.3389/fpsyg.2013.00501]
Abstract
The 20 billion neurons of the neocortex have a mere hundred thousand motor neurons by which to express cortical contents in overt behavior. Implemented through a staggered cortical "efference cascade" originating in the descending axons of layer five pyramidal cells throughout the neocortical expanse, this steep convergence accomplishes final integration for action of cortical information through a system of interconnected subcortical way stations. Coherent and effective action control requires the inclusion of a continually updated joint "global best estimate" of current sensory, motivational, and motor circumstances in this process. I have previously proposed that this running best estimate is extracted from cortical probabilistic preliminaries by a subcortical neural "reality model" implementing our conscious sensory phenomenology. As such it must exhibit first person perspectival organization, suggested to derive from formatting requirements of the brain's subsystem for gaze control, with the superior colliculus at its base. Gaze movements provide the leading edge of behavior by capturing targets of engagement prior to contact. The rotation-based geometry of directional gaze movements places their implicit origin inside the head, a location recoverable by cortical probabilistic source reconstruction from the rampant primary sensory variance generated by the incessant play of collicularly triggered gaze movements. At the interface between cortex and colliculus lies the dorsal pulvinar. Its unique long-range inhibitory circuitry may precipitate the brain's global best estimate of its momentary circumstances through multiple constraint satisfaction across its afferents from numerous cortical areas and colliculus. As phenomenal content of our sensory awareness, such a global best estimate would exhibit perspectival organization centered on a purely implicit first person origin, inherently incapable of appearing as a phenomenal content of the sensory space it serves.
31
Wulff S, Bosco A, Havermann K, Placenti G, Fattori P, Lappe M. Eye position effects in saccadic adaptation in macaque monkeys. J Neurophysiol 2012; 108:2819-26. [DOI: 10.1152/jn.00212.2012]
Abstract
The saccadic amplitude of humans and monkeys can be adapted using intrasaccadic target steps in the McLaughlin paradigm. It is generally believed that, as a result of a purely retinal reference frame, after adaptation of a saccade of a certain amplitude and direction, saccades of the same amplitude and direction are all adapted to the same extent, independently from the initial eye position. However, recent studies in humans have put the pure retinal coding in doubt by revealing that the initial eye position has an effect on the transfer of adaptation to saccades of different starting points. Since humans and monkeys show some species differences in adaptation, we tested the eye position dependence in monkeys. Two trained Macaca fascicularis performed reactive rightward saccades from five equally horizontally distributed starting positions. All saccades were made to targets with the same retinotopic motor vector. In each session, the saccades that started at one particular initial eye position, the adaptation position, were adapted to shorter amplitude, and the adaptation of the saccades starting at the other four positions was measured. The results show that saccades that started at the other positions were less adapted than saccades that started at the adaptation position. With increasing distance between the starting position of the test saccade and the adaptation position, the amplitude change of the test saccades decreased with a Gaussian profile. We conclude that gain-decreasing saccadic adaptation in macaques is specific to the initial eye position at which the adaptation has been induced.
Affiliation(s)
- Svenja Wulff, Department of Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Annalisa Bosco, Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Katharina Havermann, Department of Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Giacomo Placenti, Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Patrizia Fattori, Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Markus Lappe, Department of Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeld Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
32
Knight TA. Contribution of the frontal eye field to gaze shifts in the head-unrestrained rhesus monkey: neuronal activity. Neuroscience 2012; 225:213-36. [PMID: 22944386] [DOI: 10.1016/j.neuroscience.2012.08.050]
Abstract
The frontal eye field (FEF) has a strong influence on saccadic eye movements with the head restrained. With the head unrestrained, eye saccades combine with head movements to produce large gaze shifts, and microstimulation of the FEF evokes both eye and head movements. To test whether the dorsomedial FEF provides commands for the entire gaze shift or its separate eye and head components, we recorded extracellular single-unit activity in monkeys trained to make large head-unrestrained gaze shifts. We recorded 80 units active during gaze shifts, and closely examined 26 of these that discharged a burst of action potentials that preceded horizontal gaze movements. These units were movement or visuomovement related and most exhibited open movement fields with respect to amplitude. To reveal the relations of burst parameters to gaze, eye, and/or head movement metrics, we used behavioral dissociations of gaze, eye, and head movements and linear regression analyses. The burst number of spikes (NOS) was strongly correlated with movement amplitude and burst temporal parameters were strongly correlated with movement temporal metrics for eight gaze-related burst neurons and five saccade-related burst neurons. For the remaining 13 neurons, the NOS was strongly correlated with the head movement amplitude, but burst temporal parameters were most strongly correlated with eye movement temporal metrics (head-eye-related burst neurons, HEBNs). These results suggest that FEF units do not encode a command for the unified gaze shift only; instead, different units may carry signals related to the overall gaze shift or its eye and/or head components. Moreover, the HEBNs exhibit bursts whose magnitude and timing may encode a head displacement signal and a signal that influences the timing of the eye saccade, thereby serving as a mechanism for coordinating the eye and head movements of a gaze shift.
Affiliation(s)
- T A Knight, Graduate Program in Neurobiology and Behavior, Washington National Primate Research Center, University of Washington, Seattle, WA 98195-7330, United States
33
Monteon JA, Avillac M, Yan X, Wang H, Crawford JD. Neural mechanisms for predictive head movement strategies during sequential gaze shifts. J Neurophysiol 2012; 108:2689-707. [PMID: 22933720] [DOI: 10.1152/jn.00222.2012]
Abstract
Humans adopt very different head movement strategies for different gaze behaviors, for example, when playing sports versus watching sports on television. Such strategy switching appears to depend on both context and expectation of future gaze positions. Here, we explored the neural mechanisms for such behaviors by training three monkeys to make head-unrestrained gaze shifts toward eccentric radial targets. A randomized color cue provided predictive information about whether that target would be followed by either a return gaze shift to center or another, more eccentric gaze shift, but otherwise animals were allowed to develop their own eye-head coordination strategy. In the first two animals we then stimulated the frontal eye fields (FEF) in conjunction with the color cue, and in the third animal we recorded from neurons in the superior colliculus (SC). Our results show that 1) monkeys can optimize eye-head coordination strategies from trial to trial, based on learned associations between color cues and future gaze sequences, 2) these cue-dependent coordination strategies were preserved in gaze saccades evoked during electrical stimulation of the FEF, and 3) two types of SC responses (the saccade burst and a more prolonged response related to head movement) modulated with these cue-dependent strategies, although only one (the saccade burst) varied in a predictive fashion. These data show that from one moment to the next, the brain can use contextual sensory cues to set up internal "coordination states" that convert fixed cortical gaze commands into the brain stem signals required for predictive head motion.
Affiliation(s)
- Jachin A Monteon, York Centre for Vision Research, York University, Toronto, Ontario, Canada
34
Lee J, Groh JM. Auditory signals evolve from hybrid- to eye-centered coordinates in the primate superior colliculus. J Neurophysiol 2012; 108:227-42. [PMID: 22514295] [DOI: 10.1152/jn.00706.2011]
Abstract
Visual and auditory spatial signals initially arise in different reference frames. It has been postulated that auditory signals are translated from a head-centered to an eye-centered frame of reference compatible with the visual spatial maps, but, to date, only various forms of hybrid reference frames for sound have been identified. Here, we show that the auditory representation of space in the superior colliculus involves a hybrid reference frame immediately after the sound onset but evolves to become predominantly eye centered, and more similar to the visual representation, by the time of a saccade to that sound. Specifically, during the first 500 ms after the sound onset, auditory response patterns (N = 103) were usually neither head nor eye centered: 64% of neurons showed such a hybrid pattern, whereas 29% were more eye centered and 8% were more head centered. This differed from the pattern observed for visual targets (N = 156): 86% were eye centered, <1% were head centered, and only 13% exhibited a hybrid of both reference frames. For auditory-evoked activity observed within 20 ms of the saccade (N = 154), the proportion of eye-centered response patterns increased to 69%, whereas the hybrid and head-centered response patterns dropped to 30% and <1%, respectively. This pattern approached, although did not quite reach, that observed for saccade-related activity for visual targets: 89% were eye centered, 11% were hybrid, and <1% were head centered (N = 162). The plainly eye-centered visual response patterns and predominantly eye-centered auditory motor response patterns lie in marked contrast to our previous study of the intraparietal cortex, where both visual and auditory sensory and motor-related activity used a predominantly hybrid reference frame (Mullette-Gillman et al. 2005, 2009). 
Our present findings indicate that auditory signals are ultimately translated into a reference frame roughly similar to that used for vision, but suggest that such signals might emerge only in motor areas responsible for directing gaze to visual and auditory stimuli.
Affiliation(s)
- Jungah Lee, Center for Cognitive Neuroscience, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
35
Interactions between gaze-evoked blinks and gaze shifts in monkeys. Exp Brain Res 2011; 216:321-39. [PMID: 22083094] [DOI: 10.1007/s00221-011-2937-z]
Abstract
Rapid eyelid closure, or a blink, often accompanies head-restrained and head-unrestrained gaze shifts. This study examines the interactions between such gaze-evoked blinks and gaze shifts in monkeys. Blink probability increases with gaze amplitude and at a faster rate for head-unrestrained movements. Across animals, blink likelihood is inversely correlated with the average gaze velocity of large-amplitude control movements. Gaze-evoked blinks induce robust perturbations in eye velocity. Peak and average velocities are reduced, duration is increased, but accuracy is preserved. The temporal features of the perturbation depend on factors such as the time of blink relative to gaze onset, inherent velocity kinematics of control movements, and perhaps initial eye-in-head position. Although variable across animals, the initial effect is a reduction in eye velocity, followed by a reacceleration that yields two or more peaks in its waveform. Interestingly, head velocity is not attenuated; instead, it peaks slightly later and with a larger magnitude. Gaze latency is slightly reduced on trials with gaze-evoked blinks, although the effect was more variable during head-unrestrained movements; no reduction in head latency is observed. Preliminary data also demonstrate a similar perturbation of gaze-evoked blinks during vertical saccades. The results are compared with previously reported effects of reflexive blinks (evoked by air-puff delivered to one eye or supraorbital nerve stimulation) and discussed in terms of effects of blinks on saccadic suppression, neural correlates of the altered eye velocity signals, and implications on the hypothesis that the attenuation in eye velocity is produced by a head movement command.
36
Saeb S, Weber C, Triesch J. Learning the optimal control of coordinated eye and head movements. PLoS Comput Biol 2011; 7:e1002253. [PMID: 22072953] [PMCID: PMC3207939] [DOI: 10.1371/journal.pcbi.1002253]
Abstract
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration, and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements. Human beings and many other species redirect their gaze towards targets of interest through rapid gaze shifts known as saccades. These are made approximately three to four times every second, and larger saccades result from fast and concurrent movement of the animal's eyes and head. Experimental studies have revealed that during saccades, the motor system follows certain principles, such as respecting a specific relationship between the relative contribution of eye and head motor systems to total gaze shift. Various researchers have hypothesized that these principles are implications of some optimality criteria in the brain, but it remains unclear how the brain can learn such an optimal behavior. We propose a new model that uses a plausible learning mechanism to satisfy an optimality criterion. We show that after learning, the model is able to reproduce motor behavior with biologically plausible properties. In addition, it predicts the nature of the learning signals. Further experimental research is necessary to test the validity of our model.
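The amplitude-dependent division of a gaze shift between eye and head described in this abstract is often summarized by a simple threshold rule. The sketch below is a descriptive caricature of head-free data, not the authors' learned controller; the threshold, slope, and oculomotor range are assumed values chosen only for illustration.

```python
def eye_head_split(gaze_amplitude_deg, head_threshold=20.0, head_slope=0.9,
                   eye_range=35.0):
    """Split a desired gaze shift into eye and head components.

    Threshold rule often used to summarize head-free data: the head
    contributes nothing below head_threshold, then takes a growing
    share of the shift; the eye covers the remainder, capped by its
    oculomotor range. All parameter values are assumed, not fitted.
    """
    head = max(0.0, head_slope * (gaze_amplitude_deg - head_threshold))
    head = min(head, gaze_amplitude_deg)  # head cannot overshoot the goal
    eye = min(gaze_amplitude_deg - head, eye_range)
    return eye, head

eye, head = eye_head_split(10.0)    # small shift: eye does all the work
eye2, head2 = eye_head_split(60.0)  # large shift: head carries most of it
```

For a 10 deg shift the head stays still; for a 60 deg shift the head component exceeds the eye component, the qualitative pattern the model is tested against.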
Affiliation(s)
- Sohrab Saeb, Frankfurt Institute for Advanced Studies (FIAS), Goethe University Frankfurt, Germany
37
Abstract
The mammalian superior colliculus (SC) and its nonmammalian homolog, the optic tectum, constitute a major node in processing sensory information, incorporating cognitive factors, and issuing motor commands. The resulting action, to orient toward or away from a stimulus, can be accomplished as an integrated movement across oculomotor, cephalomotor, and skeletomotor effectors. The SC also participates in preserving fixation during intersaccadic intervals. This review highlights the repertoire of movements attributed to SC function and analyzes the significance of results obtained from causality-based experiments (microstimulation and inactivation). The mechanisms potentially used to decode the population activity in the SC into an appropriate movement command are also discussed.
Affiliation(s)
- Neeraj J Gandhi, Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
38
39
40
Populin LC, Rajala AZ. Target modality determines eye-head coordination in nonhuman primates: implications for gaze control. J Neurophysiol 2011; 106:2000-11. [PMID: 21795625] [DOI: 10.1152/jn.00331.2011]
Abstract
We have studied eye-head coordination in nonhuman primates with acoustic targets after finding that they are unable to make accurate saccadic eye movements to targets of this type with the head restrained. Three male macaque monkeys with experience in localizing sounds for rewards by pointing their gaze to the perceived location of sources served as subjects. Visual targets were used as controls. The experimental sessions were configured to minimize the chances that the subject would be able to predict the modality of the target as well as its location and time of presentation. The data show that eye and head movements are coordinated differently to generate gaze shifts to acoustic targets. Chiefly, the head invariably started to move before the eye and contributed more to the gaze shift. These differences were more striking for gaze shifts of <20-25° in amplitude, to which the head contributes very little or not at all when the target is visual. Thus acoustic and visual targets trigger gaze shifts with different eye-head coordination. This, coupled with the fact that anatomic evidence implicates the superior colliculus as the link between auditory spatial processing and the motor system, suggests that separate signals are likely generated within this midbrain structure.
Affiliation(s)
- Luis C Populin, Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin, USA
41
Perceived touch location is coded using a gaze signal. Exp Brain Res 2011; 213:229-34. [PMID: 21559744] [DOI: 10.1007/s00221-011-2713-0]
Abstract
The location of a touch to the skin, first coded in body coordinates, may be transformed into retinotopic coordinates to facilitate visual-tactile integration. In order for the touch location to be transformed into a retinotopic reference frame, the location of the eyes and head must be taken into account. Previous studies have found eye position-related errors (Harrar and Harris in Exp Brain Res 203:615-620, 2009) and head position-related errors (Ho and Spence Brain Res 1144:136-141, 2007) in tactile localization, indicating that imperfect versions of eye and head signals may be used in the body-to-visual coordinate transformation. Here, we investigated the combined effects of head and eye position on the perceived location of a mechanical touch to the arm. Subjects reported the perceived position of a touch that was presented while their head was positioned to the left, right, or center of the body and their eyes were positioned to the left, right, or center in their orbits. The perceived location of a touch shifted in the direction of both head and the eyes by approximately the same amount. We interpret these shifts as being consistent with touch location being coded in a visual reference frame with a gaze signal used to compute the transformation.
42
Groh JM. Effects of initial eye position on saccades evoked by microstimulation in the primate superior colliculus: implications for models of the SC read-out process. Front Integr Neurosci 2011; 4:130. [PMID: 21267431] [PMCID: PMC3025650] [DOI: 10.3389/fnint.2010.00130]
Abstract
The motor layers of the superior colliculus (SC) are thought to specify saccade amplitude and direction, independent of initial eye position. However, recent evidence suggests that eye position can modulate the level of activity of SC motor neurons. In this study, we tested whether initial eye position has an effect on microstimulation-evoked saccade amplitude. High (>300 Hz) and low (<300 Hz) frequency microstimulation was applied to 30 sites in the rostral part of the SC of two monkeys while they fixated one of six different locations. We found that the amplitude of the evoked saccades decreased with more contralateral initial eye positions. This effect was more pronounced in low frequency- compared to high frequency-evoked saccades, although it was present for both. Replication of these findings in head-free experiments showed that the effect of initial eye position was not due to physical constraints imposed by the oculomotor range. In addition to the effect of eye position on saccade amplitude, we also observed an increase in saccade latency and a decrease in the probability that microstimulation would evoke a saccade for low frequency stimulation at more contralateral eye positions. These findings suggest that an eye position signal can contribute to the read-out of the SC. Models of the saccadic pulse-step generator may need revision to incorporate an eye position modulation at the input stage.
Affiliation(s)
- Jennifer M Groh, Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
43
Chapman BB, Corneil BD. Neuromuscular recruitment related to stimulus presentation and task instruction during the anti-saccade task. Eur J Neurosci 2010; 33:349-60. [DOI: 10.1111/j.1460-9568.2010.07496.x]
44
Meeter M, Van der Stigchel S, Theeuwes J. A competitive integration model of exogenous and endogenous eye movements. Biol Cybern 2010; 102:271-291. [PMID: 20162429] [DOI: 10.1007/s00422-010-0365-y]
Abstract
We present a model of the eye movement system in which the programming of an eye movement is the result of the competitive integration of information in the superior colliculi (SC). This brain area receives input from occipital cortex, the frontal eye fields, and the dorsolateral prefrontal cortex, on the basis of which it computes the location of the next saccadic target. Two critical assumptions in the model are that cortical inputs are not only excitatory, but can also inhibit saccades to specific locations, and that the SC continue to influence the trajectory of a saccade while it is being executed. With these assumptions, we account for many neurophysiological and behavioral findings from eye movement research. Interactions within the saccade map are shown to account for effects of distractors on saccadic reaction time (SRT) and saccade trajectory, including the global effect and oculomotor capture. In addition, the model accounts for express saccades, the gap effect, saccadic reaction times for antisaccades, and recorded responses from neurons in the SC and frontal eye fields in these tasks.
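The competitive integration scheme this abstract describes can be illustrated with a toy one-dimensional saccade map in which each node excites near neighbors and inhibits distant ones. This is a minimal caricature, not the published model: the difference-of-Gaussians kernel, the gains, the clipping, and the activity-weighted readout are all assumptions made for illustration.

```python
import math

def sc_competition(inputs, n=41, sigma_ex=2.0, sigma_in=10.0,
                   w_in=0.5, steps=200, dt=0.05):
    """Toy 1-D competitive integration over a saccade map.

    Each node receives external input plus lateral feedback:
    short-range excitation and long-range inhibition (difference of
    Gaussians). The saccade endpoint is read out as the
    activity-weighted average map position. All parameters are
    illustrative assumptions, not the published model's values.
    """
    act = [0.0] * n

    def kernel(d):
        return (math.exp(-d * d / (2 * sigma_ex ** 2))
                - w_in * math.exp(-d * d / (2 * sigma_in ** 2)))

    for _ in range(steps):
        new = []
        for i in range(n):
            lateral = sum(kernel(i - j) * act[j] for j in range(n))
            drive = inputs.get(i, 0.0) + lateral
            a = act[i] + dt * (-act[i] + drive)
            new.append(max(0.0, min(a, 1.0)))  # keep activity in [0, 1]
        act = new

    total = sum(act)
    return sum(i * a for i, a in enumerate(act)) / total if total else None

# Two equally strong stimuli at map positions 15 and 25 compete, and the
# weighted readout lands near 20: a toy version of the "global effect".
endpoint = sc_competition({15: 0.5, 25: 0.5})
```

By symmetry the two input sites receive identical drive and mutual inhibition, so the readout averages them, which is the qualitative behavior the model uses to explain distractor effects on saccade endpoints.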
Affiliation(s)
- Martijn Meeter
- Department of Cognitive Psychology, Vrije Universiteit, Van Der Boechorststraat 1, 1081 BT, Amsterdam, The Netherlands.
|
45
|
Fuchs AF, Brettler S, Ling L. Head-free gaze shifts provide further insights into the role of the medial cerebellum in the control of primate saccadic eye movements. J Neurophysiol 2010; 103:2158-73. [PMID: 20164388] [PMCID: PMC2853288] [DOI: 10.1152/jn.91361.2008]
Abstract
This study examines how signals generated in the oculomotor cerebellum could be involved in the control of gaze shifts, which rapidly redirect the eyes from one object to another. Neurons in the caudal fastigial nucleus (cFN), the output of the oculomotor cerebellum, discharged when monkeys made horizontal head-unrestrained gaze shifts, composed of an eye saccade and a head movement. Eighty-seven percent of our neurons discharged a burst of spikes for both ipsiversive and contraversive gaze shifts. In both directions, burst end was much better timed with gaze end than was burst start with gaze start, was well correlated with eye end, and was poorly correlated with head end or the time of peak head velocity. Moreover, bursts accompanied all head-unrestrained gaze shifts whether the head moved or not. Therefore we conclude that the cFN is not part of the pathway that controls head movement. For contraversive gaze shifts, the early part of the burst was correlated with gaze acceleration. Thereafter, the burst of the neuronal population continued throughout the prolonged deceleration of large gaze shifts. For a majority of neurons, gaze duration was correlated with burst duration; for some, gaze amplitude was less well correlated with the number of spikes. Therefore we suggest that the population burst provides an acceleration boost for high acceleration (smaller) contraversive gaze shifts and helps maintain the drive required to extend the deceleration of large contraversive gaze shifts. In contrast, the ipsiversive population burst, which is less well correlated with gaze metrics but whose peak rate occurs before gaze end, seems responsible primarily for terminating the gaze shift.
Affiliation(s)
- Albert F Fuchs
- Washington National Primate Research Ctr., Univ. of Washington, Box 357330, 1705 NE Pacific St. HSB I421, Seattle, WA 98195-7330, USA.
|
46
|
Kardamakis AA, Grantyn A, Moschovakis AK. Neural network simulations of the primate oculomotor system. V. Eye-head gaze shifts. Biol Cybern 2010; 102:209-225. [PMID: 20094729] [DOI: 10.1007/s00422-010-0363-0]
Abstract
We examined the performance of a dynamic neural network that replicates much of the psychophysics and neurophysiology of eye-head gaze shifts without relying on gaze feedback control. For example, our model generates gaze shifts with ocular components that do not exceed 35 degrees in amplitude, whatever the size of the gaze shifts (up to 75 degrees in our simulations), without relying on a saturating nonlinearity to accomplish this. It reproduces the natural patterns of eye-head coordination in that head contributions increase and ocular contributions decrease together with the size of gaze shifts and this without compromising the accuracy of gaze realignment. It also accounts for the dependence of the relative contributions of the eyes and the head on the initial positions of the eyes, as well as for the position sensitivity of saccades evoked by electrical stimulation of the superior colliculus. Finally, it shows why units of the saccadic system could appear to carry gaze-related signals even if they do not operate within a gaze control loop and do not receive head-related information.
Affiliation(s)
- A A Kardamakis
- Institute of Applied and Computational Mathematics, FORTH, Heraklion, Crete, Greece
|
47
|
Nagy B, Corneil BD. Representation of horizontal head-on-body position in the primate superior colliculus. J Neurophysiol 2009; 103:858-74. [PMID: 20007503] [DOI: 10.1152/jn.00099.2009]
Abstract
Movement-related activity within the superior colliculus (SC) represents the desired displacement of an impending gaze shift. This representation must ultimately be transformed into position-based reference frames appropriate for coordinated eye-head gaze shifts. Parietal areas that project to the SC are modulated by the initial position of both the eye-re-head and head-re-body, and SC activity is modulated by eye-re-head position. These considerations led us to investigate whether SC activity is modulated by head-re-body position. We recorded activity from movement-related SC neurons while head-restrained monkeys performed a delayed-saccade task. Across blocks of trials, the horizontal position of the body was rotated under a space-fixed head to three to five different positions spanning ±25°. We observed a significant influence of body-under-head position on SC activity in 50/60 neurons. This influence was expressed predominantly as a linear gain field, scaling task-related SC activity without changing the location of the response field (linear gain fields explained ≥20% of the variance in neural activity in approximately 50% of our sample). Smaller nonlinear modulations were also observed in roughly 30% of our sample. SC activity was equally likely to increase or decrease as the body was rotated to the side of neuronal recording, and we found no systematic relationship between the directionality or magnitude of the linear gain field and recording location in the SC. We conclude that a signal conveying head-re-body position is present in the SC. Although the functional significance remains open, our findings are consistent with the SC contributing to a displacement-to-position transformation for oculomotor control.
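The linear gain-field result described here can be illustrated with a toy tuning model: body position rescales the amplitude of a Gaussian movement field without shifting its preferred target location. This is a sketch; the field center, width, baseline rate, and gain slope are illustrative assumptions, not values from the study.

```python
import numpy as np

def sc_rate(target_deg, body_deg, center=10.0, width=8.0,
            base=50.0, gain_slope=0.01):
    """Toy SC movement field multiplicatively scaled by a linear
    function of body-under-head position (a linear gain field)."""
    field = np.exp(-0.5 * ((target_deg - center) / width) ** 2)
    gain = 1.0 + gain_slope * body_deg
    return base * gain * field

targets = np.linspace(-30, 50, 81)
rates_neutral = sc_rate(targets, body_deg=0.0)   # body centered
rates_rotated = sc_rate(targets, body_deg=25.0)  # body rotated 25 deg

# The gain field rescales activity but leaves the response-field
# peak at the same target location.
peak_neutral = targets[np.argmax(rates_neutral)]
peak_rotated = targets[np.argmax(rates_rotated)]
```

With these illustrative numbers the rotated-body curve is a uniform 25% amplification of the neutral curve, which is the signature distinguishing a gain field from a response-field shift.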
Affiliation(s)
- Benjamin Nagy
- Canadian Institutes of Health Research Group in Action and Perception, University of Western Ontario, London, Ontario, Canada
|
48
|
Perkins E, Warren S, May PJ. The mesencephalic reticular formation as a conduit for primate collicular gaze control: tectal inputs to neurons targeting the spinal cord and medulla. Anat Rec (Hoboken) 2009; 292:1162-81. [PMID: 19645020] [DOI: 10.1002/ar.20935]
Abstract
The superior colliculus (SC), which directs orienting movements of both the eyes and head, is reciprocally connected to the mesencephalic reticular formation (MRF), suggesting the latter is involved in gaze control. The MRF has been provisionally subdivided to include a rostral portion, which subserves vertical gaze, and a caudal portion, which subserves horizontal gaze. Both regions contain cells projecting downstream that may provide a conduit for tectal signals targeting the gaze control centers which direct head movements. We determined the distribution of cells targeting the cervical spinal cord and rostral medullary reticular formation (MdRF), and investigated whether these MRF neurons receive input from the SC by the use of dual tracer techniques in Macaca fascicularis monkeys. Either biotinylated dextran amine or Phaseolus vulgaris leucoagglutinin was injected into the SC. Wheat germ agglutinin conjugated horseradish peroxidase was placed into the ipsilateral cervical spinal cord or medial MdRF to retrogradely label MRF neurons. A small number of medially located cells in the rostral and caudal MRF were labeled following spinal cord injections, and greater numbers were labeled in the same region following MdRF injections. In both cases, anterogradely labeled tectoreticular terminals were observed in close association with retrogradely labeled neurons. These close associations between tectoreticular terminals and neurons with descending projections suggest the presence of a trans-MRF pathway that provides a conduit for tectal control over head orienting movements. The medial location of these reticulospinal and reticuloreticular neurons suggests this MRF region may be specialized for head movement control.
Affiliation(s)
- Eddie Perkins
- Department of Anatomy, University of Mississippi Medical Center, Jackson, Mississippi 39216-4405, USA
|
49
|
Pérez ML, Shanbhag SJ, Peña JL. Auditory spatial tuning at the crossroads of the midbrain and forebrain. J Neurophysiol 2009; 102:1472-82. [PMID: 19571193] [DOI: 10.1152/jn.00400.2009]
Abstract
The barn owl's midbrain and forebrain contain neurons tuned to sound direction. The spatial receptive fields of these neurons result from sensitivity to combinations of interaural time (ITD) and level (ILD) differences over a broad frequency range. While a map of auditory space has been described in the midbrain, no similar topographic representation has been found in the forebrain. The first nuclei that belong exclusively to the forebrain and midbrain pathways are the thalamic nucleus ovoidalis (Ov) and the external nucleus of the inferior colliculus (ICx), respectively. The midbrain projects to the auditory thalamus before sharp spatial receptive fields emerge; although Ov and ICx receive projections from the same midbrain nuclei, they are not directly connected. We compared the spatial tuning in Ov and ICx. Thalamic neurons respond to a broader frequency range and their ITD and ILD tuning varied more across frequency. However, neurons in Ov showed spatial receptive fields as selective as neurons in ICx. Thalamic spatial receptive fields were tuned to frontal and contralateral space and correlated with their tuning to ITD and ILD. Our results indicate that spatial tuning emerges in both pathways by similar combination selectivity to ITD and ILD. However, the midbrain and the thalamus do not appear to repeat exactly the same processing, as indicated by the difference in frequency range and the broader tuning to binaural cues. The differences observed at the initial stages of these sound-localization pathways may reflect diverse functions and coding schemes of midbrain and forebrain.
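The combination selectivity described here, spatial receptive fields arising from joint tuning to ITD and ILD, can be sketched as the product of two independent tuning curves over a cue grid. The tuning centers and widths below are illustrative assumptions, not measured values from the study.

```python
import numpy as np

def tuning(x, best, width):
    """Gaussian tuning curve over a single binaural cue."""
    return np.exp(-0.5 * ((x - best) / width) ** 2)

itd = np.linspace(-200, 200, 201)   # interaural time difference, microseconds
ild = np.linspace(-20, 20, 201)     # interaural level difference, dB
ITD, ILD = np.meshgrid(itd, ild)

# Combination selectivity: the joint receptive field is the product of
# the ITD and ILD tuning, so it peaks only at one cue combination,
# corresponding to one direction in space.
rf = tuning(ITD, 50, 30) * tuning(ILD, 5, 3)

peak_iy, peak_ix = np.unravel_index(rf.argmax(), rf.shape)
best_itd = itd[peak_ix]
best_ild = ild[peak_iy]
```

The multiplicative combination is the key point: a neuron broadly tuned to either cue alone can still be sharply selective for the conjunction, which is how both pathways can arrive at selective spatial receptive fields.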
Affiliation(s)
- M Lucía Pérez
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Rose F. Kennedy Center, Rm. 529, 1410 Pelham Pkwy. S., Bronx, NY 10461, USA
|
50
|
Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009; 180:171-84. [PMID: 19427544] [DOI: 10.1016/j.jneumeth.2009.03.004]
Abstract
Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body-parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measure how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
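The core of the method, scoring each candidate reference frame by how well a non-parametric response-field fit predicts held-out trials, can be sketched with leave-one-out kernel regression on synthetic data. The PRESS computation follows the definition in the abstract; the data, bandwidth, and per-trial "head frame" offsets are invented for illustration, and the full method additionally handles 3D geometry, gain fields, and intermediate frames.

```python
import numpy as np

def press(x, y, bandwidth):
    """Leave-one-out PREdictive Sum-of-Squares for Nadaraya-Watson
    (Gaussian-kernel) regression of rate y on position x."""
    err = 0.0
    for i in range(len(x)):
        w = np.exp(-0.5 * ((x - x[i]) / bandwidth) ** 2)
        w[i] = 0.0                       # leave trial i out of its own fit
        pred = np.dot(w, y) / w.sum()
        err += (y[i] - pred) ** 2
    return err

rng = np.random.default_rng(0)
gaze = rng.uniform(-40, 40, 200)         # gaze displacement per trial
head = gaze + rng.uniform(-15, 15, 200)  # same targets in a "head" frame
# Synthetic neuron whose firing is organized in gaze coordinates:
rate = np.exp(-0.5 * ((gaze - 10) / 12) ** 2) \
       + 0.05 * rng.standard_normal(200)

press_gaze = press(gaze, rate, bandwidth=8.0)
press_head = press(head, rate, bandwidth=8.0)
# The frame yielding the smaller PRESS is the candidate intrinsic frame.
```

This is why the natural trial-to-trial variation in eye and head contributions is useful rather than a nuisance: it decorrelates the candidate frames, so the fit only generalizes well in the frame the neuron actually uses.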
Affiliation(s)
- Gerald P Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J1P3, Canada
|