1. Bharmauria V, Seo S, Crawford JD. Neural integration of egocentric and allocentric visual cues in the gaze system. J Neurophysiol 2025;133:109-120. PMID: 39584726. DOI: 10.1152/jn.00498.2024.
Abstract
A fundamental question in neuroscience is how the brain integrates egocentric (body-centered) and allocentric (landmark-centered) visual cues, but for many years this question was ignored in sensorimotor studies. This changed in recent behavioral experiments, but the underlying physiology of ego/allocentric integration remained largely unstudied. The specific goal of this review is to explain how prefrontal neurons integrate eye-centered and landmark-centered visual codes for optimal gaze behavior. First, we briefly review the whole brain/behavioral mechanisms for ego/allocentric integration in the human and summarize egocentric coding mechanisms in the primate gaze system. We then focus in more depth on cellular mechanisms for ego/allocentric coding in the frontal and supplementary eye fields. We first explain how prefrontal visual responses integrate eye-centered target and landmark codes to produce a transformation toward landmark-centered coordinates. Next, we describe what happens when a landmark shifts during the delay between seeing and acquiring a remembered target, initially resulting in independently coexisting ego/allocentric memory codes. We then describe how these codes are reintegrated in the motor burst for the gaze shift. Deep network simulations suggest that these properties emerge spontaneously for optimal gaze behavior. Finally, we synthesize these observations and relate them to normal brain function through a simplified conceptual model. Together, these results show that integration of visuospatial features continues well beyond visual cortex and suggest a general cellular mechanism for goal-directed visual behavior.
Affiliation(s)
- Vishal Bharmauria
- The Tampa Human Neurophysiology Lab & Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Serah Seo
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- J Douglas Crawford
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
2. Seo S, Bharmauria V, Schütz A, Yan X, Wang H, Crawford JD. Multiunit Frontal Eye Field Activity Codes the Visuomotor Transformation, But Not Gaze Prediction or Retrospective Target Memory, in a Delayed Saccade Task. eNeuro 2024;11:ENEURO.0413-23.2024. PMID: 39054056. PMCID: PMC11373882. DOI: 10.1523/eneuro.0413-23.2024.
Abstract
Single-unit (SU) activity (action potentials isolated from one neuron) has traditionally been employed to relate neuronal activity to behavior. However, recent investigations have shown that multiunit (MU) activity (ensemble neural activity recorded within the vicinity of one microelectrode) may also contain accurate estimations of task-related neural population dynamics. Here, using an established model-fitting approach, we compared the spatial codes of SU response fields with corresponding MU response fields recorded from the frontal eye fields (FEFs) in head-unrestrained monkeys (Macaca mulatta) during a memory-guided saccade task. Overall, both SU and MU populations showed a simple visuomotor transformation: the visual response coded target-in-eye coordinates, transitioning progressively during the delay toward a future gaze-in-eye code in the saccade motor response. However, the SU population showed additional secondary codes, including a predictive gaze code in the visual response and retention of a target code in the motor response. Further, when SUs were separated into regular/fast-spiking neurons, these cell types showed different spatial code progressions during the late delay period, only converging toward gaze coding during the final saccade motor response. Finally, reconstructed MU populations (obtained by summing SU data within the same sites) failed to replicate either the SU or MU pattern. These results confirm the theoretical and practical potential of MU activity recordings as a biomarker for fundamental sensorimotor transformations (e.g., target-to-gaze coding in the oculomotor system), while also highlighting the importance of SU activity for coding more subtle (e.g., predictive/memory) aspects of sensorimotor behavior.
Affiliation(s)
- Serah Seo
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida 33606
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, 35032 Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, 35032 Marburg, and Justus-Liebig-Universität Giessen, Giessen, Germany
- Xiaogang Yan
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Hongying Wang
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
3. González-Rueda A, Jensen K, Noormandipour M, de Malmazet D, Wilson J, Ciabatti E, Kim J, Williams E, Poort J, Hennequin G, Tripodi M. Kinetic features dictate sensorimotor alignment in the superior colliculus. Nature 2024;631:378-385. PMID: 38961292. PMCID: PMC11236723. DOI: 10.1038/s41586-024-07619-2.
Abstract
The execution of goal-oriented behaviours requires a spatially coherent alignment between sensory and motor maps. The current model for sensorimotor transformation in the superior colliculus relies on the topographic mapping of static spatial receptive fields onto movement endpoints (refs. 1-6). Here, to experimentally assess the validity of this canonical static model of alignment, we dissected the visuo-motor network in the superior colliculus and performed in vivo intracellular and extracellular recordings across layers, in restrained and unrestrained conditions, to assess both the motor and the visual tuning of individual motor and premotor neurons. We found that collicular motor units have poorly defined visual static spatial receptive fields and respond instead to kinetic visual features, revealing the existence of a direct alignment in vectorial space between sensory and movement vectors, rather than between spatial receptive fields and movement endpoints as canonically hypothesized. We show that a neural network built according to these kinetic alignment principles is ideally placed to sustain ethological behaviours such as the rapid interception of moving and static targets. These findings reveal a novel dimension of the sensorimotor alignment process. By extending the alignment from the static to the kinetic domain, this work provides a new conceptual framework for understanding the nature of sensorimotor convergence and its relevance in guiding goal-directed behaviours.
Affiliation(s)
- Ana González-Rueda
- MRC Laboratory of Molecular Biology, Cambridge, UK.
- St Edmund's College, University of Cambridge, Cambridge, UK.
- Jisoo Kim
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Jasper Poort
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Guillaume Hennequin
- MRC Laboratory of Molecular Biology, Cambridge, UK
- Department of Engineering, University of Cambridge, Cambridge, UK
4. Liu X, Melcher D, Carrasco M, Hanning NM. Pre-saccadic Preview Shapes Post-Saccadic Processing More Where Perception is Poor. bioRxiv 2024:2023.05.18.541028. PMID: 37292871. PMCID: PMC10245755. DOI: 10.1101/2023.05.18.541028.
Abstract
The pre-saccadic preview of a peripheral target enhances the efficiency of its post-saccadic processing, termed the extrafoveal preview effect. Peripheral visual performance, and thus the quality of the preview, varies around the visual field, even at iso-eccentric locations: it is better along the horizontal than the vertical meridian, and along the lower than the upper vertical meridian. To investigate whether these polar angle asymmetries influence the preview effect, we asked human participants to preview four tilted gratings at the cardinal locations until a central cue indicated to which one to saccade. During the saccade, the target orientation either remained or slightly changed (valid/invalid preview). After saccade landing, participants discriminated the orientation of the (briefly presented) second grating. Stimulus contrast was titrated with adaptive staircases to assess visual performance. As expected, valid previews increased participants' post-saccadic contrast sensitivity. This preview benefit, however, was inversely related to polar angle perceptual asymmetries: it was largest at the upper vertical meridian and smallest at the horizontal meridian. This finding reveals that the visual system compensates for peripheral asymmetries when integrating information across saccades by selectively assigning higher weights to the less-well perceived preview information. Our study supports the recent line of evidence showing that perceptual dynamics around saccades vary with eye movement direction.
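The adaptive staircase procedure mentioned for titrating contrast can be sketched in a few lines; this is only an illustrative 1-up/2-down staircase run against a simulated observer, not the authors' actual procedure, and all parameter values (start contrast, step size, psychometric slope) are invented:

```python
import math
import random

def run_staircase(true_threshold, start=0.8, step=0.05, n_trials=400, seed=1):
    """Minimal 1-up/2-down staircase: contrast decreases after two
    consecutive correct responses and increases after each error."""
    rng = random.Random(seed)
    contrast = start
    streak = 0
    track = []
    for _ in range(n_trials):
        # Hypothetical observer: logistic psychometric function of contrast
        p_correct = 1.0 / (1.0 + math.exp(-(contrast - true_threshold) / 0.02))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:            # two correct in a row -> harder
                contrast = max(0.0, contrast - step)
                streak = 0
        else:                          # any error -> easier
            contrast = min(1.0, contrast + step)
            streak = 0
        track.append(contrast)
    # Estimate threshold as the mean of the converged second half of the track
    return sum(track[len(track) // 2:]) / (len(track) // 2)

threshold_estimate = run_staircase(true_threshold=0.3)
```

The 1-up/2-down rule converges near the ~70.7%-correct point of the observer's psychometric function, which is why such staircases are a standard way to equate task difficulty across visual field locations.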
5. Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023;6:938. PMID: 37704829. PMCID: PMC10499799. DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, which are then tested against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
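The intermediate reference frame analysis described above can be illustrated with a toy version of response field model fitting; this sketch, with entirely invented tuning parameters and synthetic data, only conveys the idea of scoring candidate frames along a fixation-to-landmark continuum, and is far simpler than the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical response field data along a continuum of reference frames
# between fixation-centered (alpha = 0) and landmark-centered (alpha = 1).
n = 250
target_fix = rng.uniform(-15, 15, n)   # target relative to fixation (deg)
landmark = rng.uniform(-10, 10, n)     # landmark relative to fixation (deg)
target_lm = target_fix - landmark      # target relative to landmark

true_alpha = 0.4                       # invented intermediate frame
coded = (1 - true_alpha) * target_fix + true_alpha * target_lm
rate = np.exp(-0.5 * ((coded - 3.0) / 6.0) ** 2) + rng.normal(0, 0.03, n)

def residual(alpha):
    """Tuning fit error when the cell is assumed to code the target in the
    frame indexed by alpha (Gaussian center fitted over a coarse grid)."""
    coord = (1 - alpha) * target_fix + alpha * target_lm
    return min(
        float(np.sum((rate - np.exp(-0.5 * ((coord - c) / 6.0) ** 2)) ** 2))
        for c in np.linspace(-15, 15, 31)
    )

alphas = np.linspace(0.0, 1.0, 21)
alpha_hat = float(alphas[int(np.argmin([residual(a) for a in alphas]))])
```

The best-fitting alpha recovers the cell's position on the continuum; values between 0 and 1 correspond to the intermediate, multiplexed target-landmark frames reported for most cells.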
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
6. Ghaderi A, Niemeier M, Crawford JD. Saccades and presaccadic stimulus repetition alter cortical network topology and dynamics: evidence from EEG and graph theoretical analysis. Cereb Cortex 2023;33:2075-2100. PMID: 35639544. DOI: 10.1093/cercor/bhac194.
Abstract
Parietal and frontal cortex are involved in saccade generation, and their output signals modify visual signals throughout cortex. Local signals associated with these interactions are well described, but their large-scale progression and network dynamics are unknown. Here, we combined source-localized electroencephalography (EEG) and graph theory analysis (GTA) to understand how saccades and presaccadic visual stimuli interactively alter cortical network dynamics in humans. Twenty-one participants viewed 1-3 vertical/horizontal grids, followed by a grid with the opposite orientation, just before a horizontal saccade or continued fixation. EEG signals from the presaccadic interval (or equivalent fixation period) were used for analysis. Source localization-through-time revealed a rapid frontoparietal progression of presaccadic motor signals and stimulus-motor interactions, with additional band-specific modulations in several frontoparietal regions. GTA revealed a saccade-specific functional network with major hubs in inferior parietal cortex (alpha) and the frontal eye fields (beta), and major saccade-repetition interactions in left prefrontal (theta) and supramarginal gyrus (gamma). This network showed enhanced segregation, integration, synchronization, and complexity (compared with fixation), whereas stimulus repetition interactions reduced synchronization and complexity. These cortical results demonstrate a widespread influence of saccades on both regional and network dynamics, likely responsible for both the motor and perceptual aspects of saccades.
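The segregation and integration measures referred to here come from graph theory. As a rough illustration (not the authors' pipeline), two standard measures, mean local clustering (segregation) and global efficiency (integration), can be computed from a toy adjacency matrix; the 4-node network below is invented:

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of a binary undirected graph:
    mean inverse shortest-path length over all node pairs."""
    n = adj.shape[0]
    # Floyd-Warshall shortest paths on unit-weight edges
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    with np.errstate(divide="ignore"):
        inv = 1.0 / dist
    np.fill_diagonal(inv, 0.0)
    return float(inv.sum() / (n * (n - 1)))

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbor pairs
    that are themselves connected, averaged over nodes."""
    coefs = []
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coefs.append(0.0)
            continue
        sub = adj[np.ix_(nbrs, nbrs)]
        coefs.append(float(sub.sum()) / (k * (k - 1)))
    return float(np.mean(coefs))

# Toy network: a triangle (nodes 0-1-2) plus a pendant node 3
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
eff = global_efficiency(adj)        # higher = better integrated
cc = clustering_coefficient(adj)    # higher = more segregated/cliquish
```

Saccade-related enhancement of both measures, as reported in the abstract, would correspond to the functional network becoming simultaneously more cliquish locally and more efficiently connected globally than during fixation.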
Affiliation(s)
- Amirhossein Ghaderi
- Centre for Vision Research, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Vision Science to Applications (VISTA) Program, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada
- Matthias Niemeier
- Centre for Vision Research, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Vision Science to Applications (VISTA) Program, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Department of Psychology, University of Toronto Scarborough, 1265 Military Trail, Scarborough, ON M1C 1A4, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Vision Science to Applications (VISTA) Program, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Department of Biology, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Department of Psychology, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada; Department of Kinesiology and Health Sciences, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada
7. Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022;3:tgac026. PMID: 35909704. PMCID: PMC9334293. DOI: 10.1093/texcom/tgac026.
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN; the CNN output and initial gaze position were input to the MLP; and a decoder transformed the MLP output into saccade vectors. Decoded saccade output replicated both idealized training sets with various allocentric weightings and actual monkey data in which the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric-egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
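The partial landmark influence quantified here (decoded saccades shifting partway with the landmark) can be reduced to a one-parameter sketch; the model and all numbers below are invented for illustration and omit the CNN/MLP stages entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical behavioral data: 2-D saccade endpoints when a landmark
# shifts relative to a remembered target (all values invented).
n = 200
target = rng.uniform(-10, 10, size=(n, 2))   # target in eye coordinates (deg)
shift = rng.uniform(-4, 4, size=(n, 2))      # landmark shift (deg)
true_w = 0.3                                 # partial allocentric influence
gaze = target + true_w * shift + rng.normal(0, 0.5, size=(n, 2))

# Least-squares estimate of w in: gaze = target + w * shift,
# i.e., regress the gaze error (gaze - target) onto the landmark shift.
x = shift.ravel()
y = (gaze - target).ravel()
w_hat = float(x @ y / (x @ x))
```

A weight of 0 would correspond to purely egocentric coding and 1 to purely allocentric coding; intermediate values correspond to the partial allocentric weightings on which the network was trained and tested.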
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
8. Allen KM, Lawlor J, Salles A, Moss CF. Orienting our view of the superior colliculus: specializations and general functions. Curr Opin Neurobiol 2021;71:119-126. PMID: 34826675. PMCID: PMC8996328. DOI: 10.1016/j.conb.2021.10.005.
Abstract
The mammalian superior colliculus (SC) and its non-mammalian homolog, the optic tectum, are implicated in sensorimotor transformations. Historically, emphasis on visuomotor functions of the SC has led to a popular view that it operates as an oculomotor structure rather than a more general orienting structure. In this review, we consider comparative work on the SC/optic tectum, with a particular focus on non-visual sensing and orienting, which reveals a broader perspective on SC functions and their role in species-specific behaviors. We highlight several recent studies that consider ethological context and natural behaviors to advance knowledge of the SC as a site of multisensory integration and motor initiation in diverse species.
Affiliation(s)
- Kathryne M Allen
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Jennifer Lawlor
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Angeles Salles
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Cynthia F Moss
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA; The Solomon Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA; Department of Mechanical Engineering, Whiting School of Engineering, Johns Hopkins University, USA.
9. Caruso VC, Pages DS, Sommer MA, Groh JM. Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. J Neurophysiol 2021;126:82-94. PMID: 33852803. DOI: 10.1152/jn.00385.2020.
Abstract
Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybridness and persistent differences between modalities during most epochs/brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.

NEW & NOTEWORTHY: Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames, neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
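The eye- versus head-centered comparison underlying these results can be caricatured as follows; this sketch fits a fixed-width Gaussian response field in each candidate frame for a single synthetic neuron (all tuning parameters and data invented), whereas the actual study used much richer statistical model comparison across populations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic neuron tuned to target position in EYE-centered coordinates,
# tested from several initial fixation positions (all numbers invented).
n = 300
target_head = rng.uniform(-20, 20, n)        # target relative to head (deg)
eye_pos = rng.choice([-10.0, 0.0, 10.0], n)  # initial eye-in-head position (deg)
target_eye = target_head - eye_pos           # target relative to eye

def tuning(x, center):
    """Fixed-width Gaussian response field."""
    return np.exp(-0.5 * ((x - center) / 8.0) ** 2)

rate = tuning(target_eye, 5.0) + rng.normal(0, 0.05, n)

def fit_error(coord):
    """Best residual error of the tuning model over a grid of centers."""
    return min(float(np.sum((rate - tuning(coord, c)) ** 2))
               for c in np.linspace(-20, 20, 41))

err_eye = fit_error(target_eye)
err_head = fit_error(target_head)
# The candidate frame with the lower residual classifies the cell.
```

Varying initial fixation is what makes the two hypotheses dissociable: an eye-centered field stays put in eye coordinates across fixations, while a head-centered field would shift with them. Hybrid cells, as reported here, would fit an intermediate coordinate better than either extreme.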
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychiatry, University of Michigan, Ann Arbor, Michigan
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
10. Hafed ZM, Chen CY, Tian X, Baumann MP, Zhang T. Active vision at the foveal scale in the primate superior colliculus. J Neurophysiol 2021;125:1121-1138. PMID: 33534661. DOI: 10.1152/jn.00724.2020.
Abstract
The primate superior colliculus (SC) has recently been shown to possess both a large foveal representation and a varied visual processing repertoire. This structure is also known to contribute to eye movement generation. Here, we describe our current understanding of how SC visual and movement-related signals interact within the realm of small eye movements associated with the foveal scale of visuomotor behavior. Within the SC's foveal representation, there is a full spectrum of visual, visual-motor, and motor-related discharge for fixational eye movements. Moreover, a substantial number of neurons only emit movement-related discharge when microsaccades are visually guided, but not when similar movements are generated toward a blank. This represents a particularly striking example of integrating vision and action at the foveal scale. Beyond that, SC visual responses themselves are strongly modulated, and in multiple ways, by the occurrence of small eye movements. Intriguingly, this impact can extend to eccentricities well beyond the fovea, causing both sensitivity enhancement and suppression in the periphery. Because of large foveal magnification of neural tissue, such long-range eccentricity effects are neurally warped into smaller differences in anatomical space, providing a structural means for linking peripheral and foveal visual modulations around fixational eye movements. Finally, even the retinal-image visual flows associated with tiny fixational eye movements are signaled fairly faithfully by peripheral SC neurons with relatively large receptive fields. These results demonstrate how studying active vision at the foveal scale represents an opportunity for understanding primate vision during natural behaviors involving ever-present foveating eye movements.

NEW & NOTEWORTHY: The primate superior colliculus (SC) is ideally suited for active vision at the foveal scale: it enables detailed foveal visual analysis by accurately driving small eye movements, and it also possesses a visual processing machinery that is sensitive to active eye movement behavior. Studying active vision at the foveal scale in the primate SC is informative for broader aspects of active perception, including the overt and covert processing of peripheral extra-foveal visual scene locations.
Affiliation(s)
- Ziad M Hafed
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, Germany; Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, Germany
- Chih-Yang Chen
- Institute for the Advanced Study of Human Biology (WPI-ASHBi), Kyoto University, Kyoto, Japan
- Xiaoguang Tian
- University of Pittsburgh Brain Institute, University of Pittsburgh, Pittsburgh, Pennsylvania
- Matthias P Baumann
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, Germany; Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, Germany
- Tong Zhang
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, Germany; Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, Germany
11. Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021;8:ENEURO.0446-20.2020. PMID: 33318073. PMCID: PMC7877461. DOI: 10.1523/eneuro.0446-20.2020.
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
12
Khanna SB, Scott JA, Smith MA. Dynamic shifts of visual and saccadic signals in prefrontal cortical regions 8Ar and FEF. J Neurophysiol 2020; 124:1774-1791. [PMID: 33026949 DOI: 10.1152/jn.00669.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Active vision is a fundamental process by which primates gather information about the external world. Multiple brain regions have been studied in the context of simple active vision tasks in which a visual target's appearance is temporally separated from saccade execution. Most neurons have tight spatial registration between visual and saccadic signals, and in areas such as prefrontal cortex (PFC), some neurons show persistent delay activity that links visual and motor epochs and has been proposed as a basis for spatial working memory. Many PFC neurons also show rich dynamics, which have been attributed to alternative working memory codes and the representation of other task variables. Our study investigated the transition between processing a visual stimulus and generating an eye movement in populations of PFC neurons in macaque monkeys performing a memory guided saccade task. We found that neurons in two subregions of PFC, the frontal eye fields (FEF) and area 8Ar, differed in their dynamics and spatial response profiles. These dynamics could be attributed largely to shifts in the spatial profile of visual and motor responses in individual neurons. This led to visual and motor codes for particular spatial locations that were instantiated by different mixtures of neurons, which could be important in PFC's flexible role in multiple sensory, cognitive, and motor tasks.
NEW & NOTEWORTHY A central question in neuroscience is how the brain transitions from sensory representations to motor outputs. The prefrontal cortex contains neurons that have long been implicated as important in this transition and in working memory. We found evidence for rich and diverse tuning in these neurons, which was often spatially misaligned between visual and saccadic responses. This feature may play an important role in flexible working memory capabilities.
Affiliation(s)
- Sanjeev B Khanna
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Jonathan A Scott
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Matthew A Smith
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania; Carnegie Mellon Neuroscience Institute, Pittsburgh, Pennsylvania
13
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. [PMID: 32812395 PMCID: PMC7435051 DOI: 10.14814/phy2.14533] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/30/2020] [Accepted: 07/01/2020] [Indexed: 12/13/2022] Open
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
14
Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. [PMID: 32390052 DOI: 10.1093/cercor/bhaa090] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 02/07/2020] [Accepted: 03/23/2020] [Indexed: 12/19/2022] Open
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
Affiliation(s)
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3
15
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. [PMID: 31792117 PMCID: PMC6944480 DOI: 10.1523/eneuro.0359-18.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Revised: 10/12/2019] [Accepted: 10/14/2019] [Indexed: 11/21/2022] Open
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
16
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015 DOI: 10.1152/jn.00072.2019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target.
NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task.
Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
17
Massot C, Jagadisan UK, Gandhi NJ. Sensorimotor transformation elicits systematic patterns of activity along the dorsoventral extent of the superior colliculus in the macaque monkey. Commun Biol 2019; 2:287. [PMID: 31396567 PMCID: PMC6677725 DOI: 10.1038/s42003-019-0527-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2018] [Accepted: 06/27/2019] [Indexed: 12/21/2022] Open
Abstract
The superior colliculus (SC) is an excellent substrate to study sensorimotor transformations. To date, the spatial and temporal properties of population activity along its dorsoventral axis have been inferred from single electrode studies. Here, we recorded SC population activity in non-human primates using a linear multi-contact array during delayed saccade tasks. We show that during the visual epoch, information appeared first in dorsal layers and systematically later in ventral layers. During the delay period, the laminar organization of low-spiking rate activity matched that of the visual epoch. During the pre-saccadic epoch, spiking activity emerged first in a more ventral layer, ~ 100 ms before saccade onset. This buildup of activity appeared later on nearby neurons situated both dorsally and ventrally, culminating in a synchronous burst across the dorsoventral axis, ~ 28 ms before saccade onset. Collectively, these results reveal a principled spatiotemporal organization of SC population activity underlying sensorimotor transformation for the control of gaze.
Affiliation(s)
- Corentin Massot
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Uday K. Jagadisan
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Neeraj J. Gandhi
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260 USA
18
Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. [DOI: 10.1016/j.neuroimage.2019.04.074] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 03/28/2019] [Accepted: 04/27/2019] [Indexed: 11/29/2022] Open
19
Helmbrecht TO, dal Maschio M, Donovan JC, Koutsouli S, Baier H. Topography of a Visuomotor Transformation. Neuron 2018; 100:1429-1445.e4. [DOI: 10.1016/j.neuron.2018.10.021] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2018] [Revised: 08/31/2018] [Accepted: 10/09/2018] [Indexed: 01/07/2023]
20
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. The Influence of a Memory Delay on Spatial Coding in the Superior Colliculus: Is Visual Always Visual and Motor Always Motor? Front Neural Circuits 2018; 12:74. [PMID: 30405361 PMCID: PMC6204359 DOI: 10.3389/fncir.2018.00074] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2018] [Accepted: 08/29/2018] [Indexed: 11/13/2022] Open
Abstract
The memory-delay saccade task is often used to separate visual and motor responses in oculomotor structures such as the superior colliculus (SC), with the assumption that these same responses would sum with a short delay during immediate "reactive" saccades to visual stimuli. However, it is also possible that additional signals (suppression, delay) alter visual and/or motor response in the memory delay task. Here, we compared the spatiotemporal properties of visual and motor responses of the same SC neurons recorded during both the reactive and memory-delay tasks in two head-unrestrained monkeys. Comparing tasks, visual (aligned with target onset) and motor (aligned on saccade onset) responses were highly correlated across neurons, but the peak response of visual neurons and peak motor responses (of both visuomotor (VM) and motor neurons) were significantly higher in the reactive task. Receptive field organization was generally similar in both tasks. Spatial coding (along a Target-Gaze (TG) continuum) was also similar, with the exception that pure motor cells showed a stronger tendency to code future gaze location in the memory delay task, suggesting a more complete transformation. These results suggest that the introduction of a trained memory delay alters both the vigor and spatial coding of SC visual and motor responses, likely due to a combination of saccade suppression signals and greater signal noise accumulation during the delay in the memory delay task.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
21
Wilson JJ, Alexandre N, Trentin C, Tripodi M. Three-Dimensional Representation of Motor Space in the Mouse Superior Colliculus. Curr Biol 2018; 28:1744-1755.e12. [PMID: 29779875 PMCID: PMC5988568 DOI: 10.1016/j.cub.2018.04.021] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2018] [Revised: 03/16/2018] [Accepted: 04/05/2018] [Indexed: 11/23/2022]
Abstract
From the act of exploring an environment to that of grasping a cup of tea, animals must put in register their motor acts with their surrounding space. In the motor domain, this is likely to be defined by a register of three-dimensional (3D) displacement vectors, whose recruitment allows motion in the direction of a target. One such spatially targeted action is seen in the head reorientation behavior of mice, yet the neural mechanisms underlying these 3D behaviors remain unknown. Here, by developing a head-mounted inertial sensor for studying 3D head rotations and combining it with electrophysiological recordings, we show that neurons in the mouse superior colliculus are either individually or conjunctively tuned to the three Eulerian components of head rotation. The average displacement vectors associated with motor-tuned colliculus neurons remain stable over time and are unaffected by changes in firing rate or the duration of spike trains. Finally, we show that the motor tuning of collicular neurons is largely independent from visual or landmark cues. By describing the 3D nature of motor tuning in the superior colliculus, we contribute to long-standing debate on the dimensionality of collicular motor decoding; furthermore, by providing an experimental paradigm for the study of the metric of motor tuning in mice, this study also paves the way to the genetic dissection of the circuits underlying spatially targeted motion.
Highlights:
- Development of inertial sensor system for monitoring 3D head movements in real time
- Neurons in the superior colliculus code for the full dimensionality of head rotations
- Firing rate correlates with velocity, but not head displacement angle
- The spatial tuning of collicular units is largely independent of visual or landmark cues
22
Caruso VC, Pages DS, Sommer MA, Groh JM. Beyond the labeled line: variation in visual reference frames from intraparietal cortex to frontal eye fields and the superior colliculus. J Neurophysiol 2018; 119:1411-1421. [PMID: 29357464 PMCID: PMC5966730 DOI: 10.1152/jn.00584.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 12/16/2017] [Accepted: 12/18/2017] [Indexed: 11/22/2022] Open
Abstract
We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was assessed in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out. NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
23
Disruption of Fixation Reveals Latent Sensorimotor Processes in the Superior Colliculus. J Neurosci 2017; 36:6129-40. [PMID: 27251631 DOI: 10.1523/jneurosci.3685-15.2016] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2015] [Accepted: 04/12/2016] [Indexed: 11/21/2022] Open
Abstract
Executive control of voluntary movements is a hallmark of the mammalian brain. In the gaze-control network, this function is thought to be mediated by a critical balance between neurons responsible for generating movements and those responsible for fixating or suppressing movements, but the nature of this balance between the relevant elements (saccade-generating and fixation-related neurons) remains unclear. Specifically, it has been debated whether the two functions are necessarily coupled (i.e., push-and-pull) or independently controlled. Here we show that behavioral perturbation of ongoing fixation with the trigeminal blink reflex in monkeys (Macaca mulatta) alters the effective balance between fixation and saccade-generating neurons in the superior colliculus (SC) and can lead to premature gaze shifts reminiscent of compromised inhibitory control. The shift in balance is primarily driven by an increase in the activity of visuomovement neurons in the caudal SC, and the extent to which fixation-related neurons in the rostral SC play a role seems to be linked to the animal's propensity to make microsaccades. The perturbation also reveals a hitherto unknown feature of sensorimotor integration: the presence of a hidden visual response in canonical movement neurons. These findings offer new insights into the latent functional interactions, or lack thereof, between components of the gaze-control network, suggesting that the perturbation technique used here may prove to be a useful tool for probing the neural mechanisms of movement generation in executive function and dysfunction.
SIGNIFICANCE STATEMENT Eye movements are an integral part of how we explore the environment. Although we know a great deal about where sensorimotor transformations leading to saccadic eye movements are implemented in the brain, less is known about the functional interactions between neurons that maintain gaze fixation and neurons that program saccades.
In this study, we used a novel approach to study these interactions. By transient disruption of fixation, we found that activity of saccade-generating neurons can increase independently of modulation in fixation-related neurons, which may occasionally lead to premature movements mimicking lack of impulse control. Our findings support the notion of a common pathway for sensory and movement processing and suggest that impulsive movements arise when sensory processes become "motorized."
24
Chen Y, Crawford JD. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An FMRI Study. Front Syst Neurosci 2017; 11:44. [PMID: 28690501 PMCID: PMC5481872 DOI: 10.3389/fnsys.2017.00044] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2017] [Accepted: 06/06/2017] [Indexed: 11/13/2022] Open
Abstract
A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such a landmark-centered representation. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase where only target location, not saccade direction, was specified. The paradigm included three tasks with identical display of visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex as compared to the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in Landmark Saccade task. Gaze-centered directional selectivity was observed in superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. 
Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
Affiliation(s)
- Ying Chen
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
25
Cappadocia DC, Monaco S, Chen Y, Blohm G, Crawford JD. Temporal Evolution of Target Representation, Movement Direction Planning, and Reach Execution in Occipital–Parietal–Frontal Cortex: An fMRI Study. Cereb Cortex 2016; 27:5242-5260. [DOI: 10.1093/cercor/bhw304] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Received: 03/11/2016] [Accepted: 09/08/2016] [Indexed: 11/14/2022] Open
26
Hafed Z, Chen CY. Sharper, Stronger, Faster Upper Visual Field Representation in Primate Superior Colliculus. Curr Biol 2016; 26:1647-1658. [DOI: 10.1016/j.cub.2016.04.059] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Received: 02/12/2016] [Revised: 03/23/2016] [Accepted: 04/22/2016] [Indexed: 10/21/2022]
27
Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation. eNeuro 2016; 3:eN-TNWR-0040-16. [PMID: 27092335 PMCID: PMC4829728 DOI: 10.1523/eneuro.0040-16.2016] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Received: 02/25/2016] [Accepted: 03/23/2016] [Indexed: 01/01/2023] Open
Abstract
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.