1
Kim JJJ, Harris LR. Updating the remembered position of targets following passive lateral translation. PLoS One 2024; 19:e0316469. PMID: 39739643. DOI: 10.1371/journal.pone.0316469.
Abstract
Spatial updating, the ability to track the egocentric position of surrounding objects during self-motion, is fundamental to navigating the world. However, people make systematic errors when updating the position of objects after linear self-motion. To determine the source of these errors, we measured errors in remembered target position with and without passive lateral translations. Self-motion was presented both visually (simulated in virtual reality) and physically (on a 6-DOF motion platform). People generally underestimated targets' eccentricity, even when merely asked to remember them for a few seconds (5-7 s), with larger underestimations for more eccentric targets. As hypothesized, updating errors depended on target eccentricity, but they also depended on the observer's range of movement. When updating the position of targets that lay within the range of movement (such that their actual locations crossed the viewer's midline), people overestimated the targets' change in position relative to the head/body, compared with targets that lay outside the range of movement and therefore did not cross the midline. We interpret these results as revealing changes in the efficacy of spatial updating that depend on participants' perception of self-motion and on the perceptual consequences of targets initially represented in one hemifield having to be reconstructed in the opposite hemifield.
Affiliation(s)
- John J J Kim
- Department of Psychology, York University, Toronto, Ontario, Canada
2
Seo S, Bharmauria V, Schütz A, Yan X, Wang H, Crawford JD. Multiunit Frontal Eye Field Activity Codes the Visuomotor Transformation, But Not Gaze Prediction or Retrospective Target Memory, in a Delayed Saccade Task. eNeuro 2024; 11:ENEURO.0413-23.2024. PMID: 39054056; PMCID: PMC11373882. DOI: 10.1523/eneuro.0413-23.2024.
Abstract
Single-unit (SU) activity (action potentials isolated from one neuron) has traditionally been employed to relate neuronal activity to behavior. However, recent investigations have shown that multiunit (MU) activity (ensemble neural activity recorded within the vicinity of one microelectrode) may also contain accurate estimations of task-related neural population dynamics. Here, using an established model-fitting approach, we compared the spatial codes of SU response fields with corresponding MU response fields recorded from the frontal eye fields (FEFs) in head-unrestrained monkeys (Macaca mulatta) during a memory-guided saccade task. Overall, both SU and MU populations showed a simple visuomotor transformation: the visual response coded target-in-eye coordinates, transitioning progressively during the delay toward a future gaze-in-eye code in the saccade motor response. However, the SU population showed additional secondary codes, including a predictive gaze code in the visual response and retention of a target code in the motor response. Further, when SUs were separated into regular/fast spiking neurons, these cell types showed different spatial code progressions during the late delay period, only converging toward gaze coding during the final saccade motor response. Finally, reconstructing MU populations (by summing SU data within the same sites) failed to replicate either the SU or MU pattern. These results confirm the theoretical and practical potential of MU activity recordings as a biomarker for fundamental sensorimotor transformations (e.g., target-to-gaze coding in the oculomotor system), while also highlighting the importance of SU activity for coding more subtle (e.g., predictive/memory) aspects of sensorimotor behavior.
Affiliation(s)
- Serah Seo
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida 33606
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, 35032 Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, 35032 Marburg, and Justus-Liebig-Universität Giessen, Giessen, Germany
- Xiaogang Yan
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Hongying Wang
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
3
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. PMID: 37940592; PMCID: PMC10634571. DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, integrating (bottom-up) sensory information with (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques such as neuroimaging, virtual reality, and motion tracking allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits: specifically, steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach, extending knowledge from lab to rehab, provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
4
Loh Z, Hall EH, Cronin D, Henderson JM. Working memory control predicts fixation duration in scene-viewing. Psychol Res 2023; 87:1143-1154. PMID: 35879564; PMCID: PMC11129724. DOI: 10.1007/s00426-022-01694-8.
Abstract
When viewing scenes, observers differ in how long they linger at each fixation location and how far they move their eyes between fixations. What factors drive these differences in eye-movement behaviors? Previous work suggests individual differences in working memory capacity may influence fixation durations and saccade amplitudes. In the present study, participants (N = 98) performed two scene-viewing tasks, aesthetic judgment and memorization, while viewing 100 photographs of real-world scenes. Working memory capacity, working memory processing ability, and fluid intelligence were assessed with an operation span task, a memory updating task, and Raven's Advanced Progressive Matrices, respectively. Across participants, we found significant effects of task on both fixation durations and saccade amplitudes. At the level of each individual participant, we also found a significant relationship between memory updating task performance and participants' fixation duration distributions. However, we found no effect of fluid intelligence and no effect of working memory capacity on fixation duration or saccade amplitude distributions, inconsistent with previous findings. These results suggest that the ability to flexibly maintain and update working memory is strongly related to fixation duration behavior.
Affiliation(s)
- Zoe Loh
- Management of Complex Systems Department, University of California Merced, Merced, CA, 95343, USA
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA
- Deborah Cronin
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, Drake University, Des Moines, IA, 50311, USA
- John M Henderson
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA
5
Rahmati M, Curtis CE, Sreenivasan KK. Mnemonic representations in human lateral geniculate nucleus. Front Behav Neurosci 2023; 17:1094226. PMID: 37234404; PMCID: PMC10206025. DOI: 10.3389/fnbeh.2023.1094226.
Abstract
There is a growing appreciation for the role of the thalamus in high-level cognition. Motivated by findings that internal cognitive state drives activity in feedback layers of primary visual cortex (V1) that target the lateral geniculate nucleus (LGN), we investigated the role of LGN in working memory (WM). Specifically, we leveraged model-based neuroimaging approaches to test the hypothesis that human LGN encodes information about spatial locations temporarily encoded in WM. First, we localized and derived a detailed topographic organization in LGN that accords well with previous findings in humans and non-human primates. Next, we used models constructed on the spatial preferences of LGN populations in order to reconstruct spatial locations stored in WM as subjects performed modified memory-guided saccade tasks. We found that population LGN activity faithfully encoded the spatial locations held in memory in all subjects. Importantly, our tasks and models allowed us to dissociate the locations of retinal stimulation and the motor metrics of memory-guided saccades from the maintained spatial locations, thus confirming that human LGN represents true WM information. These findings add LGN to the growing list of subcortical regions involved in WM, and suggest a key pathway by which memories may influence incoming processing at the earliest levels of the visual hierarchy.
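The model-based approach described in this abstract (building models of the spatial preferences of LGN populations, then using them to reconstruct the location held in working memory from activity patterns) follows the general logic of an inverted encoding model. Below is a minimal sketch of that logic on simulated data, assuming an illustrative Gaussian channel basis and least-squares estimation; all names and parameters are hypothetical, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 spatial "channels" with circular Gaussian tuning
# over polar angle, 40 voxels, 160 training trials.
n_chan, n_vox, n_trials = 8, 40, 160
centers = np.linspace(0, 2 * np.pi, n_chan, endpoint=False)

def channel_responses(theta, width=1.0):
    """Idealized tuning curves: Gaussian in circular distance to each center."""
    d = np.angle(np.exp(1j * (theta[:, None] - centers[None, :])))
    return np.exp(-0.5 * (d / width) ** 2)

# Simulate training data: voxel activity = channel responses @ weights + noise.
thetas = rng.uniform(0, 2 * np.pi, n_trials)
C = channel_responses(thetas)                 # trials x channels
W_true = rng.normal(size=(n_chan, n_vox))     # channels x voxels
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_vox))

# Step 1: estimate channel-to-voxel weights by least squares.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)

# Step 2: invert the model on a held-out trial to recover channel
# responses from its voxel activity pattern.
theta_test = np.array([1.2])
b_test = channel_responses(theta_test) @ W_true
c_hat, *_ = np.linalg.lstsq(W_hat.T, b_test.T, rcond=None)

# Decode the remembered location as the population vector of channels.
decoded = np.angle(np.sum(c_hat.ravel() * np.exp(1j * centers)))
print(round(decoded, 2))  # should land near the true 1.2 rad
```

The same two-step structure (fit forward model on training data, invert it on held-out data) is what lets this family of analyses dissociate the stored location from stimulus and motor confounds, since the model can be fit and inverted separately on epochs defined by each.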
Affiliation(s)
- Masih Rahmati
- Department of Psychology, New York University, New York, NY, United States
- Division of Science and Mathematics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Psychiatry, Yale University, New Haven, CT, United States
- Clayton E. Curtis
- Department of Psychology, New York University, New York, NY, United States
- Center for Neural Science, New York University, New York, NY, United States
- Kartik K. Sreenivasan
- Division of Science and Mathematics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
6
Curtis CE, Sprague TC. Persistent Activity During Working Memory From Front to Back. Front Neural Circuits 2021; 15:696060. PMID: 34366794; PMCID: PMC8334735. DOI: 10.3389/fncir.2021.696060.
Abstract
Working memory (WM) extends the duration over which information is available for processing. Given its importance in supporting a wide array of high-level cognitive abilities, uncovering the neural mechanisms that underlie WM has been a primary goal of neuroscience research over the past century. Here, we critically review what we consider the two major "arcs" of inquiry, with a specific focus on findings that were theoretically transformative. For the first arc, we briefly review classic studies that led to the canonical WM theory that cast the prefrontal cortex (PFC) as a central player utilizing persistent activity of neurons as a mechanism for memory storage. We then consider recent challenges to the theory regarding the role of persistent neural activity. The second arc, which evolved over the last decade, stemmed from sophisticated computational neuroimaging approaches enabling researchers to decode the contents of WM from the patterns of neural activity in many parts of the brain, including early visual cortex. We summarize key findings from these studies, their implications for WM theory, and finally the challenges these findings pose. Our goal in doing so is to identify barriers to developing a comprehensive theory of WM that will require a unification of these two "arcs" of research.
Affiliation(s)
- Clayton E. Curtis
- Department of Psychology, New York University, New York, NY, United States
- Center for Neural Science, New York University, New York, NY, United States
- Thomas C. Sprague
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States
7
Ashiri M, Lithgow B, Suleiman A, Mansouri B, Moussavi Z. Electrovestibulography (EVestG) application for measuring vestibular response to horizontal pursuit and saccadic eye movements. Biocybern Biomed Eng 2021. DOI: 10.1016/j.bbe.2021.03.007.
8
Coutinho JD, Lefèvre P, Blohm G. Confidence in predicted position error explains saccadic decisions during pursuit. J Neurophysiol 2020; 125:748-767. PMID: 33356899. DOI: 10.1152/jn.00492.2019.
Abstract
A fundamental problem in motor control is the coordination of complementary movement types to achieve a common goal. As a common example, humans view moving objects through coordinated pursuit and saccadic eye movements. Pursuit is initiated and continuously controlled by retinal image velocity. During pursuit, eye position may lag behind the target. This can be compensated by the discrete execution of a catch-up saccade. The decision to trigger a saccade is influenced by both position and velocity errors, and the timing of saccades can be highly variable. The observed distributions of saccade frequency and trigger time remain poorly understood, and this decision process remains imprecisely quantified. Here, we propose a predictive, probabilistic model explaining the decision to trigger saccades during pursuit to foveate moving targets. In this model, expected position error and its associated uncertainty are predicted through Bayesian inference across noisy, delayed sensory observations (Kalman filtering). This probabilistic prediction is used to estimate the confidence that a saccade is needed (quantified through log-probability ratio), triggering a saccade upon accumulating to a fixed threshold. The model qualitatively explains behavioral observations on the frequency and trigger time distributions of saccades during pursuit over a range of target motion trajectories. Furthermore, this model makes novel predictions that saccade decisions are highly sensitive to uncertainty for small predicted position errors, but this influence diminishes as the magnitude of predicted position error increases. We suggest that this predictive, confidence-based decision-making strategy represents a fundamental principle for the probabilistic neural control of coordinated movements.

New & Noteworthy: This is the first stochastic dynamical systems model of pursuit-saccade coordination accounting for noise and delays in the sensorimotor system. The model uses Bayesian inference to predictively estimate visual motion, triggering saccades when confidence in predicted position error accumulates to a threshold. This model explains saccade frequency and trigger time distributions across target trajectories and makes novel predictions about the influence of sensory uncertainty on saccade decisions during pursuit.
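The decision scheme summarized above combines Kalman-filter prediction of position error with accumulation of a log-probability ratio to a fixed bound. A toy 1-D sketch of that scheme follows; every parameter (noise variances, pursuit gain, criterion, bound) is chosen for illustration and is not a fitted value from the paper:

```python
import math
import random

random.seed(1)

# Hypothetical 1-D pursuit: the eye tracks at 90% of target velocity,
# so position error (PE) grows steadily until a catch-up saccade.
dt, v, gain = 0.01, 10.0, 0.9          # time step (s), target vel (deg/s), pursuit gain
q, r = 1e-4, 0.5                        # process / observation noise variances
criterion = 0.5                         # PE (deg) considered worth a saccade
threshold = 5.0                         # evidence bound on the log-probability ratio

pe_true, pe_hat, p = 0.0, 0.0, 1.0      # true PE, estimated PE, estimate variance
evidence, t_trigger = 0.0, None

for step in range(1, 301):              # simulate 3 s of pursuit
    pe_true += (1 - gain) * v * dt
    z = pe_true + random.gauss(0.0, math.sqrt(r))   # noisy retinal observation

    # Kalman filter: predict PE forward, then correct with the observation.
    pe_pred = pe_hat + (1 - gain) * v * dt
    p_pred = p + q
    k = p_pred / (p_pred + r)
    pe_hat = pe_pred + k * (z - pe_pred)
    p = (1 - k) * p_pred

    # Confidence that predicted PE exceeds the criterion, expressed as a
    # log-probability ratio and accumulated toward a fixed bound.
    z_score = (abs(pe_hat) - criterion) / math.sqrt(p)
    prob = 0.5 * (1 + math.erf(z_score / math.sqrt(2)))
    prob = min(max(prob, 1e-9), 1 - 1e-9)
    evidence += math.log(prob / (1 - prob)) * dt
    if evidence >= threshold and t_trigger is None:
        t_trigger = step * dt           # catch-up saccade triggered here

print(t_trigger)
```

Because evidence is negative while the predicted error sits below the criterion and only turns positive once it exceeds it, the trigger time is both delayed relative to the criterion crossing and sensitive to the estimate's uncertainty, which is the qualitative behavior the model uses to explain trigger-time distributions.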
Affiliation(s)
- Jonathan D Coutinho
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Philippe Lefèvre
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
9
Spatially Specific Working Memory Activity in the Human Superior Colliculus. J Neurosci 2020; 40:9487-9495. PMID: 33115927; PMCID: PMC7724141. DOI: 10.1523/jneurosci.2016-20.2020.
Abstract
Theoretically, working memory (WM) representations are encoded by the population activity of neurons with distributed tuning across the stored feature. Here, we leverage computational neuroimaging approaches to map the topographic organization of human superior colliculus (SC) and model how population activity in SC encodes WM representations. We first modeled receptive field properties of voxels in SC, deriving a detailed topographic organization resembling that of the primate SC. Neural activity within the SC of human subjects (5 male, 1 female) persisted throughout the retention interval of several types of modified memory-guided saccade tasks. Assuming an underlying neural architecture of the SC based on its retinotopic organization, we used an encoding model to show that the pattern of activity in human SC represents locations stored in WM. Our tasks and models allowed us to dissociate the locations of visual targets and the motor metrics of memory-guided saccades from the spatial locations stored in WM, thus confirming that human SC represents true WM information. These data have several important implications. They add the SC to a growing number of cortical and subcortical brain areas that form distributed networks supporting WM functions. Moreover, they specify a clear neural mechanism by which the topographically organized SC encodes WM representations.

Significance Statement: Using computational neuroimaging approaches, we mapped the topographic organization of human superior colliculus (SC) and modeled how population activity in SC encodes working memory (WM) representations, rather than the simpler visual or motor properties traditionally associated with the laminar maps in the primate SC. Together, these data position the human SC within a distributed network of brain areas supporting WM and elucidate the neural mechanisms by which the SC supports WM.
10
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. PMID: 32812395; PMCID: PMC7435051. DOI: 10.14814/phy2.14533.
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement it with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
11
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. PMID: 31792117; PMCID: PMC6944480. DOI: 10.1523/eneuro.0359-18.2019.
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
12
Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. DOI: 10.1016/j.neuroimage.2019.04.074.
13
White BJ, Itti L, Munoz DP. Superior colliculus encodes visual saliency during smooth pursuit eye movements. Eur J Neurosci 2019; 54:4258-4268. PMID: 31077473. DOI: 10.1111/ejn.14432.
Abstract
The saliency map has played a long-standing role in models and theories of visual attention, and it is now supported by neurobiological evidence from several cortical and subcortical brain areas. While visual saliency is computed during moments of active fixation, it is not known whether the same is true during smooth pursuit of a moving stimulus, which is very common in real-world vision. Here, we examined extrafoveal saliency coding in the superior colliculus (SC), a midbrain area associated with attention and gaze, during smooth pursuit eye movements. We found that SC neurons from the superficial visual layers showed a robust representation of peripheral saliency evoked by a conspicuous stimulus embedded in a wide-field array of goal-irrelevant stimuli. In contrast, visuomotor neurons from the intermediate saccade-related layers showed a poor saliency representation, even though most of these neurons were visually responsive during smooth pursuit. These results confirm and extend previous findings that place the SC in a unique role as a saliency map that monitors peripheral vision during foveation of stationary, and now moving, objects.
Affiliation(s)
- Brian J White
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Laurent Itti
- Department of Computer Science, University of Southern California, Los Angeles, California
- Douglas P Munoz
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
14
Abstract
After exposure to visual input in the first year of life, the brain undergoes subtle but massive changes that appear crucial for communicative, emotional, and social human development. Their absence could explain the very high prevalence of autism in children with total congenital blindness. The present theory postulates that the superior colliculus (SC) is the key structure for such changes, for several reasons: it dominates visual behavior during the first months of life; it is ready at birth for complex visual tasks; it has a significant influence on several hemispheric regions; it is the main brain hub that permanently integrates visual and non-visual, external and internal information (bottom-up and top-down, respectively); and it possesses the enigmatic ability to make non-conscious decisions about where to focus attention. It is also a sentinel that triggers the subcortical mechanisms that drive social motivation to follow faces from birth and to react automatically to emotional stimuli. Through indirect connections it also simultaneously activates several cortical structures necessary to develop social cognition and to accomplish the multiattentional task required for conscious social interaction in real-life settings. Genetic or non-genetic prenatal or early postnatal factors could disrupt SC functions, resulting in autism. The timing of postnatal biological disruption matches the timing of clinical autism manifestations. Astonishing coincidences between etiologies, clinical manifestations, and cognitive and pathogenic autism theories on one side and SC functions on the other are disclosed in this review. Although the visual system dependent on the SC is usually considered accessory to the canonical LGN pathway, its imprinting gives the brain qualitatively specific functions not supplied by any other brain structure.
Affiliation(s)
- Rubin Jure
- Centro Privado de Neurología y Neuropsicología Infanto Juvenil WERNICKE, Córdoba, Argentina
15
Leong ATL, Dong CM, Gao PP, Chan RW, To A, Sanes DH, Wu EX. Optogenetic auditory fMRI reveals the effects of visual cortical inputs on auditory midbrain response. Sci Rep 2018; 8:8736. PMID: 29880842; PMCID: PMC5992211. DOI: 10.1038/s41598-018-26568-1.
Abstract
Sensory cortices contain extensive descending (corticofugal) pathways, yet their impact on brainstem processing, particularly across sensory systems, remains poorly understood. In the auditory system, the inferior colliculus (IC) in the midbrain receives cross-modal inputs from the visual cortex (VC). However, the influences from VC on auditory midbrain processing are unclear. To investigate whether and how visual cortical inputs affect IC auditory responses, the present study combines auditory blood-oxygenation-level-dependent (BOLD) functional MRI (fMRI) with cell-type specific optogenetic manipulation of visual cortex. The results show that predominant optogenetic excitation of the excitatory pyramidal neurons in the infragranular layers of the primary VC enhances the noise-evoked BOLD fMRI responses within the IC. This finding reveals that inputs from VC influence and facilitate basic sound processing in the auditory midbrain. Such a combined optogenetic and auditory fMRI approach can shed light on the large-scale modulatory effects of corticofugal pathways and guide detailed electrophysiological studies in the future.
Affiliation(s)
- Alex T L Leong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Celia M Dong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Russell W Chan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Anthea To
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY, 10003, United States
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
|
16
|
Spatial localization of sound elicits early responses from occipital visual cortex in humans. Sci Rep 2017; 7:10415. [PMID: 28874681 PMCID: PMC5585168 DOI: 10.1038/s41598-017-09142-z] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2017] [Accepted: 07/20/2017] [Indexed: 11/08/2022] Open
Abstract
Much evidence points to an interaction between vision and audition at early cortical sites. However, the functional role of these interactions is not yet understood. Here we show an early response of the occipital cortex to sound that is strongly linked to the spatial localization task performed by the observer. The early occipital response to a sound, usually absent, increased more than 10-fold when the sound was presented during a space localization task, but not during a time localization task. The response amplification was not only specific to the task but, surprisingly, also to the position of the stimulus in the two hemifields. We suggest that early occipital processing of sound is linked to the construction of an audio spatial map that may utilize the visual map of the occipital cortex.
|
17
|
Schut MJ, Fabius JH, Van der Stoep N, Van der Stigchel S. Object files across eye movements: Previous fixations affect the latencies of corrective saccades. Atten Percept Psychophys 2017; 79:138-153. [PMID: 27743259 PMCID: PMC5179592 DOI: 10.3758/s13414-016-1220-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
One of the factors contributing to a seamless visual experience is object correspondence-that is, the integration of pre- and postsaccadic visual object information into one representation. Previous research had suggested that before the execution of a saccade, a target object is loaded into visual working memory and subsequently is used to locate the target object after the saccade. Until now, studies on object correspondence have not taken previous fixations into account. In the present study, we investigated the influence of previously fixated information on object correspondence. To this end, we adapted a gaze correction paradigm in which a saccade was executed toward either a previously fixated or a novel target. During the saccade, the stimuli were displaced such that the participant's gaze landed between the target stimulus and a distractor. Participants then executed a corrective saccade to the target. The results indicated that these corrective saccades had lower latencies toward previously fixated than toward nonfixated targets, indicating object-specific facilitation. In two follow-up experiments, we showed that presaccadic spatial and object (surface feature) information can contribute separately to the execution of a corrective saccade, as well as in conjunction. Whereas the execution of a corrective saccade to a previously fixated target object at a previously fixated location is slowed down (i.e., inhibition of return), corrective saccades toward either a previously fixated target object or a previously fixated location are facilitated. We concluded that corrective saccades are executed on the basis of object files rather than of unintegrated feature information.
Affiliation(s)
- Martijn J Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands.
- Jasper H Fabius
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
|
18
|
Abstract
More than 35 years ago, Meltzoff and Moore (1977) published their famous article, “Imitation of facial and manual gestures by human neonates.” Their central conclusion, that neonates can imitate, was and continues to be controversial. Here, we focus on an often-neglected aspect of this debate, namely, neonatal spontaneous behaviors themselves. We present a case study of a paradigmatic orofacial “gesture,” namely tongue protrusion and retraction (TP/R). Against the background of new research on mammalian aerodigestive development, we ask: How does the human aerodigestive system develop, and what role does TP/R play in the neonate's emerging system of aerodigestion? We show that mammalian aerodigestion develops in two phases: (1) from the onset of isolated orofacial movements in utero to the postnatal mastery of suckling at 4 months after birth; and (2) thereafter, from preparation to the mastery of mastication and deglutition of solid foods. Like other orofacial stereotypies, TP/R emerges in the first phase and vanishes prior to the second. Based upon recent advances in activity-driven early neural development, we suggest a sequence of three developmental events in which TP/R might participate: the acquisition of tongue control, the integration of the central pattern generator (CPG) for TP/R with other aerodigestive CPGs, and the formation of connections within the cortical maps of S1 and M1. If correct, orofacial stereotypies are crucial to the maturation of aerodigestion in the neonatal period but also unlikely to co-occur with imitative behavior.
|
19
|
Hafed Z, Chen CY. Sharper, Stronger, Faster Upper Visual Field Representation in Primate Superior Colliculus. Curr Biol 2016; 26:1647-1658. [DOI: 10.1016/j.cub.2016.04.059] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Revised: 03/23/2016] [Accepted: 04/22/2016] [Indexed: 10/21/2022]
|
20
|
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. [PMID: 27242452 PMCID: PMC4867689 DOI: 10.3389/fnsys.2016.00039] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2015] [Accepted: 04/19/2016] [Indexed: 12/02/2022] Open
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM, implemented as a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
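The core quantity being tracked here can be illustrated in a few lines. This is a deliberately simplified linear sketch of gaze-centered updating, not the paper's recurrent radial-basis-function network or dual-EKF estimator; the function and variable names are hypothetical.

```python
def update_remembered_target(target_eye, eye_displacement):
    """Gaze-centered spatial updating: after an eye movement, a remembered
    target's eye-centered coordinates shift opposite to the displacement.

    target_eye: [horizontal, vertical] position of the memory, in degrees,
    relative to the fovea; eye_displacement: the eye movement, in degrees.
    """
    return [t - d for t, d in zip(target_eye, eye_displacement)]

# A target remembered 5 deg right of fixation, then a 3-deg rightward saccade:
updated = update_remembered_target([5.0, 0.0], [3.0, 0.0])
# the memory now points 2 deg right of the new fixation direction
```

During smooth pursuit the same subtraction would be applied incrementally at each time step, which is the continuous updating the full model reproduces with added noise and uncertainty handling.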
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
|
21
|
Dash S, Nazari SA, Yan X, Wang H, Crawford JD. Superior Colliculus Responses to Attended, Unattended, and Remembered Saccade Targets during Smooth Pursuit Eye Movements. Front Syst Neurosci 2016; 10:34. [PMID: 27147987 PMCID: PMC4828430 DOI: 10.3389/fnsys.2016.00034] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 03/30/2016] [Indexed: 11/16/2022] Open
Abstract
In realistic environments, keeping track of multiple visual targets during eye movements likely involves an interaction between vision, top-down spatial attention, memory, and self-motion information. Recently we found that the superior colliculus (SC) visual memory response is attention-sensitive and continuously updated relative to gaze direction. In that study, animals were trained to remember the location of a saccade target across an intervening smooth pursuit (SP) eye movement (Dash et al., 2015). Here, we modified this paradigm to directly compare the properties of visual and memory updating responses to attended and unattended targets. Our analysis shows that during SP, active SC visual vs. memory updating responses share similar gaze-centered spatio-temporal profiles (suggesting a common mechanism), but updating was weaker by ~25%, delayed by ~55 ms, and far more dependent on attention. Further, during SP the sum of passive visual responses (to distracter stimuli) and memory updating responses (to saccade targets) closely resembled the responses for active attentional tracking of visible saccade targets. These results suggest that SP updating signals provide a damped, delayed estimate of attended location that contributes to the gaze-centered tracking of both remembered and visible saccade targets.
Affiliation(s)
- Suryadeep Dash
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- Xiaogang Yan
- Center for Vision Research, York University, Toronto, ON, Canada
- Hongying Wang
- Center for Vision Research, York University, Toronto, ON, Canada
- J Douglas Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, Biology and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
|
22
|
Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation. eNeuro 2016; 3:eN-TNWR-0040-16. [PMID: 27092335 PMCID: PMC4829728 DOI: 10.1523/eneuro.0040-16.2016] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 03/23/2016] [Indexed: 01/01/2023] Open
Abstract
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
|
23
|
Tramper JJ, Medendorp WP. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion. J Neurophysiol 2015; 114:3211-9. [PMID: 26490289 DOI: 10.1152/jn.00576.2015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2015] [Accepted: 10/21/2015] [Indexed: 11/22/2022] Open
Abstract
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
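The reliability-weighted combination described in this abstract corresponds to standard inverse-variance (maximum-likelihood) cue integration. Below is a minimal sketch assuming independent Gaussian errors in each reference frame; the function name and example numbers are illustrative, not the authors':

```python
def integrate_estimates(means, variances):
    """Combine position estimates (e.g., eye- and body-centered) with
    weights proportional to each estimate's reliability (1/variance)."""
    inv = [1.0 / v for v in variances]        # reliabilities
    total = sum(inv)
    weights = [w / total for w in inv]        # normalized weights
    combined_mean = sum(w * m for w, m in zip(weights, means))
    combined_var = 1.0 / total                # never exceeds the best single estimate
    return combined_mean, combined_var

# Eye-centered estimate: 10 deg with variance 4; body-centered: 12 deg with variance 1
mean, var = integrate_estimates([10.0, 12.0], [4.0, 1.0])
# mean = 11.6 (pulled toward the more reliable body-centered estimate), var = 0.8
```

The key property, which motivates keeping both representations in sync, is that the combined variance is always smaller than that of either single-frame estimate.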
Affiliation(s)
- J J Tramper
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W P Medendorp
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
|
24
|
Stanford T. Vision: A Moving Hill for Spatial Updating on the Fly. Curr Biol 2015; 25:R115-R117. [DOI: 10.1016/j.cub.2014.12.025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|