1. Rao HM, San Juan J, Shen FY, Villa JE, Rafie KS, Sommer MA. Neural Network Evidence for the Coupling of Presaccadic Visual Remapping to Predictive Eye Position Updating. Front Comput Neurosci 2016; 10:52. PMID: 27313528; PMCID: PMC4889583; DOI: 10.3389/fncom.2016.00052.
Abstract
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
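The core computation the model learns can be captured in a few lines. Below is a minimal sketch (not the authors' code; the 1-D geometry and all values are illustrative assumptions) of how a retinotopic target signal, remapped by corollary discharge, must be paired with a predictively updated eye position signal for a space-referenced pointing vector to stay accurate across a saccade:

```python
target_space = 12.0        # target azimuth in space (deg), fixed in the scene
eye_pos = 0.0              # current eye position (deg)
saccade_cd = 8.0           # corollary discharge: upcoming saccade vector (deg)

def pointing_vector(target_retinal, eye_position):
    # 1-D approximation of the coordinate transform: space = retinal + eye-in-head.
    return target_retinal + eye_position

# Before the saccade: ordinary visual coding points correctly.
target_retinal = target_space - eye_pos
print(pointing_vector(target_retinal, eye_pos))          # 12.0

# Presaccadic remapping: CD shifts the retinal representation before the eye
# moves; pointing stays accurate only if eye position also updates
# predictively, which is the coupling the model highlights.
remapped_retinal = target_retinal - saccade_cd
predicted_eye = eye_pos + saccade_cd
print(pointing_vector(remapped_retinal, predicted_eye))  # still 12.0

# Remapping without the predictive eye position update would transiently err:
print(pointing_vector(remapped_retinal, eye_pos))        # 4.0 (pointing error)
```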
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Juan San Juan
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Fred Y Shen
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Jennifer E Villa
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Kimia S Rafie
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA

2. Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. PMID: 25475344; PMCID: PMC4346721; DOI: 10.1152/jn.00273.2014.
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli yet give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
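As a worked example of the geometric problem the network solves, the sketch below (illustrative values, not the published model) rotates a 2-D retinal motion sample by the eye's torsion in space, the simplest case of the eye/head-dependent transformation described above; the paper's model handles full 3-D rotations and velocities:

```python
import numpy as np

def rot2d(angle_deg):
    # Rotation of a 2-D vector about the line of sight.
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

retinal_motion = np.array([5.0, 0.0])  # target motion on the retina (deg/s)
head_roll = 30.0                       # head roll angle (deg)
counterroll = -0.1 * head_roll         # partial ocular compensation (assumed gain)

# The eye's net torsion in space rotates the retinal image, so the motor
# command must be counter-rotated to be spatially correct.
torsion = head_roll + counterroll
spatial_motion = rot2d(torsion) @ retinal_motion
print(spatial_motion)                  # spatially correct pursuit velocity
```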
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)

3. Continuous updating of visuospatial memory in superior colliculus during slow eye movements. Curr Biol 2015; 25:267-274. PMID: 25601549; DOI: 10.1016/j.cub.2014.11.064.
Abstract
BACKGROUND: Primates can remember and spatially update the visual direction of previously viewed objects during various types of self-motion. It is known that the brain "remaps" visual memory traces relative to gaze just before and after, but not during, discrete gaze shifts called saccades. However, it is not known how visual memory is updated during slow, continuous motion of the eyes. RESULTS: Here, we recorded the midbrain superior colliculus (SC) of two rhesus monkeys that were trained to spatially update the location of a saccade target across an intervening smooth pursuit (SP) eye movement. Saccade target location was varied across trials so that it passed through the neuron's receptive field at different points of the SP trajectory. Nearly all (99%) visually responsive neurons, but no motor neurons, showed a transient memory response that continuously updated the saccade goal during SP. These responses were gaze centered (i.e., shifting across the SC's retinotopic map in opposition to gaze). Furthermore, this response was strongly enhanced by attention and/or saccade target selection. CONCLUSIONS: This is the first demonstration of continuous updating of visual memory responses during eye motion. We expect that this would generalize to other visuomotor structures when gaze shifts in a continuous, unpredictable fashion.
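The updating scheme these recordings imply can be stated compactly: a space-fixed saccade goal must drift across a gaze-centered map in opposition to gaze, continuously rather than in one saccadic jump. A 1-D illustrative sketch (not from the paper; all values assumed):

```python
goal_space = 10.0        # remembered saccade goal, fixed in space (deg)
pursuit_speed = 15.0     # smooth pursuit velocity (deg/s)
dt = 0.01                # time step (s)

gaze = 0.0
for step in range(50):                  # 0.5 s of smooth pursuit
    gaze += pursuit_speed * dt
    goal_retinal = goal_space - gaze    # memory trace drifts opposite to gaze

# After 0.5 s the trace has moved 7.5 deg across the gaze-centered map.
print(round(goal_retinal, 2))           # 2.5
```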

4. Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012; 7:e41241. PMID: 22815979; PMCID: PMC3397995; DOI: 10.1371/journal.pone.0041241.
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict eye-, head- and vergence-dependent changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
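For intuition about the binocular geometry involved, the sketch below triangulates fixation depth from the vergence angle, assuming symmetric vergence and a nominal interocular distance; note the paper reports that the network uses vergence signals for gain modulation of relative distances rather than computing absolute depth explicitly this way:

```python
import numpy as np

IOD = 0.065                  # interocular distance (m), assumed nominal value
vergence = np.radians(4.0)   # vergence angle at fixation (rad)

# Symmetric-vergence triangulation: each eye is IOD/2 off the midline and
# rotated inward by vergence/2 toward the fixation point.
fixation_depth = (IOD / 2) / np.tan(vergence / 2)
print(round(fixation_depth, 3))   # ~0.931 m
```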

5. Keith GP, Blohm G, Crawford JD. Influence of saccade efference copy on the spatiotemporal properties of remapping: a neural network study. J Neurophysiol 2009; 103:117-39. PMID: 19846615; DOI: 10.1152/jn.91191.2008.
Abstract
Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and the backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade target in two-dimensional (2-D) topographic coordinates (Vtop), a temporally extended motor burst in 2-D topographic coordinates (Mtop), or a 3-D eye velocity signal in brain stem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and final dynamics of remapping. Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating that should be further distinguishable through currently available single- and multiunit recording paradigms.
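The "hill of activity" updating at the heart of these models can be illustrated on a toy 1-D map (illustrative code, not the published network). Here the efference copy displaces the hill in a single step, a "jumping hill"; the paper distinguishes jumping versus moving hills depending on the updater signal's time course:

```python
import numpy as np

positions = np.linspace(-40, 40, 81)   # 1-D topographic map (deg)

def hill(center, width=5.0):
    # Gaussian hill of activity centered on the target's retinal position.
    return np.exp(-0.5 * ((positions - center) / width) ** 2)

target_retinal = 10.0
saccade = 20.0                          # efference copy of the saccade vector

pre = hill(target_retinal)              # activity before the saccade
post = hill(target_retinal - saccade)   # remapped activity after updating
print(positions[pre.argmax()], positions[post.argmax()])   # 10.0 -10.0
```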
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research and Canadian Institutes of Health Research Group, York University, 4700 Keele St., Toronto, Ontario, Canada

6. Trans-saccadic perception. Trends Cogn Sci 2008; 12:466-73. PMID: 18951831; DOI: 10.1016/j.tics.2008.09.003.
Abstract
A basic question in cognition is how visual information obtained in separate glances can produce a stable, continuous percept. Previous explanations have included theories such as integration in a trans-saccadic buffer or storage in visual memory, or even that perception begins anew with each fixation. Converging evidence from primate neurophysiology, human psychophysics and neuroimaging indicates an additional explanation: the intention to make a saccadic eye movement leads to a fundamental alteration in visual processing itself before and after the saccadic eye movement. We outline five principles of 'trans-saccadic perception' that could help to explain how it is possible - despite discrete sensory input and limited memory - that conscious perception across saccades seems smooth and predictable.

7. Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex 2009; 19:1372-93. PMID: 18842662; DOI: 10.1093/cercor/bhn177.
Abstract
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
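A single hidden-layer unit of the kind described, a Gaussian retinal receptive field multiplicatively scaled by a linear-rectified eye-position gain, can be sketched as follows (all parameter values are illustrative assumptions, not fitted to the paper):

```python
import numpy as np

def unit_response(retinal_pos, eye_pos,
                  rf_center=np.array([5.0, 0.0]), rf_width=8.0,
                  gain_slope=np.array([0.02, 0.0]), gain_offset=1.0):
    # Gaussian receptive field over 2-D retinal position.
    rf = np.exp(-0.5 * np.sum(((retinal_pos - rf_center) / rf_width) ** 2))
    # Planar, rectified eye-position gain field.
    gain = max(gain_offset + gain_slope @ eye_pos, 0.0)
    return rf * gain

# Same retinal stimulus at two eye positions: same RF, different gain.
stim = np.array([5.0, 0.0])
print(unit_response(stim, np.array([-20.0, 0.0])))   # 0.6 (lower gain)
print(unit_response(stim, np.array([ 20.0, 0.0])))   # 1.4 (higher gain)
```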
Affiliation(s)
- Gunnar Blohm
- Centre for Vision Research, York University, Toronto, Ontario, Canada

8. Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008; 156:801-18. PMID: 18786618; DOI: 10.1016/j.neuroscience.2008.07.079.
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.

9. Keith GP, Crawford JD. Saccade-related remapping of target representations between topographic maps: a neural network study. J Comput Neurosci 2007; 24:157-78. PMID: 17636448; DOI: 10.1007/s10827-007-0046-6.
Abstract
The goal of this study was to explore how a neural network could solve the updating task associated with the double-saccade paradigm, where two targets are flashed in succession and the subject must make saccades to the remembered locations of both targets. Because of the eye rotation of the saccade to the first target, the remembered retinal position of the second target must be updated if an accurate saccade to that target is to be made. We trained a three-layer, feed-forward neural network to solve this updating task using back-propagation. The network's inputs were the initial retinal position of the second target represented by a hill of activation in a 2D topographic array of units, as well as the initial eye orientation and the motor error of the saccade to the first target, each represented as 3D vectors in brainstem coordinates. The output of the network was the updated retinal position of the second target, also represented in a 2D topographic array of units. The network was trained to perform this updating using the full 3D geometry of eye rotations, and was able to produce the updated second-target position to within 1 degree RMS accuracy for a set of test points that included saccades of up to 70 degrees. Emergent properties in the network's hidden layer included sigmoidal receptive fields whose orientations formed distinct clusters, and predictive remapping similar to that seen in brain areas associated with saccade generation. Networks with larger numbers of hidden-layer units developed two distinct types of units with different transformation properties: units that preferentially performed the linear remapping of vector subtraction, and units that performed the nonlinear elements of remapping that arise from initial eye orientation.
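To see why full 3-D updating is not pure vector subtraction, the toy example below (illustrative numbers; the real geometry requires full 3-D rotation machinery such as quaternions or rotation vectors) compares the linear vector-subtraction estimate with an update that includes an eye-rotation-induced rotation of the retinal image:

```python
import numpy as np

def rot2d(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

second_target = np.array([20.0, 15.0])  # second target, retinal, pre-saccade (deg)
saccade1 = np.array([30.0, 0.0])        # first-saccade motor error (deg)

# Linear estimate: pure vector subtraction.
linear_update = second_target - saccade1

# Stand-in for the nonlinear term: a small rotation of the retinal image
# induced by the 3-D eye rotation (magnitude assumed for illustration).
induced_roll = 4.0                      # deg
rotated_update = rot2d(induced_roll) @ second_target - saccade1

print(linear_update)                    # [-10.  15.]
print(rotated_update)                   # differs from the linear estimate
print(np.linalg.norm(rotated_update - linear_update))   # ~1.7 deg of error
```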
Affiliation(s)
- Gerald P Keith
- York Centre for Vision Research, CIHR Group for Action and Perception, Department of Psychology, York University, Toronto, ON M3J 1P3, Canada.

10. Cassanello CR, Ferrera VP. Computing vector differences using a gain field-like mechanism in monkey frontal eye field. J Physiol 2007; 582:647-64. PMID: 17510192; PMCID: PMC2075335; DOI: 10.1113/jphysiol.2007.128801.
Abstract
Signals related to eye position are essential for visual perception and eye movements, and are powerful modulators of sensory responses in many regions of the visual and oculomotor systems. We show that visual and pre-saccadic responses of frontal eye field (FEF) neurons are modulated by initial eye position in a way suggestive of a multiplicative mechanism (gain field). Furthermore, the slope of the eye position sensitivity tends to be negatively correlated with preferred retinal position across the population. A model with Gaussian visual receptive fields and linear-rectified eye position gain fields accounts for a large portion of the variance in the recorded data. Using physiologically derived parameters, this model is able to subtract the gaze shift from the vector representing the retinal location of the target. This computation might be used to maintain a memory of target location in space during ongoing eye movements. This updated spatial memory can be read directly from the locus of the peak of activity across the retinotopic map of FEF and it is the result of a vector subtraction between retinal target location when flashed and subsequent eye displacement in the dark.
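The population readout proposed here can be sketched directly (all parameters invented for illustration, not the physiologically derived values): when eye-position gain slopes are negatively correlated with preferred retinal position, as reported, the peak of activity across the retinotopic map shifts from the flashed target's retinal location toward the vector difference between target and gaze shift:

```python
import numpy as np

prefs = np.linspace(-40, 40, 161)   # preferred retinal positions (deg)
target = 10.0                       # flashed target, retinal location (deg)
eye_shift = 25.0                    # subsequent gaze shift in the dark (deg)

rf = np.exp(-0.5 * ((target - prefs) / 10.0) ** 2)   # Gaussian visual drive
slopes = -0.01 * prefs                               # slope anti-correlated with preference
gain = np.maximum(1.0 + slopes * eye_shift, 0.0)     # linear-rectified gain field

activity = rf * gain
# Peak moves away from 10 toward 10 - 25, i.e., toward the vector difference.
print(prefs[activity.argmax()])
```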
Affiliation(s)
- Carlos R Cassanello
- Center for Neurobiology and Behavior, Department of Psychiatry, Columbia University, 1051 Riverside Drive, Kolb Annex 504, New York, NY 10032, USA.