1.
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022; 42:4116-4130. [PMID: 35410881] [PMCID: PMC9121829] [DOI: 10.1523/jneurosci.0674-21.2022]
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) during the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike-counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.

Significance Statement: Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
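The orthogonal-subspace result lends itself to a small simulation. The sketch below is our illustration, not the authors' analysis code or data: it builds a synthetic population in which the eye-position and context coding axes are orthogonal by construction, fits a least-squares decoder in one context, and tests it in the other. All variable names and parameter values are invented for the demo.

```python
import numpy as np

# Synthetic population with orthogonal eye-position and context axes,
# mimicking the geometry the dPCA analysis describes.
rng = np.random.default_rng(0)
n_neurons, n_trials = 40, 400

eye_x = rng.uniform(-12.0, 12.0, n_trials)   # horizontal eye position (deg)
context = rng.integers(0, 2, n_trials)       # two task "contexts"

w_eye = rng.standard_normal(n_neurons)       # eye-position coding axis
w_ctx = rng.standard_normal(n_neurons)
w_ctx -= (w_ctx @ w_eye) / (w_eye @ w_eye) * w_eye   # orthogonalize

rates = (np.outer(eye_x, w_eye)              # eye-position signal
         + np.outer(context * 5.0, w_ctx)    # context signal
         + 0.5 * rng.standard_normal((n_trials, n_neurons)))  # noise

# "Context-naive" decoder: least-squares fit on context-0 trials only
train, test = context == 0, context == 1
X = np.c_[rates, np.ones(n_trials)]          # add an intercept column
beta, *_ = np.linalg.lstsq(X[train], eye_x[train], rcond=None)

err = float(np.abs(X[test] @ beta - eye_x[test]).mean())
print(f"mean cross-context decoding error: {err:.2f} deg")
```

Because the context signal lies entirely outside the eye-position subspace, the fixed-parameter decoder transfers across contexts, echoing (in toy form) the paper's cross-context decoding result.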
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC, 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY, 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Monash Data Futures Institute, Monash University, Clayton, VIC, 3800, Australia
2.
Rao HM, San Juan J, Shen FY, Villa JE, Rafie KS, Sommer MA. Neural Network Evidence for the Coupling of Presaccadic Visual Remapping to Predictive Eye Position Updating. Front Comput Neurosci 2016; 10:52. [PMID: 27313528] [PMCID: PMC4889583] [DOI: 10.3389/fncom.2016.00052]
Abstract
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
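The core operation the model learns can be caricatured in a few lines. The toy below is our illustration, not the paper's sheet-based network: a Gaussian activity bump on a 1-D retinotopic map is shifted by the corollary discharge of the planned saccade, before the retinal input itself changes. Map size, bump width, and the specific numbers are invented for the demo.

```python
import numpy as np

n_cells = 64
positions = np.arange(n_cells)

def bump(center, width=3.0):
    """Gaussian activity bump centered on a map location."""
    return np.exp(-0.5 * ((positions - center) / width) ** 2)

target_retinal = 20   # target's current retinal location (map cells)
saccade = 12          # planned saccade amplitude (map cells)

pre = bump(target_retinal)
# A +12-cell saccade lands the target 12 cells lower on the retina,
# so corollary discharge shifts the bump by -saccade in advance.
remapped = np.roll(pre, -saccade)
print("peak before:", int(np.argmax(pre)),
      "after remapping:", int(np.argmax(remapped)))
```

The shift runs opposite to the saccade, so the bump anticipates where the target will fall on the retina after the eye movement, which is what lets a downstream (arm-guiding) read-out stay continuously accurate across the saccade.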
Affiliation(s)
- Hrishikesh M Rao
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Juan San Juan
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Fred Y Shen
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Jennifer E Villa
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Kimia S Rafie
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
- Marc A Sommer
- Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
3.
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. [PMID: 27242452] [PMCID: PMC4867689] [DOI: 10.3389/fnsys.2016.00039]
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
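The updating idea can be conveyed with a much simpler filter than the paper's. The sketch below is our simplification: where the paper uses a dual EKF with a recurrent radial-basis-function network, we use a scalar linear Kalman filter with invented noise parameters. A gaze-centered memory of a target is shifted by each noisy efference copy, and occasional visual measurements correct the accumulated drift.

```python
import numpy as np

rng = np.random.default_rng(1)

true_target = 10.0            # target position in space (deg)
eye = 0.0                     # current eye position (deg)
x, P = true_target, 1.0       # gaze-centered estimate and its variance
q, r = 0.2 ** 2, 0.5 ** 2     # efference-copy and visual noise variances

for step in range(20):
    d_eye = rng.uniform(-2.0, 2.0)            # pursuit-like eye step
    eye += d_eye
    efference = d_eye + rng.normal(0.0, 0.2)  # noisy copy of the command
    # Predict: shift the remembered location opposite to the eye movement
    x, P = x - efference, P + q
    if step % 5 == 0:                         # occasional visual feedback
        z = (true_target - eye) + rng.normal(0.0, 0.5)
        K = P / (P + r)                       # Kalman gain
        x, P = x + K * (z - x), (1.0 - K) * P

print(f"gaze-centered error after updating: {abs(x - (true_target - eye)):.2f} deg")
```

Between measurements the variance P grows, which is the linear analogue of the paper's prediction that uncertainty (and hence population activity) expands around eye movements.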
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
4.
5.
Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron 2009; 64:448-61. [PMID: 19945388] [DOI: 10.1016/j.neuron.2009.11.010]
Abstract
The vestibular system helps maintain equilibrium and clear vision through reflexes, but it also contributes to spatial perception. In recent years, research in the vestibular field has expanded to higher-level processing involving the cortex. Vestibular contributions to spatial cognition have been difficult to study because the circuits involved are inherently multisensory. Computational methods and the application of Bayes' theorem are used to form hypotheses about how information from different sensory modalities is combined with expectations based on past experience to obtain optimal estimates of cognitive variables such as current spatial orientation. To test these hypotheses, neuronal populations are being recorded during active tasks in which subjects make decisions based on vestibular and visual or somatosensory information. This review highlights what is currently known about the role of vestibular information in these processes, the computations necessary to obtain the appropriate signals, and the benefits that have emerged thus far.
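The Bayes-optimal cue combination the review invokes has a standard closed form for independent Gaussian cues, sketched below. This is our illustration with invented numbers: two estimates (say vestibular and visual) fuse with weights proportional to their reliabilities 1/σ², and the fused variance is smaller than either cue's alone.

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted fusion of two independent Gaussian estimates."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# Vestibular estimate 5 deg (variance 4); visual estimate 9 deg (variance 1)
mu, var = fuse(5.0, 4.0, 9.0, 1.0)
print(f"fused: {mu:.2f} deg, variance {var:.2f}")  # pulled toward the visual cue
```

With the visual cue four times more reliable, the fused estimate sits much closer to 9 than to 5, and its variance (0.8) beats the best single cue (1.0), which is the behavioral signature these vestibular-visual decision studies test for.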
Affiliation(s)
- Dora E Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
6.
Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008; 156:801-18. [PMID: 18786618] [DOI: 10.1016/j.neuroscience.2008.07.079]
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
7.
Abstract
Historically, inflow and outflow hypotheses have been formulated as the primary explanations for perceptual stability. Central to these hypotheses is the postulation that, following an intended eye movement, knowledge of eye position cancels the consequences of the retinal image motion. Here, we reconsider the evidence for the extra-retinal signal and discuss whether this cancellation approach is compatible with the available empirical evidence. In particular, we propose that visual-oculomotor processing is a distributed process and that population-coding models of sensorimotor transformations are critical elements that need to be incorporated in any comprehensive explanation of spatial constancy.
Affiliation(s)
- Richard V Abadi
- Faculty of Life Sciences, University of Manchester, Manchester M60 1QD, UK
- Janus J Kulikowski
- Faculty of Life Sciences, University of Manchester, Manchester M60 1QD, UK
8.
Tosh CR, Ruxton GD. Introduction. The use of artificial neural networks to study perception in animals. Philos Trans R Soc Lond B Biol Sci 2007; 362:337-8. [PMID: 17255024] [PMCID: PMC2042518] [DOI: 10.1098/rstb.2006.1961]
Affiliation(s)
- Colin R Tosh
- Division of Environmental & Evolutionary Biology, IBLS, University of Glasgow, Glasgow G12 8QQ, UK.