1. Sotomayor-Gómez B, Battaglia FP, Vinck M. Firing rates in visual cortex show representational drift, while temporal spike sequences remain stable. Cell Rep 2025; 44:115547. PMID: 40202845. DOI: 10.1016/j.celrep.2025.115547.
Abstract
Neural firing-rate responses to sensory stimuli show progressive changes both within and across sessions, raising the question of how the brain maintains a stable code. One possibility is that other features of multi-neuron spiking patterns, e.g., the temporal structure, provide a stable coding mechanism. Here, we compared spike-rate and spike-timing codes in neural ensembles from six visual areas during natural video presentations. To quantify information in spike sequences, we used SpikeShip, a method based on optimal transport theory that considers the relative spike-timing relations among all neurons. For large numbers of active neurons, temporal spike sequences conveyed more information than population firing-rate vectors. Firing-rate vectors exhibited substantial drift across repetitions and between blocks, whereas spike sequences remained stable over time. These findings reveal a stable neural code based on relative spike-timing relations in high-dimensional neural ensembles.
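The optimal-transport idea behind this comparison can be illustrated in one dimension, where the minimum-cost matching between two equal-size spike trains simply pairs spikes in sorted order. This is only an illustrative sketch, not the SpikeShip algorithm itself (which operates on relative spike-timing relations across many neurons at once); the function name is hypothetical.

```python
def transport_cost(times_a, times_b):
    """Minimum average cost of moving the spikes of one train onto another.

    In one dimension with equal spike counts, the optimal transport plan
    matches spikes in sorted order, so the cost reduces to the mean
    absolute difference between sorted spike times.
    """
    a, b = sorted(times_a), sorted(times_b)
    if len(a) != len(b):
        raise ValueError("this sketch assumes equal spike counts")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

For example, `transport_cost([10.0, 20.0, 30.0], [12.0, 22.0, 32.0])` is 2.0: the two trains share the same internal sequence and differ only by a global shift, the kind of component a relative-timing code would discount.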
Affiliation(s)
- Boris Sotomayor-Gómez
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
  - Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, Nijmegen, the Netherlands
- Francesco P Battaglia
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Martin Vinck
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
  - Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, Nijmegen, the Netherlands
2. Cheng YA, Sanayei M, Chen X, Jia K, Li S, Fang F, Watanabe T, Thiele A, Zhang RY. A neural geometry approach comprehensively explains apparently conflicting models of visual perceptual learning. Nat Hum Behav 2025. PMID: 40164913. DOI: 10.1038/s41562-025-02149-x.
Abstract
Visual perceptual learning (VPL), defined as long-term improvement in a visual task, is considered a crucial tool for elucidating underlying visual and brain plasticity. Previous studies have proposed several neural models of VPL, including changes in neural tuning or in noise correlations. Here, to adjudicate different models, we propose that all neural changes at single units can be conceptualized as geometric transformations of population response manifolds in a high-dimensional neural space. Following this neural geometry approach, we identified neural manifold shrinkage due to reduced trial-by-trial population response variability, rather than tuning or correlation changes, as the primary mechanism of VPL. Furthermore, manifold shrinkage successfully explains VPL effects across artificial neural responses in deep neural networks, multivariate blood-oxygenation-level-dependent signals in humans and multiunit activities in monkeys. These converging results suggest that our neural geometry approach comprehensively explains a wide range of empirical results and reconciles previously conflicting models of VPL.
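Manifold shrinkage through reduced trial-by-trial variability can be illustrated with a toy measure: the total variance of repeated population responses to one fixed stimulus. This is a hedged sketch, not the study's neural geometry analysis; the function name and the use of summed variance as a proxy for manifold size are assumptions.

```python
def response_spread(responses):
    """Summed across-trial variance of a population response.

    responses[t][i] is unit i's response on trial t to the same stimulus;
    a smaller value corresponds to a "shrunken" response manifold.
    """
    n_trials = len(responses)
    n_units = len(responses[0])
    means = [sum(r[i] for r in responses) / n_trials for i in range(n_units)]
    return sum(
        sum((r[i] - means[i]) ** 2 for r in responses) / n_trials
        for i in range(n_units)
    )
```

Under this proxy, learning that reduces trial-to-trial variability lowers the spread for each stimulus, pulling the response manifolds apart relative to their size and so improving discriminability.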
Affiliation(s)
- Yu-Ang Cheng
  - Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China
  - Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Mehdi Sanayei
  - Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK
  - School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Xing Chen
  - Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Ke Jia
  - Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China
  - Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-machine Integration, State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, People's Republic of China
  - NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, People's Republic of China
- Sheng Li
  - School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
  - IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
  - Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
- Fang Fang
  - School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
  - IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
  - Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
  - Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- Takeo Watanabe
  - Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Alexander Thiele
  - Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK
- Ru-Yuan Zhang
  - Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China
3. Greco A, Moser J, Preissl H, Siegel M. Predictive learning shapes the representational geometry of the human brain. Nat Commun 2024; 15:9670. PMID: 39516221. PMCID: PMC11549346. DOI: 10.1038/s41467-024-54032-4.
Abstract
Predictive coding theories propose that the brain constantly updates internal models to minimize prediction errors and optimize sensory processing. However, the neural mechanisms that link prediction error encoding and optimization of sensory representations remain unclear. Here, we provide evidence of how predictive learning shapes the representational geometry of the human brain. We recorded magnetoencephalography (MEG) in humans listening to acoustic sequences with different levels of regularity. We found that the brain aligns its representational geometry to match the statistical structure of the sensory inputs, by clustering temporally contiguous and predictable stimuli. Crucially, the magnitude of this representational shift correlates with the synergistic encoding of prediction errors in a network of high-level and sensory areas. Our findings suggest that, in response to the statistical regularities of the environment, large-scale neural interactions engaged in predictive processing modulate the representational content of sensory areas to enhance sensory processing.
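Representational geometry of this kind is commonly summarized with a representational dissimilarity matrix (RDM): one entry per stimulus pair, measuring how differently the two stimuli are encoded. The sketch below uses correlation distance; it is a generic illustration of the analysis family, not the study's MEG pipeline.

```python
import math

def rdm(patterns):
    """Representational dissimilarity matrix.

    patterns[s] is the evoked response pattern for stimulus s; each entry
    of the returned matrix is 1 - Pearson correlation, so stimuli that
    cluster together (e.g. predictable neighbours) get small values.
    """
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = math.sqrt(sum((a - mu) ** 2 for a in u))
        sv = math.sqrt(sum((b - mv) ** 2 for b in v))
        return cov / (su * sv)

    return [[1.0 - corr(u, v) for v in patterns] for u in patterns]
```

A "representational shift" can then be quantified as the change in such a matrix between early and late exposure to the regular sequences.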
Affiliation(s)
- Antonino Greco
  - Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
  - MEG Center, University of Tübingen, Tübingen, Germany
- Julia Moser
  - IDM/fMEG Center of the Helmholtz Center Munich, University of Tübingen, Tübingen, Germany
  - Masonic Institute for the Developing Brain (MIDB), University of Minnesota, Minneapolis, USA
- Hubert Preissl
  - IDM/fMEG Center of the Helmholtz Center Munich, University of Tübingen, Tübingen, Germany
  - German Center for Mental Health (DZPG), Tübingen, Germany
  - German Center for Diabetes Research (DZD), Tübingen, Germany
  - Department of Internal Medicine IV, University Hospital of Tübingen, Tübingen, Germany
  - Department of Pharmacy and Biochemistry, University of Tübingen, Tübingen, Germany
- Markus Siegel
  - Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
  - MEG Center, University of Tübingen, Tübingen, Germany
  - German Center for Mental Health (DZPG), Tübingen, Germany
4. Stan PL, Smith MA. Recent Visual Experience Reshapes V4 Neuronal Activity and Improves Perceptual Performance. J Neurosci 2024; 44:e1764232024. PMID: 39187380. PMCID: PMC11466072. DOI: 10.1523/jneurosci.1764-23.2024.
Abstract
Recent visual experience heavily influences our visual perception, but how neuronal activity is reshaped to alter and improve perceptual discrimination remains unknown. We recorded from populations of neurons in visual cortical area V4 while two male rhesus macaque monkeys performed a natural image change detection task under different experience conditions. We found that maximizing the recent experience with a particular image led to an improvement in the ability to detect a change in that image. This improvement was associated with decreased neural responses to the image, consistent with neuronal changes previously seen in studies of adaptation and expectation. We found that the magnitude of behavioral improvement was correlated with the magnitude of response suppression. Furthermore, this suppression of activity led to an increase in signal separation, providing evidence that a reduction in activity can improve stimulus encoding. Within populations of neurons, greater recent experience was associated with decreased trial-to-trial shared variability, indicating that a reduction in variability is a key means by which experience influences perception. Taken together, the results of our study contribute to an understanding of how recent visual experience can shape our perception and behavior through modulating activity patterns in the mid-level visual cortex.
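Trial-to-trial shared variability is often summarized as the mean pairwise correlation of spike counts across repeats of the same stimulus. The sketch below shows that generic measure, not the paper's exact analysis; the function name is illustrative.

```python
import math

def mean_noise_correlation(counts):
    """Average pairwise Pearson correlation of spike counts across trials.

    counts[t][i] is neuron i's spike count on trial t for repeats of the
    same stimulus; lower values indicate reduced shared variability.
    """
    n_tr, n_nr = len(counts), len(counts[0])
    means = [sum(t[i] for t in counts) / n_tr for i in range(n_nr)]

    def corr(i, j):
        cov = sum((t[i] - means[i]) * (t[j] - means[j]) for t in counts)
        vi = sum((t[i] - means[i]) ** 2 for t in counts)
        vj = sum((t[j] - means[j]) ** 2 for t in counts)
        return cov / math.sqrt(vi * vj)

    pairs = [(i, j) for i in range(n_nr) for j in range(i + 1, n_nr)]
    return sum(corr(i, j) for i, j in pairs) / len(pairs)
```

Comparing this quantity between high- and low-experience conditions is one way to test whether experience reduces shared variability, as the abstract reports.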
Affiliation(s)
- Patricia L Stan
  - Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Matthew A Smith
  - Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania 15213
5. Reilly J, Goodwin JD, Lu S, Kozlov AS. Bidirectional generative adversarial representation learning for natural stimulus synthesis. J Neurophysiol 2024; 132:1156-1169. PMID: 39196986. PMCID: PMC11495180. DOI: 10.1152/jn.00421.2023.
Abstract
Thousands of species use vocal signals to communicate with one another. Vocalizations carry rich information, yet characterizing and analyzing these complex, high-dimensional signals is difficult and prone to human bias. Moreover, animal vocalizations are ethologically relevant stimuli whose representation by auditory neurons is an important subject of research in sensory neuroscience. A method that can efficiently generate naturalistic vocalization waveforms would offer an unlimited supply of stimuli with which to probe neuronal computations. Although unsupervised learning methods allow for the projection of vocalizations into low-dimensional latent spaces learned from the waveforms themselves, and generative modeling allows for the synthesis of novel vocalizations for use in downstream tasks, we are not aware of any model that combines these tasks to synthesize naturalistic vocalizations in the waveform domain for stimulus playback. In this paper, we demonstrate BiWaveGAN: a bidirectional generative adversarial network (GAN) capable of learning a latent representation of ultrasonic vocalizations (USVs) from mice. We show that BiWaveGAN can be used to generate, and interpolate between, realistic vocalization waveforms. We then use these synthesized stimuli along with natural USVs to probe the sensory input space of mouse auditory cortical neurons. We show that stimuli generated from our method evoke neuronal responses as effectively as real vocalizations, and produce receptive fields with the same predictive power. BiWaveGAN is not restricted to mouse USVs but can be used to synthesize naturalistic vocalizations of any animal species and interpolate between vocalizations of the same or different species, which could be useful for probing categorical boundaries in representations of ethologically relevant auditory signals.

NEW & NOTEWORTHY: A new type of artificial neural network is presented that can be used to generate animal vocalization waveforms and interpolate between them to create new vocalizations. We find that our synthetic naturalistic stimuli drive auditory cortical neurons in the mouse equally well and produce receptive field features with the same predictive power as those obtained with natural mouse vocalizations, confirming the quality of the stimuli produced by the neural network.
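Interpolating between vocalizations in such a model amounts to blending the latent codes of two sounds and decoding each intermediate point with the trained generator. A minimal sketch of the latent-space step only (the generator network is not reproduced here; the function name is hypothetical):

```python
def interpolate_latents(z_a, z_b, steps):
    """Evenly spaced linear interpolants between two latent vectors.

    Passing each interpolant through a trained generator would yield a
    morph between the two corresponding vocalization waveforms, e.g. for
    probing categorical boundaries between call types.
    """
    return [
        [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]
        for t in (i / (steps - 1) for i in range(steps))
    ]
```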
Affiliation(s)
- Johnny Reilly
  - Department of Bioengineering, Imperial College London, London, United Kingdom
- John D Goodwin
  - Department of Bioengineering, Imperial College London, London, United Kingdom
- Sihao Lu
  - Department of Bioengineering, Imperial College London, London, United Kingdom
- Andriy S Kozlov
  - Department of Bioengineering, Imperial College London, London, United Kingdom
6. Liao L, Xu K, Wu H, Chen C, Sun W, Yan Q, Jay Kuo CC, Lin W. Blind Video Quality Prediction by Uncovering Human Video Perceptual Representation. IEEE Trans Image Process 2024; 33:4998-5013. PMID: 39236121. DOI: 10.1109/tip.2024.3445738.
Abstract
Blind video quality assessment (VQA) has become an increasingly demanding problem in automatically assessing the quality of ever-growing in-the-wild videos. Although efforts have been made to measure temporal distortions, the core factor distinguishing VQA from image quality assessment (IQA), the lack of modeling of how the human visual system (HVS) relates to the temporal quality of videos hinders the precise mapping of predicted temporal scores onto human perception. Inspired by the recent discovery of the temporal straightness law of natural videos in the HVS, this paper intends to model the complex temporal distortions of in-the-wild videos in a simple and uniform representation by describing the geometric properties of videos in the visual perceptual domain. A novel videolet, with perceptual representation embedding of a few consecutive frames, is designed as the basic quality measurement unit to quantify temporal distortions by measuring the angular and linear displacements from the straightness law. By combining the predicted score on each videolet, a perceptually temporal quality evaluator (PTQE) is formed to measure the temporal quality of the entire video. Experimental results demonstrate that the perceptual representation in the HVS is an efficient way of predicting subjective temporal quality. Moreover, when combined with spatial quality metrics, PTQE achieves top performance over popular in-the-wild video datasets. More importantly, PTQE requires no additional information beyond the video being assessed, making it applicable to any dataset without parameter tuning. Additionally, the generalizability of PTQE is evaluated on video frame interpolation tasks, demonstrating its potential to benefit temporal-related enhancement tasks.
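The angular displacement referred to here can be illustrated as the angle between successive difference vectors of a frame trajectory in an embedding space: a perfectly straight trajectory has zero angular displacement at every step. A simplified sketch of that measurement, not the PTQE model itself:

```python
import math

def angular_displacements(trajectory):
    """Angle (radians) between successive steps of an embedded trajectory.

    trajectory[k] is the embedding of frame k; all-zero angles mean the
    trajectory is a straight line in the embedding space.
    """
    steps = [
        [b - a for a, b in zip(u, v)]
        for u, v in zip(trajectory, trajectory[1:])
    ]
    angles = []
    for d1, d2 in zip(steps, steps[1:]):
        dot = sum(x * y for x, y in zip(d1, d2))
        norm = math.sqrt(sum(x * x for x in d1)) * math.sqrt(sum(x * x for x in d2))
        angles.append(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angles
```

Temporal distortions that bend the perceptual trajectory away from straightness would show up as larger angles on the affected videolets.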
7. Lindsey JW, Issa EB. Factorized visual representations in the primate visual system and deep neural networks. eLife 2024; 13:RP91685. PMID: 38968311. PMCID: PMC11226229. DOI: 10.7554/elife.91685.
Abstract
Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network models (DNNs) of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether ('invariance'), represented in non-interfering subspaces of population activity ('factorization') or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters - lighting, background, camera viewpoint, and object pose - in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
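Factorization, in the sense used here, means that variance driven by a non-identity scene parameter lies in a subspace that does not interfere with identity-coding directions. A toy score along these lines (reducing the identity subspace to a single unit-norm axis is an illustrative simplification of the paper's analysis, and the function name is hypothetical):

```python
def factorization_score(activity, identity_axis):
    """Fraction of parameter-driven variance orthogonal to an identity axis.

    activity[s] is the population response as one non-identity parameter
    (e.g. pose or background) is varied; identity_axis is a unit vector
    along which identity is decoded. 1.0 = fully factorized,
    0.0 = fully entangled with the identity code.
    """
    n = len(activity)
    dim = len(identity_axis)
    mean = [sum(a[i] for a in activity) / n for i in range(dim)]
    centered = [[x - m for x, m in zip(a, mean)] for a in activity]
    total = sum(sum(x * x for x in c) for c in centered)
    along = sum(sum(x * u for x, u in zip(c, identity_axis)) ** 2 for c in centered)
    return 1.0 - along / total
```

Note that a representation can score 1.0 here while still retaining the parameter information, which is exactly how factorization differs from invariance (where the variance is discarded altogether).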
Affiliation(s)
- Jack W Lindsey
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
  - Department of Neuroscience, Columbia University, New York, United States
- Elias B Issa
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
  - Department of Neuroscience, Columbia University, New York, United States
8. Beshkov K, Fyhn M, Hafting T, Einevoll GT. Topological structure of population activity in mouse visual cortex encodes densely sampled stimulus rotations. iScience 2024; 27:109370. PMID: 38523791. PMCID: PMC10959658. DOI: 10.1016/j.isci.2024.109370.
Abstract
The primary visual cortex is one of the best-understood regions involved in sensory computation. Following the popularization of high-density neural recordings, it has been observed that the activity of large neural populations is often constrained to low-dimensional manifolds. In this work, we quantify the structure of such neural manifolds in the visual cortex. We do this by analyzing publicly available two-photon optical recordings of mouse primary visual cortex in response to visual stimuli with a densely sampled rotation angle. Using a geodesic metric along with persistent homology, we discover that population activity in response to such stimuli generates a circular manifold, encoding the angle of rotation. Furthermore, we observe that this circular manifold is expressed differently in subpopulations of neurons with differing orientation and direction selectivity. Finally, we discuss some of the obstacles to reliably retrieving the true topology generated by a neural population.
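A geodesic metric of the kind used here approximates distances along the manifold by shortest paths through a nearest-neighbour graph, so that two points on opposite sides of a ring are far apart even if they are close in the ambient space. A small self-contained sketch of that step (persistent homology itself, computed on the resulting distance matrix, is typically delegated to a TDA library and is not reproduced here):

```python
def geodesic_distances(dist, k):
    """Graph geodesics from a pairwise distance matrix.

    Keep each point's k nearest neighbours as graph edges, then run
    Floyd-Warshall so distances follow the manifold rather than cutting
    across it. dist is a full symmetric distance matrix (list of lists).
    """
    n = len(dist)
    inf = float("inf")
    g = [[inf] * n for _ in range(n)]
    for i in range(n):
        g[i][i] = 0.0
        for j in sorted(range(n), key=lambda j: dist[i][j])[1:k + 1]:
            g[i][j] = g[j][i] = dist[i][j]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][m] + g[m][j] < g[i][j]:
                    g[i][j] = g[i][m] + g[m][j]
    return g
```

On a circular manifold, a persistent one-dimensional hole in these geodesic distances is the topological signature of the encoded rotation angle.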
Affiliation(s)
- Kosio Beshkov
  - Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Marianne Fyhn
  - Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Torkel Hafting
  - Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
  - Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Gaute T. Einevoll
  - Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
  - Department of Physics, Norwegian University of Life Sciences, Ås, Norway
  - Department of Physics, University of Oslo, Oslo, Norway
9. Yerxa T, Feather J, Simoncelli EP, Chung S. Contrastive-Equivariant Self-Supervised Learning Improves Alignment with Primate Visual Area IT. Adv Neural Inf Process Syst 2024; 37:96045-96070. PMID: 40336515. PMCID: PMC12058038.
Abstract
Models trained with self-supervised learning objectives have recently matched or surpassed models trained with traditional supervised object recognition in their ability to predict neural responses of object-selective neurons in the primate visual system. A self-supervised learning objective is arguably a more biologically plausible organizing principle, as the optimization does not require a large number of labeled examples. However, typical self-supervised objectives may result in network representations that are overly invariant to changes in the input. Here, we show that a representation with structured variability to input transformations is better aligned with known features of visual perception and neural computation. We introduce a novel framework for converting standard invariant SSL losses into "contrastive-equivariant" versions that encourage preservation of input transformations without supervised access to the transformation parameters. We demonstrate that our proposed method systematically increases the ability of models to predict responses in macaque inferior temporal cortex. Our results demonstrate the promise of incorporating known features of neural computation into task-optimization for building better models of visual cortex.
Affiliation(s)
| | - Jenelle Feather
- Center for Neural Science, New York University
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation
| | - Eero P. Simoncelli
- Center for Neural Science, New York University
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation
| | - SueYeon Chung
- Center for Neural Science, New York University
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation
| |
Collapse
|
10. Price BH, Jensen CM, Khoudary AA, Gavornik JP. Expectation violations produce error signals in mouse V1. Cereb Cortex 2023; 33:8803-8820. PMID: 37183176. PMCID: PMC10321125. DOI: 10.1093/cercor/bhad163.
Abstract
Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation for this, namely that plasticity shapes cortical circuits to encode spatiotemporal predictions and that subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of an experimental paradigm called "sequence learning." We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. Neural responses to expected stimuli were suppressed in a late window (100-150 ms) after stimulus onset following training, whereas responses to novel stimuli were not. Substituting a novel stimulus for a familiar one led to increases in firing that persisted for at least 300 ms. Omitting predictable stimuli in trained animals also led to increased firing at the expected time of stimulus onset. Finally, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
Affiliation(s)
- Byron H Price
  - Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
  - Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
- Cambria M Jensen
  - Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Anthony A Khoudary
  - Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Jeffrey P Gavornik
  - Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
  - Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
11. Charlton JA, Młynarski WF, Bai YH, Hermundstad AM, Goris RLT. Environmental dynamics shape perceptual decision bias. PLoS Comput Biol 2023; 19:e1011104. PMID: 37289753. DOI: 10.1371/journal.pcbi.1011104.
Abstract
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
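The observer's continually evolving belief about the current context can be written as a standard hidden-Markov forward update: carry the belief through the context-switch dynamics, then reweight by each context's likelihood of the observed stimulus. A generic sketch under an assumed symmetric switching rule (not the paper's fitted model; the parameter values are placeholders):

```python
def update_belief(belief, likelihoods, p_stay=0.95):
    """One Bayesian filtering step over discrete environmental contexts.

    belief[c] is the prior probability of context c; likelihoods[c] is the
    probability of the current stimulus under context c. Contexts persist
    with probability p_stay and otherwise switch uniformly at random.
    """
    n = len(belief)
    p_switch = (1.0 - p_stay) / (n - 1)
    # predict step: propagate the belief through the switching dynamics
    predicted = [
        p_stay * belief[c] + p_switch * (sum(belief) - belief[c])
        for c in range(n)
    ]
    # update step: reweight by the stimulus likelihood and renormalize
    posterior = [p * l for p, l in zip(predicted, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Iterating this update makes the belief sharpen as the context is cued more reliably, as the environment becomes more stable (larger p_stay), and as trials accumulate since the last switch, which is how a context-dependent decision bias of the kind tested here would grow.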
Affiliation(s)
- Julie A Charlton
  - Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Yoon H Bai
  - Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Ann M Hermundstad
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Robbe L T Goris
  - Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
12. Qiu Y, Klindt DA, Szatko KP, Gonschorek D, Hoefling L, Schubert T, Busse L, Bethge M, Euler T. Efficient coding of natural scenes improves neural system identification. PLoS Comput Biol 2023; 19:e1011037. PMID: 37093861. PMCID: PMC10159360. DOI: 10.1371/journal.pcbi.1011037.
Abstract
Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage normative principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models by incorporating, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder which aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the "stand-alone" system identification model, it also produced more biologically plausible filters, meaning that they more closely resembled neural representations in early visual systems. We found that these results applied to retinal responses to different artificial stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting responses of direction-of-motion sensitive retinal neurons. The benefit of natural scene statistics became marginal, however, for predicting the responses to natural movies. In summary, our results indicate that efficiently encoding environmental inputs can improve system identification models, at least for noise stimuli, and point to the benefit of probing the visual system with naturalistic stimuli.
Affiliation(s)
- Yongrong Qiu
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- David A Klindt
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Klaudia P Szatko
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Dominic Gonschorek
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Research Training Group 2381, U Tübingen, Tübingen, Germany
- Larissa Hoefling
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Timm Schubert
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Laura Busse
  - Division of Neurobiology, Faculty of Biology, LMU Munich, Planegg-Martinsried, Germany
  - Bernstein Center for Computational Neuroscience, Planegg-Martinsried, Germany
- Matthias Bethge
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen, Germany
  - Institute for Theoretical Physics, U Tübingen, Tübingen, Germany
- Thomas Euler
  - Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
  - Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen, Germany

13
Guidolin A, Desroches M, Victor JD, Purpura KP, Rodrigues S. Geometry of spiking patterns in early visual cortex: a topological data analytic approach. J R Soc Interface 2022; 19:20220677. [PMID: 36382589 PMCID: PMC9667368 DOI: 10.1098/rsif.2022.0677]
Abstract
In the brain, spiking patterns live in a high-dimensional space of neurons and time. Thus, determining the intrinsic structure of this space presents a theoretical and experimental challenge. To address this challenge, we introduce a new framework for applying topological data analysis (TDA) to spike train data and use it to determine the geometry of spiking patterns in the visual cortex. Key to our approach is a parametrized family of distances based on the timing of spikes that quantifies the dissimilarity between neuronal responses. We applied TDA to visually driven single-unit and multiple single-unit spiking activity in macaque V1 and V2. TDA across timescales reveals a common geometry for spiking patterns in V1 and V2 which, among simple models, is most similar to that of a low-dimensional space endowed with Euclidean or hyperbolic geometry with modest curvature. Remarkably, the inferred geometry depends on timescale and is clearest for the timescales that are important for encoding contrast, orientation and spatial correlations.
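A concrete member of such a parametrized family of spike-timing distances is the Victor-Purpura metric, whose cost parameter q plays the role of the timescale knob described in the abstract. The sketch below is a standard dynamic-programming implementation, not necessarily the paper's exact construction; the toy spike trains are illustrative:

```python
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura spike-train distance.

    Edit distance over spikes: inserting or deleting a spike costs 1,
    shifting a spike by dt costs q*|dt|.  The parameter q (1/time) sets
    the timescale: q = 0 compares spike counts only; large q penalizes
    any timing mismatch, so precise timing dominates.
    """
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)   # delete every spike of s1
    D[0, :] = np.arange(m + 1)   # insert every spike of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))
    return D[n, m]

# Pairwise dissimilarity matrix over a (toy) population of spike trains;
# topological data analysis (e.g. persistent homology) is then run on
# such matrices, sweeping q to probe geometry across timescales.
trains = [[0.1, 0.4, 0.9], [0.1, 0.5], [0.2, 0.4, 0.8, 0.9]]
dist = np.array([[victor_purpura(a, b, q=10.0) for b in trains]
                 for a in trains])
```

Sweeping q and recomputing `dist` is how timescale dependence of the inferred geometry can be examined.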
Affiliation(s)
- Andrea Guidolin
  - MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain
  - Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
- Mathieu Desroches
  - MathNeuro Team, Inria at Université Côte d’Azur, 06902 Sophia Antipolis, France
- Jonathan D. Victor
  - Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Keith P. Purpura
  - Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Serafim Rodrigues
  - MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain
  - Ikerbasque – The Basque Foundation for Science, 48009 Bilbao, Basque Country, Spain

14
Price BH, Gavornik JP. Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions. Front Comput Neurosci 2022; 16:929348. [PMID: 35874317 PMCID: PMC9298461 DOI: 10.3389/fncom.2022.929348]
Abstract
While it is universally accepted that the brain makes predictions, there is little agreement about how this is accomplished and under which conditions. Accurate prediction requires neural circuits to learn and store spatiotemporal patterns observed in the natural environment, but it is not obvious how such information should be stored, or encoded. Information theory provides a mathematical formalism that can be used to measure the efficiency and utility of different coding schemes for data transfer and storage. This theory shows that codes become efficient when they remove predictable, redundant spatial and temporal information. Efficient coding has been used to understand retinal computations and may also be relevant to understanding more complicated temporal processing in visual cortex. However, the literature on efficient coding in cortex is varied and can be confusing since the same terms are used to mean different things in different experimental and theoretical contexts. In this work, we attempt to provide a clear summary of the theoretical relationship between efficient coding and temporal prediction, and review evidence that efficient coding principles explain computations in the retina. We then apply the same framework to computations occurring in early visuocortical areas, arguing that data from rodents is largely consistent with the predictions of this model. Finally, we review and respond to criticisms of efficient coding and suggest ways that this theory might be used to design future experiments, with particular focus on understanding the extent to which neural circuits make predictions from efficient representations of environmental statistics.
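The review's central idea, that codes become efficient by removing predictable temporal redundancy, can be made concrete with a toy predictive code: given a strongly autocorrelated AR(1) input, transmitting only the prediction error yields a decorrelated, lower-variance signal (loosely analogous to retinal temporal whitening). The signal model and parameters below are didactic assumptions, not a model from the review:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Natural" input: an AR(1) signal, i.e. strong temporal redundancy
# (each sample is largely predictable from the previous one).
rho, T = 0.9, 20000
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.standard_normal()

def lag1_corr(s):
    """Lag-1 autocorrelation: how predictable a sample is from the last."""
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

# Efficient/predictive code: transmit only the prediction error, which
# strips the predictable component.  The residual is both decorrelated
# and lower-variance, so it needs fewer resources to represent.
residual = x[1:] - rho * x[:-1]

print(lag1_corr(x))         # high (~rho): raw input is redundant
print(lag1_corr(residual))  # near zero: the code's output is whitened
```

The same bookkeeping in information-theoretic terms: the residual carries the unpredictable (informative) part of the signal, which is exactly what an efficient temporal code should spend its capacity on.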
Affiliation(s)
- Jeffrey P. Gavornik
  - Center for Systems Neuroscience, Graduate Program in Neuroscience, Department of Biology, Boston University, Boston, MA, United States