1. Shi Y, Zhang J, Li X, Han Y, Guan J, Li Y, Shen J, Tzvetanov T, Yang D, Luo X, Yao Y, Chu Z, Wu T, Chen Z, Miao Y, Li Y, Wang Q, Hu J, Meng J, Liao X, Zhou Y, Tao L, Ma Y, Chen J, Zhang M, Liu R, Mi Y, Bao J, Li Z, Chen X, Xue T. Non-image-forming photoreceptors improve visual orientation selectivity and image perception. Neuron 2025; 113:486-500.e13. PMID: 39694031. DOI: 10.1016/j.neuron.2024.11.015.
Abstract
For decades it has been dogma that image perception is mediated solely by rods and cones, while intrinsically photosensitive retinal ganglion cells (ipRGCs) are responsible only for non-image-forming vision, such as circadian photoentrainment and the pupillary light reflex. Surprisingly, we discovered that ipRGC activation enhances the orientation selectivity of layer 2/3 neurons in the primary visual cortex (V1) of mice, both by increasing preferred-orientation responses and by narrowing tuning bandwidth. Mechanistically, we found that the tuning properties of V1 excitatory and inhibitory neurons are differentially influenced by ipRGC activation, reshaping the excitatory/inhibitory balance in a way that enhances visual cortical orientation selectivity. Furthermore, light activation of ipRGCs improves behavioral orientation discrimination in mice. Importantly, we found that specific activation of ipRGCs in human participants through visual spectrum manipulation significantly enhances visual orientation discriminability. Our study reveals a visual channel originating from "non-image-forming photoreceptors" that facilitates the perception of visual orientation features.
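The two tuning changes described here, stronger preferred-orientation responses and narrower bandwidth, are commonly summarized by an orientation selectivity index (OSI). A minimal sketch using the generic circular-variance OSI on synthetic von Mises tuning curves (illustrative only, not the paper's analysis code):

```python
import numpy as np

def osi(rates, thetas_deg):
    """Global OSI via circular variance; orientation has period 180 deg,
    hence the doubled angle."""
    th = np.deg2rad(thetas_deg)
    v = np.sum(rates * np.exp(2j * th)) / np.sum(rates)
    return np.abs(v)

thetas = np.arange(0, 180, 15)  # stimulus orientations (deg)

def tuning(kappa, pref=90.0):
    """Von Mises-like tuning curve; larger kappa = narrower bandwidth."""
    return np.exp(kappa * (np.cos(2 * np.deg2rad(thetas - pref)) - 1))

broad, sharp = tuning(1.0), tuning(4.0)
print(osi(broad, thetas), osi(sharp, thetas))  # sharper tuning, higher OSI
```

Both effects reported in the abstract, boosting the peak response and narrowing the curve, move this index in the same direction.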
Affiliations
- Yiming Shi, Jiaming Zhang, Yuchong Han, Jiawei Shen, Tzvetomir Tzvetanov, Dongyu Yang, Xinyi Luo, Yichuan Yao, Zhiping Chen, Ying Miao, Yufei Li, Qian Wang, Jiaxi Hu, Jianjun Meng, Yifeng Zhou, Yuqian Ma, Jutao Chen, Mei Zhang, Rong Liu, Zhong Li, Tian Xue: Hefei National Research Center for Physical Sciences at the Microscale, CAS Key Laboratory of Brain Function and Disease, Biomedical Sciences and Health Laboratory of Anhui Province, School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Xingyi Li, Yilin Li, Zhikun Chu: Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400044, China
- Jiangheng Guan, Xiang Liao, Xiaowei Chen: Brain Research Center, Third Military Medical University, and Chongqing Institute for Brain and Intelligence, Guangyang Bay Laboratory, Chongqing 400038, China
- Tianyi Wu, Louis Tao: Center for Quantitative Biology, Peking University, Beijing 100871, China
- Yuanyuan Mi: Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing 100084, China
- Jin Bao: Shenzhen Neher Neural Plasticity Laboratory, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, the Key Laboratory of Biomedical Imaging Science and System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2. Boboeva V, Pezzotta A, Clopath C, Akrami A. Unifying network model links recency and central tendency biases in working memory. eLife 2024; 12:RP86725. PMID: 38656279. DOI: 10.7554/eLife.86725.
Abstract
The central tendency bias, or contraction bias, is a phenomenon in which judgments of the magnitude of items held in working memory are biased toward the average of past observations. It is assumed to be an optimal strategy of the brain and is commonly regarded as an expression of the brain's ability to learn the statistical structure of sensory input. On the other hand, recency biases such as serial dependence are also commonly observed and are thought to reflect the content of working memory. Recent results from an auditory delayed comparison task in rats suggest that the two biases may be more closely related than previously thought: when the posterior parietal cortex (PPC) was silenced, both short-term and contraction biases were reduced. By proposing a model of the circuit that may be involved in generating the behavior, we show that a volatile working memory content, susceptible to shifting toward past sensory experience and thereby producing short-term sensory history biases, naturally leads to contraction bias. The errors, occurring at the level of individual trials, are sampled from the full distribution of the stimuli and are not due to a gradual shift of the memory toward the sensory distribution's mean. Our results are consistent with a broad set of behavioral findings and provide predictions about performance across different stimulus distributions, stimulus timings, and delay intervals, as well as about neuronal dynamics in putative working memory areas. Finally, we validate our model with a set of human psychophysics experiments on an auditory parametric working memory task.
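The key claim, trial-level memory errors sampled from the stimulus distribution rather than a gradual drift toward its mean, can be caricatured in a few lines (hypothetical displacement rate, not the paper's network model):

```python
import numpy as np

# Toy caricature of the proposed mechanism: on a fraction of trials the
# memory of the stimulus is displaced to a sample of past stimuli, which
# produces contraction toward the mean without any gradual drift.
rng = np.random.default_rng(0)
stimuli = rng.uniform(0.0, 1.0, size=20000)   # stimulus distribution
p_swap = 0.3                                  # hypothetical displacement rate

memory = np.where(rng.random(stimuli.size) < p_swap,
                  rng.permutation(stimuli),   # replaced by a past stimulus
                  stimuli)                    # memory intact

low_err = (memory - stimuli)[stimuli < 0.2]   # small stimuli: overestimated
high_err = (memory - stimuli)[stimuli > 0.8]  # large stimuli: underestimated
print(low_err.mean(), high_err.mean())
```

Averaged over trials the small stimuli are judged too large and the large ones too small, i.e., contraction, even though every individual error is a full-sized jump to another stimulus value.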
Affiliations
- Vezha Boboeva: Sainsbury Wellcome Centre, University College London, London, United Kingdom; Department of Bioengineering, Imperial College London, London, United Kingdom
- Alberto Pezzotta: Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom; The Francis Crick Institute, London, United Kingdom
- Claudia Clopath: Sainsbury Wellcome Centre, University College London, London, United Kingdom; Department of Bioengineering, Imperial College London, London, United Kingdom
- Athena Akrami: Sainsbury Wellcome Centre, University College London, London, United Kingdom
3. Zhao H, Yang S, Fung CCA. Short-term postsynaptic plasticity facilitates predictive tracking in continuous attractors. Front Comput Neurosci 2023; 17:1231924. PMID: 38024449. PMCID: PMC10652417. DOI: 10.3389/fncom.2023.1231924.
Abstract
Introduction: The N-methyl-D-aspartate receptor (NMDAR) plays a critical role in synaptic transmission and is associated with various neurological and psychiatric disorders. Recently, a novel form of postsynaptic plasticity known as NMDAR-based short-term postsynaptic plasticity (STPP) has been identified: long-lasting glutamate binding to NMDARs allows input information to be retained in brain slices for up to 500 ms, leading to response facilitation. However, the impact of STPP on the dynamics of neuronal populations has remained unexplored.
Methods: We incorporated STPP into a continuous attractor neural network (CANN) model to investigate its effects on neural information encoding in populations of neurons. Unlike short-term facilitation, a form of presynaptic plasticity, the temporarily enhanced synaptic efficacy resulting from STPP destabilizes the network state of the CANN by increasing its mobility.
Results: Including STPP in the CANN model enables the network state to respond predictively to a moving stimulus: the enhanced synaptic efficacy increases the system's mobility, so the bump can anticipate, rather than merely follow, the input.
Discussion: This STPP-based mechanism for sensory prediction provides insights into the potential development of brain-inspired computational algorithms for prediction and expands our understanding of the functional implications of NMDAR-related plasticity in information processing within the brain.
Conclusion: Incorporating STPP into a CANN model highlights its influence on the mobility and predictive capabilities of neural networks.
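For contrast, a plain CANN without STPP tracks a moving stimulus only reactively, with the bump trailing the input; the paper's point is that STPP can turn this lag into anticipation. A minimal sketch of that baseline (standard divisive-normalization CANN, illustrative parameters):

```python
import numpy as np

# Minimal 1D continuous attractor network (divisive normalization, no STPP):
# the activity bump tracks a moving stimulus but trails behind it.
N, a, k, dt = 128, 0.5, 0.5, 0.05
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))       # ring distance
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)  # recurrent kernel

def rate(u):
    r = np.maximum(u, 0.0) ** 2
    return r / (1.0 + k * r.sum())                         # divisive inhibition

u = np.exp(-x**2 / (2 * a**2))      # seed a bump at x = 0
z, v, A = 0.0, 0.01, 0.5            # stimulus position, speed per step, strength
for _ in range(600):
    z = np.angle(np.exp(1j * (z + v)))                     # stimulus moves
    I = A * np.exp(-np.angle(np.exp(1j * (x - z)))**2 / (4 * a**2))
    u += dt * (-u + J @ rate(u) + I)

lag = np.angle(np.exp(1j * (z - x[np.argmax(u)])))
print(lag)                          # positive: the bump is behind the stimulus
```

The positive lag is the reactive-tracking regime; the abstract's claim is that adding STPP to such a network flips this into predictive (leading) tracking.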
Affiliations
- Sungchil Yang: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong SAR, China
- Chi Chung Alan Fung: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong SAR, China
4. Yan M, Zhang WH, Wang H, Wong KYM. Bimodular continuous attractor neural networks with static and moving stimuli. Phys Rev E 2023; 107:064302. PMID: 37464697. DOI: 10.1103/PhysRevE.107.064302.
Abstract
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, in which each module processes one modality of sensory input and the two modules interact. We found that when bumps coexist in both modules, the position of each bump is shifted toward the other input when the intermodular couplings are excitatory and away from it when they are inhibitory. When one intermodular coupling is excitatory while the other is moderately inhibitory, temporally modulated population spikes can be generated; on further increase of the inhibitory coupling, momentary spikes emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, with a lesser weakening effect due to a bump being displaced from its direct input. When bimodular networks serve as decoders for multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means of the Bayesian framework, whereas at high disparity, multiple steady states exist. In that regime, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength; the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important for understanding multisensory integration of static and dynamic stimuli.
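The Bayesian decoding claim at low disparity reduces to the textbook Gaussian cue-combination rule: the posterior mean is a reliability-weighted average of the two cues. A quick sketch of that generic formula (not the paper's network decoder):

```python
import numpy as np

def fuse(mu1, sigma1, mu2, sigma2):
    """Posterior mean and s.d. for two independent Gaussian cues
    about a single underlying stimulus (flat prior)."""
    w1 = sigma2**2 / (sigma1**2 + sigma2**2)   # reliability weight of cue 1
    mu = w1 * mu1 + (1 - w1) * mu2
    sigma = np.sqrt(sigma1**2 * sigma2**2 / (sigma1**2 + sigma2**2))
    return mu, sigma

mu, sigma = fuse(0.0, 1.0, 4.0, 2.0)   # reliable cue at 0, noisy cue at 4
print(mu, sigma)                       # 0.8, ~0.894: pulled toward reliable cue
```

In the bimodular network picture, excitatory coupling acts like an attractive prior that pulls each module's estimate toward this fused value, while inhibitory coupling pushes the estimates apart.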
Affiliations
- Min Yan: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
- Wen-Hao Zhang: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, Texas 75390, USA; O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, Texas 75390, USA
- He Wang: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China; Hong Kong University of Science and Technology, Shenzhen Research Institute, Shenzhen 518057, China
- K Y Michael Wong: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
5. Huang Y, Yu J, Leng J, Liu B, Yi Z. Continuous Recurrent Neural Networks Based on Function Satlins. Neural Process Lett 2022. DOI: 10.1007/s11063-021-10682-9.
6.

7. Turner-Evans DB, Jensen KT, Ali S, Paterson T, Sheridan A, Ray RP, Wolff T, Lauritzen JS, Rubin GM, Bock DD, Jayaraman V. The Neuroanatomical Ultrastructure and Function of a Biological Ring Attractor. Neuron 2020; 108:145-163.e10. PMID: 32916090. PMCID: PMC8356802. DOI: 10.1016/j.neuron.2020.08.006.
Abstract
Neural representations of head direction (HD) have been discovered in many species. Theoretical work has proposed that the dynamics associated with these representations are generated, maintained, and updated by recurrent network structures called ring attractors. We evaluated this theorized structure-function relationship by performing electron-microscopy-based circuit reconstruction and RNA profiling of identified cell types in the HD system of Drosophila melanogaster. We identified motifs that have been hypothesized to maintain the HD representation in darkness, update it when the animal turns, and tether it to visual cues. Functional studies provided support for the proposed roles of individual excitatory or inhibitory circuit elements in shaping activity. We also discovered recurrent connections between neuronal arbors with mixed pre- and postsynaptic specializations. Our results confirm that the Drosophila HD network contains the core components of a ring attractor while also revealing unpredicted structural features that might enhance the network's computational power.
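The hypothesized structure-function link can be caricatured with a toy rate-model ring attractor (illustrative parameters; nothing here reflects the reconstructed fly circuit): local excitation plus divisive inhibition holds a heading bump in darkness, and a rotation drive, standing in for angular-velocity input, updates it when the animal turns.

```python
import numpy as np

N = 120
th = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (th[:, None] - th[None, :])))
J = np.exp(-d**2 / (2 * 0.5**2))                 # local excitatory kernel

def step(u, turn=0):
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + 0.05 * r.sum())               # divisive global inhibition
    drive = J @ np.roll(r, turn) / N             # turn != 0 skews recurrence
    return u + 0.1 * (-u + 24.0 * drive)

u = np.exp(np.cos(th - np.pi)) - 1.0             # seed a bump at heading pi
for _ in range(400):
    u = step(u)                                  # "darkness": bump persists
h0 = th[np.argmax(u)]
for _ in range(100):
    u = step(u, turn=1)                          # constant turning signal
h1 = th[np.argmax(u)]
print(h0, h1)                                    # heading held, then rotated
```

The bump stays put without input and rotates under the turn drive, which is the minimal functional signature the anatomical motifs in the paper are proposed to implement.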
Affiliations
- Kristopher T Jensen: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Saba Ali, Tyler Paterson, Arlo Sheridan, Robert P Ray, Tanya Wolff, J Scott Lauritzen, Gerald M Rubin, Vivek Jayaraman: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Davi D Bock: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Department of Neurological Sciences, Larner College of Medicine, University of Vermont, Burlington, VT 05405, USA
8. Johnson JK, Geng S, Hoffman MW, Adesnik H, Wessel R. Precision multidimensional neural population code recovered from single intracellular recordings. Sci Rep 2020; 10:15997. PMID: 32994474. PMCID: PMC7524839. DOI: 10.1038/s41598-020-72936-1.
Abstract
Neurons in sensory cortices are integrated more naturally and deeply than any current neural population recording tool (e.g., electrode arrays, fluorescence imaging) can capture. Two concepts facilitate efforts to observe the population neural code with single-cell recordings. First, even the highest-quality single-cell recording studies recover only a fraction of the stimulus information present in high-dimensional population recordings, so recovering any of this missing information provides proof of principle. Second, neurons and neural populations can be understood as coupled nonlinear differential equations, so fitted ordinary differential equations provide a basis for single-trial, single-cell stimulus decoding. We obtained intracellular recordings of fluctuating transmembrane current and potential in mouse visual cortex during stimulation with drifting gratings. We used mean deflection from baseline when comparing to prior single-cell studies because action potentials are too sparse and the deflection response to drifting-grating stimuli (e.g., tuning curves) is well studied. Equation-based decoders allowed more precise single-trial stimulus discrimination than tuning-curve-based decoders. Performance varied across recorded signal types in a manner consistent with population recording studies, and both classification bases evinced distinct stimulus-evoked phases of population dynamics, providing further corroboration. Naturally and deeply integrated observations of population dynamics would be invaluable; we offer proof of principle and a versatile framework.
Affiliations
- Ralf Wessel: Washington University in St. Louis, St. Louis, USA
9. Zhong W, Lu Z, Schwab DJ, Murugan A. Nonequilibrium Statistical Mechanics of Continuous Attractors. Neural Comput 2020; 32:1033-1068. PMID: 32343645. DOI: 10.1162/neco_a_01280.
Abstract
Continuous attractors have been used to understand recent neuroscience experiments in which persistent activity patterns encode internal representations of external attributes such as head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space- and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent nonequilibrium memory capacity in neural networks.
Affiliations
- Weishun Zhong: James Franck Institute, University of Chicago, Chicago, IL 60637, and Department of Physics, MIT, Cambridge, MA 02139, USA
- Zhiyue Lu: James Franck Institute, University of Chicago, Chicago, IL 60637, USA
- David J Schwab: Initiative for the Theoretical Sciences, CUNY Graduate Center, New York, NY 10016, and Center for the Physics of Biological Function, Princeton University and City University of New York, USA
- Arvind Murugan: Department of Physics and the James Franck Institute, University of Chicago, Chicago, IL 60637, USA
10. Fung CCA, Fukai T. Discrete-Attractor-like Tracking in Continuous Attractor Neural Networks. Phys Rev Lett 2019; 122:018102. PMID: 31012700. DOI: 10.1103/PhysRevLett.122.018102.
Abstract
Continuous attractor neural networks generate a set of smoothly connected attractor states. In memory systems of the brain, these attractor states may represent continuous pieces of information such as spatial locations and head directions of animals. However, during the replay of previous experiences, hippocampal neurons show a discontinuous sequence in which discrete transitions of the neural state are phase locked with the slow-gamma (∼30-50 Hz) oscillation. Here, we explore the underlying mechanisms of the discontinuous sequence generation. We find that a continuous attractor neural network has several phases depending on the interactions between external input and local inhibitory feedback. The discrete-attractor-like behavior naturally emerges in one of these phases without any discreteness assumption. We propose that the dynamics of continuous attractor neural networks is the key to generate discontinuous state changes phase locked to the brain rhythm.
Affiliations
- Chi Chung Alan Fung: RIKEN Center for Brain Science, Hirosawa 2-1, Wako City, Saitama 351-0198, Japan
- Tomoki Fukai: RIKEN Center for Brain Science, Hirosawa 2-1, Wako City, Saitama 351-0198, Japan
11. Wu S, Wong KYM, Fung CCA, Mi Y, Zhang W. Continuous Attractor Neural Networks: Candidate of a Canonical Model for Neural Information Representation. F1000Res 2016; 5. PMID: 26937278. PMCID: PMC4752021. DOI: 10.12688/f1000research.7387.1.
Abstract
Owing to its many computationally desirable properties, the model of continuous attractor neural networks (CANNs) has been successfully applied to describe the encoding of simple continuous features in neural systems, such as orientation, moving direction, head direction, and spatial location of objects. Recent experimental and computational studies revealed that complex features of external inputs may also be encoded by low-dimensional CANNs embedded in the high-dimensional space of neural population activity. The new experimental data also confirmed the existence of the M-shaped correlation between neuronal responses, which is a correlation structure associated with the unique dynamics of CANNs. This body of evidence, which is reviewed in this report, suggests that CANNs may serve as a canonical model for neural information representation.
Affiliations
- Si Wu: State Key Laboratory of Cognitive Neuroscience & Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- K Y Michael Wong: Department of Physics, Hong Kong University of Science & Technology, Clear Water Bay Peninsula, Hong Kong
- C C Alan Fung: RIKEN Brain Science Institute, Wako-shi, Saitama, Japan
- Yuanyuan Mi: State Key Laboratory of Cognitive Neuroscience & Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Wenhao Zhang: State Key Laboratory of Cognitive Neuroscience & Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China; Department of Physics, Hong Kong University of Science & Technology, Clear Water Bay Peninsula, Hong Kong
12. Li J, Yang J, Yuan X, Hu Z. Continuous attractors of higher-order recurrent neural networks with infinite neurons. Neurocomputing 2014. DOI: 10.1016/j.neucom.2013.10.004.
13. Yu J, Tang H, Li H. Continuous attractors of discrete-time recurrent neural networks. Neural Comput Appl 2013. DOI: 10.1007/s00521-012-0975-5.
14. Makin JG, Fellows MR, Sabes PN. Learning multisensory integration and coordinate transformation via density estimation. PLoS Comput Biol 2013; 9:e1003035. PMID: 23637588. PMCID: PMC3630212. DOI: 10.1371/journal.pcbi.1003035.
Abstract
Sensory processing in the brain includes three key operations: multisensory integration (combining cues into a single estimate of a common underlying stimulus); coordinate transformation (changing the reference frame of a stimulus, e.g., retinotopic to body-centered, effected through knowledge about an intervening variable such as gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintain the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned; but how? We provide a principled answer by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
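The learning rule at the core of the model, contrastive divergence on a restricted Boltzmann machine, fits in a few lines. A toy sketch with made-up binary patterns standing in for unisensory population activity (not the paper's architecture or data):

```python
import numpy as np

# Toy restricted Boltzmann machine trained with one-step contrastive
# divergence (CD-1) on two binary prototype patterns.
rng = np.random.default_rng(1)
nv, nh, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((nv, nh))
a, b = np.zeros(nv), np.zeros(nh)          # visible and hidden biases
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)  # two prototypes

for epoch in range(200):
    for v0 in rng.permutation(data):
        ph0 = sig(v0 @ W + b)
        h0 = (rng.random(nh) < ph0).astype(float)        # sample hidden units
        pv1 = sig(h0 @ W.T + a)                          # one reconstruction
        ph1 = sig(pv1 @ W + b)
        W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        a += lr * (v0 - pv1)
        b += lr * (ph0 - ph1)

recon = sig(sig(data[:2] @ W + b) @ W.T + a)   # mean-field round trip
print(np.round(recon, 2))                      # should be near the prototypes
```

The hidden units here play the role the paper assigns to the multisensory population: a latent layer whose weights are shaped purely by modeling the distribution of the observed (unisensory) activity.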
Affiliation(s)
- Joseph G Makin
- Department of Physiology and the Center for Integrative Neuroscience, University of California San Francisco, San Francisco, California, USA.
15
Yu J, Tang H, Li H. Dynamics analysis of a population decoding model. IEEE Trans Neural Netw Learn Syst 2013; 24:498-503. [PMID: 24808321 DOI: 10.1109/tnnls.2012.2236684] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Information processing in the nervous system involves the activity of large populations of neurons. It is difficult to extract information from these population codes because of the noise inherent in neuronal responses. We propose a divisive normalization model to read the population codes. The dynamics of the model are analyzed by continuous attractor theory. Under certain conditions, the model possesses continuous attractors. Moreover, the explicit expressions of the continuous attractors are provided. Simulations are employed to illustrate the theory.
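A minimal sketch of reading a noisy population code with divisive-normalization recurrent dynamics: Gaussian recurrent excitation divided by the summed population activity smooths the noisy response into a bump, whose position is then read out. Tuning widths and constants are arbitrary illustrations, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Neurons tile a circular feature space with Gaussian tuning curves.
N = 128
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

def tuning(center, width=0.5):
    d = np.angle(np.exp(1j * (theta - center)))  # wrapped distance
    return np.exp(-d**2 / (2 * width**2))

stim = 0.3
r = tuning(stim) + 0.2 * rng.standard_normal(N)  # noisy population response

# Recurrent dynamics: Gaussian excitation, divisive global inhibition.
J = np.stack([tuning(t, width=0.4) for t in theta])
u = r.copy()
for _ in range(50):
    act = np.maximum(u, 0)
    u = (J @ act) / (1.0 + 0.05 * act.sum())     # divisive normalization

# Read out the bump position with a population vector.
estimate = np.angle(np.sum(np.maximum(u, 0) * np.exp(1j * theta)))
assert abs(estimate - stim) < 0.15
```

The iteration converges to a smooth activity bump (a point on the continuous attractor), and the decoded position recovers the stimulus despite the injected noise.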
17
Kamimura A, Kobayashi TJ. Information processing and integration with intracellular dynamics near critical point. Front Physiol 2012; 3:203. [PMID: 22707939 PMCID: PMC3374347 DOI: 10.3389/fphys.2012.00203] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2012] [Accepted: 05/23/2012] [Indexed: 11/13/2022] Open
Abstract
Recent experimental observations suggest that cells can show relatively precise and reliable responses to external signals even though substantial noise is inevitably involved in those signals. An intriguing question is how cells manage to do this. One possibility is for a cell to evolutionarily develop and optimize its intracellular signaling pathways so as to extract relevant information from the noisy signal. We recently demonstrated that certain intracellular signaling reactions can conduct statistically optimal information processing. In this paper, we show that such an optimal reaction operates near a bifurcation point. This result suggests that critical-like phenomena at the single-cell level may be linked to efficient information processing inside a cell. Improving single-cell response performance, however, is not the only way for cells to achieve reliable responses. Another possible strategy is to integrate the information of individual cells through cell-to-cell interactions such as quorum sensing. Since cell-to-cell interaction is a common phenomenon, it is equally important to investigate how cells can integrate their information in this way to realize efficient information processing at the population level. In this paper, we consider the roles and benefits of cell-to-cell interaction from the viewpoint of information processing, by considering how the information obtained by individual cells is integrated with that of other cells. We also demonstrate that, when cell movement is introduced, spatial organization can emerge spontaneously as a result of the population's efficient responses to external signals.
Affiliation(s)
- Atsushi Kamimura
- Institute of Industrial Science, The University of Tokyo Tokyo, Japan
18
Abstract
Descending feedback connections, together with ascending feedforward ones, are the indispensable parts of the sensory pathways in the central nervous system. This study investigates the potential roles of feedback interactions in neural information processing. We consider a two-layer continuous attractor neural network (CANN), in which neurons in the first layer receive feedback inputs from those in the second one. By utilizing the intrinsic property of a CANN, we use a projection method to reduce the dimensionality of the network dynamics significantly. The simplified dynamics allows us to elucidate the effects of feedback modulation analytically. We find that positive feedback enhances the stability of the network state, leading to an improved population decoding performance, whereas negative feedback increases the mobility of the network state, inducing spontaneously moving bumps. For strong, negative feedback interaction, the network response to a moving stimulus can lead the actual stimulus position, achieving an anticipative behavior. The biological implications of these findings are discussed. The simulation results agree well with our theoretical analysis.
Affiliation(s)
- Wenhao Zhang
- Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China.
19
Fung CCA, Wong KYM, Wang H, Wu S. Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility. Neural Comput 2012; 24:1147-85. [PMID: 22295986 DOI: 10.1162/neco_a_00269] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated into an active state decays to a silent state very slowly, on the timescale of STD rather than that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
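The two forms of short-term plasticity can be illustrated with a Tsodyks-Markram-style dynamical synapse, a standard phenomenological model used here as a generic sketch (the time constants are illustrative, not the paper's): x tracks available resources (depression), u the release fraction (facilitation), and each spike transmits with efficacy u*x.

```python
import numpy as np

# Tsodyks-Markram-style dynamical synapse: between spikes, resources x
# recover toward 1 and facilitation u decays toward the baseline U; at
# each spike, u is facilitated and a fraction u*x of resources is used.
def run_synapse(spike_times, tau_d, tau_f, U):
    u, x, t_prev = U, 1.0, 0.0
    efficacies = []
    for t in spike_times:
        dt = t - t_prev
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resources recover
        u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays
        u = u + U * (1.0 - u)                      # spike: facilitate...
        efficacies.append(u * x)                   # ...and release u*x
        x = x * (1.0 - u)
        t_prev = t
    return efficacies

spikes = [0.05 * k for k in range(1, 11)]  # a regular 20 Hz train
dep = run_synapse(spikes, tau_d=0.5, tau_f=0.05, U=0.5)   # STD-dominated
fac = run_synapse(spikes, tau_d=0.05, tau_f=1.0, U=0.05)  # STF-dominated
assert dep[-1] < dep[0]  # a depressing synapse weakens over the train
assert fac[-1] > fac[0]  # a facilitating synapse strengthens
```

The opposite signs of the two trends are the raw material for the opposite network-level effects the abstract describes: depression destabilizes (mobilizes) persistent states, while facilitation holds a stabilizing memory trace of the input.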
Affiliation(s)
- C C Alan Fung
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China.
20
How each movement changes the next: an experimental and theoretical study of fast adaptive priors in reaching. J Neurosci 2011; 31:10050-9. [PMID: 21734297 DOI: 10.1523/jneurosci.6525-10.2011] [Citation(s) in RCA: 154] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Most voluntary actions rely on neural circuits that map sensory cues onto appropriate motor responses. One might expect that for everyday movements, like reaching, this mapping would remain stable over time, at least in the absence of error feedback. Here we describe a simple and novel psychophysical phenomenon in which recent experience shapes the statistical properties of reaching, independent of any movement errors. Specifically, when recent movements are made to targets near a particular location, subsequent movements to that location become less variable, but at the cost of increased bias for reaches to other targets. This process exhibits the variance-bias tradeoff that is a hallmark of Bayesian estimation. We provide evidence that this process reflects a fast, trial-by-trial learning of the prior distribution of targets. We also show that these results may reflect an emergent property of associative learning in neural circuits. We demonstrate that adding Hebbian (associative) learning to a model network for reach planning leads to a continuous modification of network connections that biases network dynamics toward activity patterns associated with recent inputs. This learning process quantitatively captures the key results of our experimental data in human subjects, including the effect that recent experience has on the variance-bias tradeoff. This network also provides a good approximation of a normative Bayesian estimator. These observations illustrate how associative learning can incorporate recent experience into ongoing computations in a statistically principled way.
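The variance-bias tradeoff of a fast, trial-by-trial prior can be sketched in a few lines: the prior mean drifts toward recent targets, and each estimate shrinks the noisy observation toward it with the standard Bayesian weight. All constants below are illustrative, not fit to the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# A fast trial-by-trial prior: an exponential running average of recent
# targets, updated Hebbian-style toward each new input.
def learned_prior(targets, lr=0.2):
    m = 0.0
    for t in targets:
        m += lr * (t - m)
    return m

prior_mean = learned_prior([5.0] * 50)    # recent reaches cluster at +5
prior_var, obs_var = 4.0, 1.0
w = prior_var / (prior_var + obs_var)     # posterior weight on observation

# Probe reaches to a target at 0, seen through sensory noise.
probe_obs = 0.0 + rng.standard_normal(1000)
probe_est = w * probe_obs + (1 - w) * prior_mean

# Shrinkage toward the learned prior reduces variability but biases the
# probe reaches toward recent targets: the variance-bias tradeoff.
assert probe_est.std() < probe_obs.std()
assert probe_est.mean() > 0.5
```

The same two assertions capture the paper's behavioral signature: reduced variability near recently visited locations, purchased with a systematic bias for reaches elsewhere.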
21
Moazzezi R, Dayan P. Change-Based Inference in Attractor Nets: Linear Analysis. Neural Comput 2010; 22:3036-61. [DOI: 10.1162/neco_a_00051] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpretation is hard to reconcile with the observation that the firing rates of such neurons constantly change following presentation of stimuli. We have recently suggested an alternative interpretation according to which computations are realized by systematic changes in the states of such networks over time. This way of performing computations is fast, accurate, readily learnable, and robust to various forms of noise. Here we analyze the computation of stimulus discrimination in this change-based setting, relating it directly to the computation of stimulus estimation in the conventional attractor-based view. We use a common linear approximation to compare the two methods and show that perfect performance at estimation implies chance performance at discrimination.
Affiliation(s)
- Reza Moazzezi
- Gatsby Computational Neuroscience Unit, UCL, London, WC1N 3AR, U.K
- Peter Dayan
- Gatsby Computational Neuroscience Unit, UCL, London, WC1N 3AR, U.K
22
Yu J, Yi Z, Zhou J. Continuous attractors of Lotka-Volterra recurrent neural networks with infinite neurons. IEEE Trans Neural Netw 2010; 21:1690-1695. [PMID: 20813637 DOI: 10.1109/tnn.2010.2067224] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Continuous attractors of Lotka-Volterra recurrent neural networks (LV RNNs) with infinite neurons are studied in this brief. A continuous attractor is a collection of connected equilibria, and it has been recognized as a suitable model for describing the encoding of continuous stimuli in neural networks. The existence of continuous attractors depends on many factors, such as the connectivity and the external inputs of the network. A continuous attractor can be stable or unstable. It is shown in this brief that an LV RNN can possess multiple continuous attractors if the synaptic connections and the external inputs are Gaussian-like in shape. Moreover, both stable and unstable continuous attractors can coexist in a network. Explicit expressions of the continuous attractors are calculated. Simulations are employed to illustrate the theory.
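A deliberately tiny Lotka-Volterra example shows what a continuous attractor is: for the symmetric weight matrix below, every positive state with x1 = x2 is an equilibrium of the LV dynamics, so the equilibria form a connected line. This 2-neuron sketch is an illustration, not the brief's general infinite-neuron construction.

```python
import numpy as np

# Lotka-Volterra recurrent network: dx_i/dt = x_i * (-x_i + sum_j W_ij x_j).
# With this W, any positive state on the line x1 = x2 is an equilibrium.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def simulate_lv(x0, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * x * (-x + W @ x)   # Euler integration of LV dynamics
    return x

a = simulate_lv([2.0, 0.5])   # settles somewhere on the line x1 = x2
b = simulate_lv([0.2, 0.2])   # already on the line: stays put
assert abs(a[0] - a[1]) < 1e-3
assert abs(b[0] - b[1]) < 1e-12
assert np.linalg.norm(a - b) > 0.5   # two distinct coexisting equilibria
```

Different initial conditions settle onto different points of the line, which is exactly the "collection of connected equilibria" the abstract defines.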
Affiliation(s)
- Jiali Yu
- Institute for Infocomm Research, Agency for Science Technology and Research, 138632, Singapore.
23
Computational neuroscience in China. Sci China Life Sci 2010; 53:385-397. [DOI: 10.1007/s11427-010-0063-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/30/2009] [Accepted: 01/19/2010] [Indexed: 10/19/2022]
24
Fung CCA, Wong KYM, Wu S. A Moving Bump in a Continuous Manifold: A Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks. Neural Comput 2010; 22:752-92. [DOI: 10.1162/neco.2009.07-08-824] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Understanding how the dynamics of a neural network is shaped by the network structure and, consequently, how the network structure facilitates the functions implemented by the neural system is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. They form a continuous manifold in which the neural system is neutrally stable. We systematically explore how this property facilitates the tracking performance of a CANN, which is believed to have clear correspondence with brain functions. By using the wave functions of the quantum harmonic oscillator as the basis, we demonstrate how the dynamics of a CANN is decomposed into different motion modes, corresponding to distortions in the amplitude, position, width, or skewness of the network state. We then develop a perturbation approach that utilizes the dominating movement of the network's stationary states in the state space. This method allows us to approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. We quantify the distortions of a Gaussian bump during tracking and study their effects on tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable and the reaction time for the network to catch up with an abrupt change in the stimulus.
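The tracking behavior studied analytically in the paper can be seen numerically in a minimal ring CANN with divisive global inhibition, driven by a slowly moving external bump. The parameters below are generic illustrations in the spirit of this model class, not the paper's exact values.

```python
import numpy as np

# Minimal ring CANN: du/dt = -u + dx * J @ r + I, with divisive global
# inhibition r = u_+^2 / (1 + k * dx * sum(u_+^2)).
N = 100
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
a = 0.5                                   # tuning width

def wrap(d):
    return np.angle(np.exp(1j * d))       # wrapped distance on the ring

J = np.exp(-wrap(x[:, None] - x[None, :])**2 / (2 * a**2)) / (np.sqrt(2*np.pi) * a)

def step(u, stim_pos, dt=0.05, k=0.5, A=1.0):
    r = np.maximum(u, 0)**2
    r = r / (1.0 + k * dx * r.sum())      # divisive global inhibition
    I = A * np.exp(-wrap(x - stim_pos)**2 / (4 * a**2))
    return u + dt * (-u + dx * (J @ r) + I)

def bump_pos(u):
    return np.angle(np.sum(np.maximum(u, 0) * np.exp(1j * x)))

u = np.exp(-x**2 / (2 * a**2))            # bump initialized at 0
for i in range(400):
    stim = 0.002 * i                      # stimulus drifts to ~0.8 rad
    u = step(u, stim)

assert abs(bump_pos(u) - stim) < 0.2      # the bump tracks the stimulus
```

Because the manifold of bump states is neutrally stable, the bump slides along it to follow the stimulus; a faster drift rate would expose the lag and the maximum trackable speed the paper derives.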
Affiliation(s)
- C. C. Alan Fung
- Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
- K. Y. Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
- Si Wu
- Department of Informatics, University of Sussex, Brighton BN1 9QH, U.K. and Lab of Neural Information Processing, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
25
Pennartz CM. Identification and integration of sensory modalities: Neural basis and relation to consciousness. Conscious Cogn 2009; 18:718-39. [DOI: 10.1016/j.concog.2009.03.003] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2008] [Revised: 03/11/2009] [Accepted: 03/16/2009] [Indexed: 12/01/2022]
26
van Duuren E, van der Plasse G, Lankelma J, Joosten RNJMA, Feenstra MGP, Pennartz CMA. Single-cell and population coding of expected reward probability in the orbitofrontal cortex of the rat. J Neurosci 2009; 29:8965-76. [PMID: 19605634 PMCID: PMC6665423 DOI: 10.1523/jneurosci.0005-09.2009] [Citation(s) in RCA: 60] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2009] [Revised: 04/29/2009] [Accepted: 05/19/2009] [Indexed: 11/21/2022] Open
Abstract
The orbitofrontal cortex (OFC) has been implicated in decision-making under uncertainty, but it is unknown how information about the probability or uncertainty of future reward is coded by single orbitofrontal neurons and ensembles. We recorded neuronal ensembles in rat OFC during an olfactory discrimination task in which different odor stimuli predicted different reward probabilities. Single-unit firing patterns correlated to the expected reward probability primarily within an immobile waiting period before reward delivery but also when the rat executed movements toward the reward site. During these pre-reward periods, a subset of OFC neurons was sensitive to differences in probability but only very rarely discriminated on the basis of reward uncertainty. In the reward period, neurons responded during presentation or omission of reward or during both types of outcome. At the population level, neurons were characterized by a wide divergence in firing-rate variability attributable to expected probability. A population analysis using template matching as reconstruction method indicated that OFC generates a distributed representation of reward probability with a weak dependence on neuronal group size. The analysis furthermore confirmed that predictive information coded by OFC populations was quantitatively related to reward probability, but not to uncertainty.
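Template matching, one of the reconstruction methods the study uses, is simple to sketch: condition templates are trial-averaged population responses, and a test trial is assigned to the nearest template. The synthetic "reward probability" tuning below is invented for illustration, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons = 30
conditions = [0.25, 0.5, 0.75]             # three reward probabilities
gains = rng.uniform(0.5, 2.0, n_neurons)   # firing scales with probability

def population_response(p):
    # Noisy single-trial response of the whole population to condition p.
    return gains * p + 0.3 * rng.standard_normal(n_neurons)

# Templates: trial-averaged responses per condition (the "training" data).
templates = {p: np.mean([population_response(p) for _ in range(40)], axis=0)
             for p in conditions}

def decode(r):
    # Assign the trial to the condition with the nearest template.
    return min(conditions, key=lambda p: np.linalg.norm(r - templates[p]))

accuracy = np.mean([decode(population_response(p)) == p
                    for p in conditions for _ in range(50)])
assert accuracy > 1/3   # above the 33.3% chance level for three conditions
```

Leaving subsets of neurons out of the decoder is the natural next step for probing the weak dependence on neuronal group size that the abstract reports.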
Affiliation(s)
- Esther van Duuren
- Swammerdam Institute for Life Sciences, Center for Neuroscience, University of Amsterdam, 1098 SM Amsterdam, The Netherlands
27
Yu J, Yi Z, Zhang L. Representations of continuous attractors of recurrent neural networks. IEEE Trans Neural Netw 2009; 20:368-72. [PMID: 19150791 DOI: 10.1109/tnn.2008.2010771] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A continuous attractor of a recurrent neural network (RNN) is a set of connected stable equilibrium points. Continuous attractors have been used to describe the encoding of continuous stimuli in neural networks, and their dynamic behaviors exhibit interesting properties. This brief derives explicit representations of continuous attractors of RNNs. Representations of continuous attractors of linear RNNs as well as linear-threshold (LT) RNNs are obtained under some conditions. These representations can be viewed as solutions for the continuous attractors of the networks. Such results provide clear and complete descriptions of the continuous attractors.
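For the linear case, the explicit representation is easy to see in a toy 2-neuron network: dx/dt = -x + Wx has a continuous attractor whenever W has a unit eigenvalue, because the whole eigenspace of eigenvalue 1 consists of equilibria. This sketch is an illustration of that idea, not the brief's general result.

```python
import numpy as np

# W has eigenvalues 1 and -1; the eigenvector of eigenvalue 1 is (1, 1),
# so the line spanned by (1, 1) is a continuum of equilibria of
# dx/dt = -x + Wx, and the orthogonal component decays.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def simulate(x0, dt=0.01, steps=2000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (W @ x - x)   # Euler integration of dx/dt = (W - I) x
    return x

# Each trajectory converges to the projection of its initial condition
# onto the attractor line: a different equilibrium for each start.
a = simulate([2.0, 0.0])
b = simulate([0.0, -1.0])
assert np.allclose(a, [1.0, 1.0], atol=1e-3)
assert np.allclose(b, [-0.5, -0.5], atol=1e-3)
```

Here the "explicit representation" of the attractor is simply the span of the unit-eigenvalue eigenvector, which is the flavor of closed-form description the brief derives for broader network classes.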
Affiliation(s)
- Jiali Yu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China.
28
van Duuren E, Lankelma J, Pennartz CMA. Population coding of reward magnitude in the orbitofrontal cortex of the rat. J Neurosci 2008; 28:8590-603. [PMID: 18716218 PMCID: PMC6671050 DOI: 10.1523/jneurosci.5549-07.2008] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2007] [Revised: 07/03/2008] [Accepted: 07/09/2008] [Indexed: 11/21/2022] Open
Abstract
Although single-cell coding of reward-related information in the orbitofrontal cortex (OFC) has been characterized to some extent, much less is known about the coding properties of orbitofrontal ensembles. We examined population coding of reward magnitude by performing ensemble recordings in rat OFC while animals learned an olfactory discrimination task in which various reinforcers were associated with predictive odor stimuli. Ensemble activity was found to represent information about reward magnitude during several trial phases, namely when animals moved to the reward site, anticipated reward during an immobile period, and received it. During the anticipation phase, Bayesian and template-matching reconstruction algorithms decoded reward size correctly from the population activity significantly above chance level (highest value of 43 and 48%, respectively; chance level, 33.3%), whereas decoding performance for the reward delivery phase was 76 and 79%, respectively. In the anticipation phase, the decoding score was only weakly dependent on the size of the neuronal group participating in reconstruction, consistent with a redundant, distributed representation of reward information. In contrast, decoding was specific for temporal segments within the structure of a trial. Decoding performance steeply increased across the first few trials for every rewarded odor, an effect that could not be explained by a nonspecific drift in response strength across trials. Finally, when population responses to a negative reinforcer (quinine) were compared with sucrose reinforcement, coding in the delivery phase appeared to be related to reward quality, and thus was not based on ingested liquid volume.
Affiliation(s)
- Esther van Duuren
- Cognitive and Systems Neuroscience Group, Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, 1098 SM Amsterdam, The Netherlands
- Jan Lankelma
- Cognitive and Systems Neuroscience Group, Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, 1098 SM Amsterdam, The Netherlands
- Cyriel M. A. Pennartz
- Cognitive and Systems Neuroscience Group, Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, 1098 SM Amsterdam, The Netherlands
29
Abstract
Continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space, on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since these stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and the computational properties of continuous attractors. In order to analyze the dynamics of a large-size network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality by utilizing the fact that a continuous attractor can eliminate the noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and simplify it successfully as a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor under the driving of external noisy inputs, (2) the tracking speed of a continuous attractor when external stimulus experiences abrupt changes, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequence of asymmetric neural correlation on statistical population decoding. The potential implications of these results on our understanding of neural information processing are also discussed.
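The payoff of the projection is that the bump position reduces to a one-dimensional Ornstein-Uhlenbeck process pulled toward the stimulus, whose stationary variance has a closed form. The constants below are arbitrary illustrations; the point is that a simulated position variance matches the OU prediction sigma^2 / (2 * lam).

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-Maruyama simulation of the projected bump-position dynamics:
# dz = -lam * (z - z0) dt + sigma dW  (an Ornstein-Uhlenbeck process).
lam, sigma, z0, dt = 1.0, 0.5, 0.0, 0.01
z, samples = 0.0, []
for i in range(100_000):
    z += -lam * (z - z0) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if i > 1_000:                 # discard the initial transient
        samples.append(z)

# Stationary decoding error predicted by the OU reduction.
empirical_var = np.var(samples)
assert abs(empirical_var - sigma**2 / (2 * lam)) < 0.02
```

In the full network, sigma and lam would be derived from the noise level and the restoring force of the bump, so this one-line variance formula is the "decoding error" item (1) of the abstract in miniature.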
Affiliation(s)
- Si Wu
- Department of Informatics, University of Sussex, Brighton BN1 9QH, U.K.
30
Moazzezi R, Dayan P. Change-based inference for invariant discrimination. Network 2008; 19:236-252. [PMID: 18946838 DOI: 10.1080/09548980802314917] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Under a conventional view of information processing in recurrently connected populations of neurons, computations consist in mapping inputs onto terminal attractor states of the dynamical interactions. However, there is evidence that substantial information representation and processing can occur over the course of the initial evolution of the dynamical states of such populations, a possibility that has attractive computational properties. Here, we suggest a model that explores one such property, namely, the invariance to an irrelevant feature dimension that arises from monitoring not the state of the population, but rather (a statistic of) the change in this state over time. We illustrate our proposal in the context of the bisection task, a paradigmatic example of perceptual learning for which an attractor-state recurrent model has previously been suggested. We show a change-based inference scheme that achieves near optimal performance in the task (with invariance to translation), is robust to high levels of dynamical noise and variations of the synaptic weight matrix, and indeed admits a computationally straightforward learning rule.
Affiliation(s)
- Reza Moazzezi
- Gatsby Computational Neuroscience Unit, Alexandra House, 17 Queen Square, London, WC1N 3AR, UK.