1
Ye Z, Li H, Tian L, Zhou C. The effects of the post-delay epochs on working memory error reduction. PLoS Comput Biol 2025; 21:e1013083. [PMID: 40359421] [DOI: 10.1371/journal.pcbi.1013083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/24/2024] [Accepted: 04/22/2025] [Indexed: 05/15/2025]
Abstract
Accurate retrieval of the maintained information is crucial for working memory. This process primarily occurs during post-delay epochs, when subjects receive cues and generate responses. However, the computational and neural mechanisms that underlie these post-delay epochs to support robust memory remain poorly understood. To address this, we trained recurrent neural networks (RNNs) on a color delayed-response task, where certain colors (referred to as common colors) were more frequently presented for memorization. We found that the trained RNNs reduced memory errors for common colors by decoding a broader range of neural states into these colors through the post-delay epochs. This decoding process was driven by convergent neural dynamics and a non-dynamic, biased readout process during the post-delay epochs. Our findings highlight the importance of post-delay epochs in working memory and suggest that neural systems adapt to environmental statistics by using multiple mechanisms across task epochs.
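The task setup described in this abstract is straightforward to sketch. Below is a minimal, hypothetical PyTorch illustration of training an RNN on a color delayed-response task in which some color angles are presented more frequently; the network size, epoch durations, common-color choices, and loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed parameters): a vanilla RNN on a color
# delayed-response task where four "common" colors appear more often.
N_REC = 64
T_STIM, T_DELAY, T_RESP = 10, 30, 10
T = T_STIM + T_DELAY + T_RESP

def sample_colors(batch, p_common=0.7):
    """Draw color angles; common values are over-represented."""
    common = torch.tensor([0.0, 0.5, 1.0, 1.5]) * torch.pi
    angles = torch.rand(batch) * 2 * torch.pi
    mask = torch.rand(batch) < p_common
    angles[mask] = common[torch.randint(0, 4, (int(mask.sum()),))]
    return angles

def make_trials(angles):
    """Inputs carry (cos, sin) of the color only during the stimulus epoch;
    targets ask for the same (cos, sin) during the response epoch."""
    batch = angles.shape[0]
    feat = torch.stack([torch.cos(angles), torch.sin(angles)], dim=-1)
    x = torch.zeros(T, batch, 2)
    y = torch.zeros(T, batch, 2)
    x[:T_STIM] = feat
    y[-T_RESP:] = feat
    return x, y

rnn = nn.RNN(input_size=2, hidden_size=N_REC)
readout = nn.Linear(N_REC, 2)
opt = torch.optim.Adam([*rnn.parameters(), *readout.parameters()], lr=1e-3)

for step in range(2000):
    x, y = make_trials(sample_colors(batch=128))
    h, _ = rnn(x)
    out = readout(h)
    # Error is penalized only in the post-delay (response) epoch.
    loss = ((out[-T_RESP:] - y[-T_RESP:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, decoding delay-end hidden states across many trials would show whether a broader range of states maps onto the common colors, which is the kind of effect the abstract describes.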
Affiliation(s)
- Zeyuan Ye
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Hong Kong Baptist University, Hong Kong, China
- Institute of Transdisciplinary Studies, Hong Kong Baptist University, Hong Kong, China
- Department of Physics, Washington University in St. Louis, St. Louis, Missouri, United States of America
- Haoran Li
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Liang Tian
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
- Institute of Systems Medicine and Health Sciences, Hong Kong Baptist University, Hong Kong, China
- Changsong Zhou
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Hong Kong Baptist University, Hong Kong, China
- Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
- Life Science Imaging Centre, Hong Kong Baptist University, Hong Kong, China
2
Wu S, Huang H, Wang S, Chen G, Zhou C, Yang D. Neural heterogeneity enhances reliable neural information processing: Local sensitivity and globally input-slaved transient dynamics. Sci Adv 2025; 11:eadr3903. [PMID: 40173217] [PMCID: PMC11963962] [DOI: 10.1126/sciadv.adr3903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/30/2024] [Accepted: 02/26/2025] [Indexed: 04/04/2025]
Abstract
Cortical neuronal activity varies over time and across repeated trials, yet consistently represents stimulus features. The dynamical mechanism underlying this reliable representation and computation remains elusive. This study uncovers a mechanism for reliable neural information processing, leveraging a biologically plausible network model incorporating neural heterogeneity. First, we investigate neuronal timescale diversity, revealing that it disrupts intrinsic coherent spatiotemporal patterns, induces firing rate heterogeneity, enhances local responsive sensitivity, and aligns network activity closely with input. The system exhibits globally input-slaved transient dynamics, essential for reliable neural information processing. Other neural heterogeneities, such as nonuniform input connections, spike threshold heterogeneity, and network in-degree heterogeneity, play similar roles, highlighting the importance of neural heterogeneity in shaping consistent stimulus representation. This mechanism offers a potentially general framework for understanding neural heterogeneity in reliable computation and informs the design of reservoir computing models endowed with liquid wave reservoirs for neuromorphic computing.
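The key manipulation, diversifying single-neuron timescales inside an otherwise fixed recurrent network, is compact enough to sketch. Below is a toy rate-network illustration under assumed parameters; the paper uses a biologically detailed spiking model, and whether this toy reproduces the effect depends on the chosen gain and input strength.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 4000, 0.1
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # strong random recurrence
inp = 0.5 * rng.normal(0, 1, (T, N))          # shared time-varying input

tau_hom = np.full(N, 10.0)                    # identical timescales
tau_het = rng.uniform(2.0, 20.0, N)           # diverse timescales

def final_state(tau, x0):
    """Integrate tau_i dx_i/dt = -x_i + sum_j J_ij tanh(x_j) + input_i(t)."""
    x = x0.copy()
    for t in range(T):
        x += dt / tau * (-x + J @ np.tanh(x) + inp[t])
    return x

# Input-slaved dynamics: trajectories launched from different initial
# conditions but driven by the same input should converge, giving a small
# final-state distance (a proxy for trial-to-trial reliability).
for name, tau in [("homogeneous", tau_hom), ("heterogeneous", tau_het)]:
    xa = final_state(tau, rng.normal(0, 1, N))
    xb = final_state(tau, rng.normal(0, 1, N))
    print(name, "final state distance:", np.linalg.norm(xa - xb))
```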
Affiliation(s)
- Shengdun Wu
- Research Centre for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou 311100, China
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
- Shengjun Wang
- Department of Physics, Shaanxi Normal University, Xi’an 710119, China
- Guozhang Chen
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China
- Changsong Zhou
- Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong, China
- Dongping Yang
- Research Centre for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou 311100, China
3
Xu M, Liu F, Hu Y, Li H, Wei Y, Zhong S, Pei J, Deng L. Adaptive Synaptic Scaling in Spiking Networks for Continual Learning and Enhanced Robustness. IEEE Trans Neural Netw Learn Syst 2025; 36:5151-5165. [PMID: 38536699] [DOI: 10.1109/tnnls.2024.3373599] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Indexed: 03/06/2025]
Abstract
Synaptic plasticity plays a critical role in the expressive power of brain neural networks. Among diverse plasticity rules, synaptic scaling has indispensable effects on homeostasis maintenance and synaptic strength regulation. In current modeling of brain-inspired spiking neural networks (SNNs), backpropagation through time is widely adopted because it can achieve high performance using a small number of time steps. Nevertheless, the synaptic scaling mechanism has not yet been well explored in this setting. In this work, we propose an experience-dependent adaptive synaptic scaling mechanism (AS-SNN) for spiking neural networks. The learning process has two stages: first, in the forward path, adaptive short-term potentiation or depression is triggered for each synapse according to the afferent stimulus intensity accumulated from presynaptic historical neural activities. Second, in the backward path, long-term consolidation is executed through gradient signals regulated by the corresponding scaling factor. This mechanism shapes the pattern selectivity of synapses and the information transfer they mediate. We theoretically prove that the proposed adaptive synaptic scaling function follows a contraction map and converges to an expected fixed point, and we demonstrate state-of-the-art results in three tasks: perturbation resistance, continual learning, and graph learning. Specifically, for the perturbation resistance and continual learning tasks, our approach improves accuracy on the N-MNIST benchmark over the baseline by 44% and 25%, respectively. An expected firing-rate callback and sparse coding can be observed in graph learning. Extensive ablation studies and cost evaluations evidence the effectiveness and efficiency of our nonparametric adaptive scaling method, demonstrating the great potential of SNNs for continual and robust learning.
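The contraction-map argument can be made concrete with a toy update rule. This is a generic illustration of why such a scaling iteration converges, under an assumed linear form; it is not the paper's actual scaling function.

```python
import numpy as np

def scaling_step(s, activity, eta=0.5, target=1.0):
    """Toy synaptic-scaling update: move the scale factor s toward
    target / activity. The map s -> (1 - eta) * s + eta * (target / activity)
    has Lipschitz constant |1 - eta| < 1, i.e., it is a contraction."""
    return (1 - eta) * s + eta * (target / activity)

activity = 2.0                 # accumulated presynaptic activity (held fixed)
for s0 in [0.01, 1.0, 10.0]:   # any initialization converges...
    s = s0
    for _ in range(40):
        s = scaling_step(s, activity)
    print(s0, "->", round(s, 6))   # ...to the unique fixed point 0.5
```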
4
Wojtak W, Coombes S, Avitabile D, Bicho E, Erlhagen W. Robust working memory in a two-dimensional continuous attractor network. Cogn Neurodyn 2024; 18:3273-3289. [PMID: 39712130] [PMCID: PMC11655900] [DOI: 10.1007/s11571-023-09979-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 01/25/2023] [Revised: 04/06/2023] [Accepted: 05/01/2023] [Indexed: 12/24/2024]
Abstract
Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items, and the recurrent connections within the network require a biologically unrealistic level of fine tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type. It combines the lateral-inhibition-type connectivity of classical CANs with a locally balanced excitatory and inhibitory feedback loop. We first use a radially symmetric connectivity to analyze the existence, stability, and bifurcation structure of 2D bumps representing the conjunctive WM of two input dimensions. To address the quality of WM content, we show in model simulations that the bump amplitude reflects the temporal integration of bottom-up and top-down evidence for a specific combination of input features. This includes the network's capacity to transform a stable subthreshold memory trace of a weak input into a high-fidelity memory representation by an unspecific cue given retrospectively during WM maintenance. To address the fine-tuning problem, we numerically test different perturbations of the assumed radial symmetry of the connectivity function, including random spatial fluctuations in the connection strength. Unlike in standard CAN models, the bump does not drift in representational space but remains stationary at the input position.
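For reference, an Amari-type neural field over a 2D domain Ω takes the standard form (generic notation, not the paper's exact parameterization):

\tau\,\partial_t u(\mathbf{r},t) = -u(\mathbf{r},t) + \int_\Omega w(\lVert\mathbf{r}-\mathbf{r}'\rVert)\, f(u(\mathbf{r}',t))\, d\mathbf{r}' + I(\mathbf{r},t),

where u is the field activity, w a radially symmetric coupling kernel (lateral-inhibition-type in classical CANs), f a firing-rate nonlinearity (a Heaviside step in Amari's original analysis), and I the external input. The model above couples two such equations so as to add the locally balanced excitatory-inhibitory feedback loop.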
Affiliation(s)
- Weronika Wojtak
- Research Centre of Mathematics, University of Minho, Guimarães, Portugal
- Research Centre Algoritmi, University of Minho, Guimarães, Portugal
- Stephen Coombes
- Centre for Mathematical Medicine and Biology, School of Mathematical Sciences, University of Nottingham, Nottingham, UK
- Daniele Avitabile
- Department of Mathematics, Vrije Universiteit, Amsterdam, The Netherlands
- MathNeuro Team, Inria Sophia Antipolis Méditerranée Research Centre, Sophia Antipolis, France
- Estela Bicho
- Research Centre Algoritmi, University of Minho, Guimarães, Portugal
- Wolfram Erlhagen
- Research Centre of Mathematics, University of Minho, Guimarães, Portugal
5
Schönsberg F, Monasson R, Treves A. Continuous Quasi-Attractors dissolve with too much - or too little - variability. PNAS Nexus 2024; 3:pgae525. [PMID: 39670259] [PMCID: PMC11635835] [DOI: 10.1093/pnasnexus/pgae525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/31/2024] [Accepted: 11/12/2024] [Indexed: 12/14/2024]
Abstract
Recent research involving bats flying in long tunnels has confirmed that hippocampal place cells can be active at multiple locations, with considerable variability in place field size and peak rate. With self-organizing recurrent networks, variability implies inhomogeneity in the synaptic weights, impeding the establishment of a continuous manifold of fixed points. Are continuous attractor neural networks still valid models for understanding spatial memory in the hippocampus, given such variability? Here, we ask what the noise limits are, in terms of an experimentally inspired parametrization of the irregularity of a single map, beyond which the notion of a continuous attractor is no longer relevant. Through numerical simulations we show that (i) a continuous attractor can be approximated even when neural dynamics ultimately converge onto very few fixed points, since a quasi-attractive continuous manifold supports dynamically localized activity; (ii) excess irregularity in field size, however, disrupts the continuity of the manifold, while too little irregularity, with multiple fields, surprisingly prevents localized activity; and (iii) the boundaries in parameter space among these three regimes, extracted from simulations, are well matched by analytical estimates. These results lead to the prediction that there will be a maximum size of 1D environment that can be retained in memory, and that replay of spatial activity during sleep or quiet wakefulness will cover only short segments of the environment.
Affiliation(s)
- Francesca Schönsberg
- Laboratory of Physics of the Ecole Normale Supérieure, PSL and CNRS UMR8023, Sorbonne Université, Paris 75005, France
- SISSA, Scuola Internazionale Superiore di Studi Avanzati, Cognitive Neuroscience, Trieste 34136, Italy
- Rémi Monasson
- Laboratory of Physics of the Ecole Normale Supérieure, PSL and CNRS UMR8023, Sorbonne Université, Paris 75005, France
- Alessandro Treves
- SISSA, Scuola Internazionale Superiore di Studi Avanzati, Cognitive Neuroscience, Trieste 34136, Italy
- DSyNC lab, University of Agder, Kristiansand 4604, Norway
6
Noorman M, Hulse BK, Jayaraman V, Romani S, Hermundstad AM. Maintaining and updating accurate internal representations of continuous variables with a handful of neurons. Nat Neurosci 2024; 27:2207-2217. [PMID: 39363052] [PMCID: PMC11537979] [DOI: 10.1038/s41593-024-01766-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/05/2023] [Accepted: 08/14/2024] [Indexed: 10/05/2024]
Abstract
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
Affiliation(s)
- Marcella Noorman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Brad K Hulse
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
7
Yang J, Zhang H, Lim S. Sensory-memory interactions via modular structure explain errors in visual working memory. eLife 2024; 13:RP95160. [PMID: 39388221] [PMCID: PMC11466453] [DOI: 10.7554/elife.95160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/12/2024]
Abstract
Errors in stimulus estimation reveal how stimulus representation changes during cognitive processes. Repulsive bias and minimum variance observed near cardinal axes are well-known error patterns typically associated with visual orientation perception. Recent experiments suggest that these errors continuously evolve during working memory, posing a challenge that neither static sensory models nor traditional memory models can address. Here, we demonstrate that these evolving errors, maintaining characteristic shapes, require network interaction between two distinct modules. Each module fulfills efficient sensory encoding and memory maintenance, which cannot be achieved simultaneously in a single-module network. The sensory module exhibits heterogeneous tuning with strong inhibitory modulation reflecting natural orientation statistics. While the memory module, operating alone, supports homogeneous representation via continuous attractor dynamics, the fully connected network forms discrete attractors with moderate drift speed and nonuniform diffusion processes. Together, our work underscores the significance of sensory-memory interaction in continuously shaping stimulus representation during working memory.
Affiliation(s)
- Jun Yang
- Weiyang College, Tsinghua University, Beijing, China
- Hanqi Zhang
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Shanghai, China
- Neural Science, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Sukbin Lim
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Shanghai, China
- Neural Science, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
8
Chen X, Bialek W. Searching for long timescales without fine tuning. Phys Rev E 2024; 110:034407. [PMID: 39425360] [DOI: 10.1103/physreve.110.034407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/28/2020] [Accepted: 09/03/2024] [Indexed: 10/21/2024]
Abstract
Animal behavior occurs on timescales much longer than the response times of individual neurons. In many cases, it is plausible that these long timescales emerge from the recurrent dynamics of electrical activity in networks of neurons. In linear models, timescales are set by the eigenvalues of a dynamical matrix whose elements measure the strengths of synaptic connections between neurons. It is not clear to what extent these matrix elements need to be tuned to generate long timescales; in some cases, one needs not just a single long timescale but a whole range. Starting from the simplest case of random symmetric connections, we combine maximum entropy and random matrix theory methods to construct ensembles of networks, exploring the constraints required for long timescales to become generic. We argue that a single long timescale can emerge generically from realistic constraints, but a full spectrum of slow modes requires more tuning. Langevin dynamics that generates patterns of synaptic connections drawn from these ensembles involves a combination of Hebbian learning and activity-dependent synaptic scaling.
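The eigenvalue-timescale link invoked here is standard. For linear rate dynamics

\tau\,\dot{x}_i = -x_i + \sum_j M_{ij} x_j,

each eigenmode of the symmetric matrix M with eigenvalue \lambda_k relaxes as e^{-t/\tau_k} with \tau_k = \tau / (1 - \lambda_k), so timescales diverge as eigenvalues approach 1 from below. A single outlier eigenvalue near 1 yields one long timescale, while a whole range of slow modes requires many eigenvalues concentrated near 1, which is the harder tuning problem the abstract refers to.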
Affiliation(s)
- Xiaowen Chen
- Joseph Henry Laboratories of Physics, and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey 08544, USA
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, PSL Université, CNRS, Sorbonne Université, Université Paris Cité, F-75005 Paris, France
- William Bialek
- Joseph Henry Laboratories of Physics, and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey 08544, USA
- Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, 365 Fifth Avenue, New York, New York 10016, USA
9
Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024; 28:614-627. [PMID: 38580528] [DOI: 10.1016/j.tics.2024.02.011] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Received: 12/11/2023] [Revised: 02/29/2024] [Accepted: 02/29/2024] [Indexed: 04/07/2024]
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activities. However, recent evidence shows that neural population activities during WM maintenance undergo dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons for why classical models do not, while task-optimized models naturally do exhibit dynamic coding. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
10
Mahrach A, Bestue D, Qi XL, Constantinidis C, Compte A. Cholinergic neuromodulation of prefrontal attractor dynamics controls performance in spatial working memory. bioRxiv 2024:2024.01.17.576071. [PMID: 38293215] [PMCID: PMC10827212] [DOI: 10.1101/2024.01.17.576071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/01/2024]
Abstract
The behavioral and neural effects of the endogenous release of acetylcholine following stimulation of the Nucleus Basalis of Meynert (NB) have recently been examined (Qi et al. 2021). Counterintuitively, NB stimulation enhanced behavioral performance while broadening neural tuning in the prefrontal cortex (PFC). The mechanism by which a weaker mnemonic neural code could lead to better performance remains unclear. Here, we show that increased neural excitability in a simple continuous bump attractor model can induce broader neural tuning and decrease bump diffusion, provided neural rates are saturated. In the model, the resulting gain in memory precision outweighs changes in memory accuracy, improving overall task performance. Moreover, we show that bump attractor dynamics can account for the nonuniform impact of neuromodulation on distractibility, depending on distractor distance from the target. Finally, we delve into the conditions under which bump attractor tuning and diffusion balance in biologically plausible heterogeneous network models. In these discrete bump attractor networks, we show that reducing spatial correlations or enhancing excitatory transmission can improve memory precision. Altogether, we provide a mechanistic understanding of how cholinergic neuromodulation controls spatial working memory through perturbed attractor dynamics in PFC.
Affiliation(s)
- Alexandre Mahrach
- Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
- David Bestue
- Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
- Xue-Lian Qi
- Wake Forest School of Medicine, Winston Salem, NC 27157, USA
- Albert Compte
- Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
11
Gast R, Solla SA, Kennedy A. Neural heterogeneity controls computations in spiking neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311885121. [PMID: 38198531] [PMCID: PMC10801870] [DOI: 10.1073/pnas.2311885121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/12/2023] [Accepted: 11/27/2023] [Indexed: 01/12/2024]
Abstract
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
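A standard starting point for this kind of mean-field analysis is the exact firing-rate reduction of Montbrió, Pazó, and Roxin for a population of quadratic integrate-and-fire neurons whose excitabilities are Lorentzian-distributed with center \bar{\eta} and half-width \Delta. It is shown here as background; the paper's equations may differ in detail:

\tau\dot{r} = \frac{\Delta}{\pi\tau} + 2 r v, \qquad \tau\dot{v} = v^2 + \bar{\eta} + J\tau r + I(t) - (\pi\tau r)^2,

where r is the population firing rate, v the mean membrane potential, and J the recurrent coupling. The degree of heterogeneity enters only through \Delta, which is what makes such models convenient for asking how spike-threshold dispersion reshapes the gating, encoding, and decoding of signals.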
Affiliation(s)
- Richard Gast
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Aligning Science Across Parkinson’s Collaborative Research Network, Chevy Chase, MD 20815
- Sara A. Solla
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Ann Kennedy
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Aligning Science Across Parkinson’s Collaborative Research Network, Chevy Chase, MD 20815
12
Friedenberger Z, Naud R. Dendritic excitability controls overdispersion. Nat Comput Sci 2024; 4:19-28. [PMID: 38177495] [DOI: 10.1038/s43588-023-00580-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/01/2023] [Accepted: 11/29/2023] [Indexed: 01/06/2024]
Abstract
The brain is an intricate assembly of intercommunicating neurons whose input-output function is only partially understood. The role of active dendrites in shaping spiking responses, in particular, is unclear. Although existing models account for active dendrites and spiking responses, they are too complex to analyze analytically and demand long stochastic simulations. Here we combine cable and renewal theory to describe how input fluctuations shape the response of neuronal ensembles with active dendrites. We found that dendritic input readily and potently controls interspike interval dispersion. This phenomenon can be understood by considering that neurons display three fundamental operating regimes: one mean-driven regime and two fluctuation-driven regimes. We show that these results are expected to appear for a wide range of dendritic properties and verify predictions of the model in experimental data. These findings have implications for the role of interspike interval dispersion in learning and for theories of attractor states.
Affiliation(s)
- Zachary Friedenberger
- Centre for Neural Dynamics and Artificial Intelligence, University of Ottawa, Ottawa, Ontario, Canada
- Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Richard Naud
- Centre for Neural Dynamics and Artificial Intelligence, University of Ottawa, Ottawa, Ontario, Canada
- Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
13
Stroud JP, Watanabe K, Suzuki T, Stokes MG, Lengyel M. Optimal information loading into working memory explains dynamic coding in the prefrontal cortex. Proc Natl Acad Sci U S A 2023; 120:e2307991120. [PMID: 37983510] [PMCID: PMC10691340] [DOI: 10.1073/pnas.2307991120] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Received: 05/15/2023] [Accepted: 09/29/2023] [Indexed: 11/22/2023]
Abstract
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. By mathematical analysis, numerical simulations, and using recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal loading of information into working memory involves inputs that are largely orthogonal, rather than similar, to the late delay activities observed during memory maintenance, naturally leading to the widely observed phenomenon of dynamic coding in PFC. Using a theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading. We also find that optimal information loading emerges as a general dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics and reveals a normative principle underlying dynamic coding.
Affiliation(s)
- Jake P. Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Kei Watanabe
- Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan
- Takafumi Suzuki
- Center for Information and Neural Networks, National Institute of Communication and Information Technology, Osaka 565-0871, Japan
- Mark G. Stokes
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 9DU, United Kingdom
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest H-1051, Hungary
14
Eissa TL, Kilpatrick ZP. Learning efficient representations of environmental priors in working memory. PLoS Comput Biol 2023; 19:e1011622. [PMID: 37943956] [PMCID: PMC10662764] [DOI: 10.1371/journal.pcbi.1011622] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/22/2022] [Revised: 11/21/2023] [Accepted: 10/20/2023] [Indexed: 11/12/2023]
Abstract
Experience shapes our expectations and helps us learn the structure of the environment. Inference models render such learning as a gradual refinement of the observer's estimate of the environmental prior. For instance, when retaining an estimate of an object's features in working memory, learned priors may bias the estimate in the direction of common feature values. Humans display such biases when retaining color estimates on short time intervals. We propose that these systematic biases emerge from modulation of synaptic connectivity in a neural circuit based on the experienced stimulus history, shaping the persistent and collective neural activity that encodes the stimulus estimate. Resulting neural activity attractors are aligned to common stimulus values. Using recently published human response data from a delayed-estimation task in which stimuli (colors) were drawn from a heterogeneous distribution that did not necessarily correspond with reported population biases, we confirm that most subjects' response distributions are better described by experience-dependent learning models than by models with fixed biases. This work suggests systematic limitations in working memory reflect efficient representations of inferred environmental structure, providing new insights into how humans integrate environmental knowledge into their cognitive strategies.
Affiliation(s)
- Tahra L. Eissa
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado, United States of America
- Zachary P. Kilpatrick
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado, United States of America
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, Colorado, United States of America
15
Brennan C, Proekt A. Attractor dynamics with activity-dependent plasticity capture human working memory across time scales. Commun Psychol 2023; 1:28. [PMID: 38764555] [PMCID: PMC11101211] [DOI: 10.1038/s44271-023-00027-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/09/2023] [Accepted: 09/15/2023] [Indexed: 05/21/2024]
Abstract
Most cognitive functions require the brain to maintain immediately preceding stimuli in working memory. Here, using a human working memory task with multiple delays, we test the hypothesis that working memories are stored in a discrete set of stable neuronal activity configurations called attractors. We show that while discrete attractor dynamics can approximate working memory on a single time scale, they fail to generalize across multiple timescales. This failure occurs because at longer delay intervals the responses contain more information about the stimuli than can be stored in a discrete attractor model. We present a modeling approach that combines discrete attractor dynamics with activity-dependent plasticity. This model successfully generalizes across all timescales and correctly predicts intertrial interactions. Thus, our findings suggest that discrete attractor dynamics are insufficient to model working memory and that activity-dependent plasticity improves durability of information storage in attractor systems.
Affiliation(s)
- Connor Brennan
- University of Pennsylvania, 3160 Chestnut St., Philadelphia, PA, USA
- Alex Proekt
- University of Pennsylvania, 3160 Chestnut St., Philadelphia, PA, USA
16
Hutt A, Rich S, Valiante TA, Lefebvre J. Intrinsic neural diversity quenches the dynamic volatility of neural networks. Proc Natl Acad Sci U S A 2023; 120:e2218841120. [PMID: 37399421] [PMCID: PMC10334753] [DOI: 10.1073/pnas.2218841120] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 11/03/2022] [Accepted: 05/19/2023] [Indexed: 07/05/2023]
Abstract
Heterogeneity is the norm in biology. The brain is no different: neuronal cell types are myriad, reflected through their cellular morphology, type, excitability, connectivity motifs, and ion channel distributions. While this biophysical diversity enriches neural systems' dynamical repertoire, it remains challenging to reconcile with the robustness and persistence of brain function over time (resilience). To better understand the relationship between excitability heterogeneity (variability in excitability within a population of neurons) and resilience, we analyzed, both analytically and numerically, a nonlinear sparse neural network with balanced excitatory and inhibitory connections evolving over long time scales. Homogeneous networks demonstrated increases in excitability and strong firing rate correlations (signs of instability) in response to a slowly varying modulatory fluctuation. Excitability heterogeneity tuned network stability in a context-dependent way by restraining responses to modulatory challenges and limiting firing rate correlations, while enriching dynamics during states of low modulatory drive. Excitability heterogeneity was found to implement a homeostatic control mechanism enhancing network resilience to changes in population size, connection probability, and the strength and variability of synaptic weights, by quenching the volatility (i.e., susceptibility to critical transitions) of network dynamics. Together, these results highlight the fundamental role played by cell-to-cell heterogeneity in the robustness of brain function in the face of change.
Affiliation(s)
- Axel Hutt
- Université de Strasbourg, CNRS, Inria, ICube, MLMS, MIMESIS, Strasbourg F-67000, France
- Scott Rich
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Taufik A. Valiante
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
- Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON M5G 2C4, Canada
- Center for Advancing Neurotechnological Innovation to Application, University of Toronto, Toronto, ON M5G 2A2, Canada
- Max Planck-University of Toronto Center for Neural Science and Technology, University of Toronto, Toronto, ON M5S 3G8, Canada
- Jérémie Lefebvre
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, Canada
17
Bachschmid-Romano L, Hatsopoulos NG, Brunel N. Interplay between external inputs and recurrent dynamics during movement preparation and execution in a network model of motor cortex. eLife 2023; 12:77690. [PMID: 37166452] [PMCID: PMC10174693] [DOI: 10.7554/elife.77690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/08/2022] [Accepted: 03/09/2023] [Indexed: 05/12/2023]
Abstract
The primary motor cortex has been shown to coordinate movement preparation and execution through computations in approximately orthogonal subspaces. The underlying network mechanisms, and the roles played by external and recurrent connectivity, are central open questions that need to be answered to understand the neural substrates of motor control. We develop a recurrent neural network model that recapitulates the temporal evolution of neuronal activity recorded from the primary motor cortex of a macaque monkey during an instructed delayed-reach task. In particular, it reproduces the observed dynamic patterns of covariation between neural activity and the direction of motion. We explore the hypothesis that the observed dynamics emerges from a synaptic connectivity structure that depends on the preferred directions of neurons in both preparatory and movement-related epochs, and we constrain the strength of both synaptic connectivity and external input parameters from data. While the model can reproduce neural activity for multiple combinations of the feedforward and recurrent connections, the solution that requires minimum external inputs is one where the observed patterns of covariance are shaped by external inputs during movement preparation, while they are dominated by strong direction-specific recurrent connectivity during movement execution. Our model also demonstrates that the way in which single-neuron tuning properties change over time can explain the level of orthogonality of preparatory and movement-related subspaces.
Affiliation(s)
- Nicholas G Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, United States
- Committee on Computational Neuroscience, University of Chicago, Chicago, United States
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, United States
- Department of Physics, Duke University, Durham, United States
- Duke Institute for Brain Sciences, Duke University, Durham, United States
- Center for Cognitive Neuroscience, Duke University, Durham, United States
18
Lei L, Zhang M, Li T, Dong Y, Wang DH. A spiking network model for clustering report in a visual working memory task. Front Comput Neurosci 2023; 16:1030073. [PMID: 36714529] [PMCID: PMC9878295] [DOI: 10.3389/fncom.2022.1030073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/15/2022] [Accepted: 12/20/2022] [Indexed: 01/15/2023]
Abstract
Introduction: Working memory (WM) plays a key role in many cognitive processes and has attracted great interest for many decades. Recently, it has been observed that reports of memorized colors sampled from a uniform distribution are clustered, and that the report error for the stimulus follows a Gaussian distribution. Methods: Based on the well-established ring model for visuospatial WM, we constructed a spiking network model with heterogeneous connectivity and embedded short-term plasticity (STP) to investigate the neurodynamic mechanisms behind this phenomenon. Results: Our model reproduced the clustered reports given stimuli sampled from a uniform distribution, with report errors following a Gaussian distribution. Perturbation studies showed that the heterogeneity of connectivity and STP are both necessary to explain the experimental observations. Conclusion: Our model provides a new perspective on the phenomenon of visual WM observed in experiments.
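The short-term plasticity embedded in such spiking models is commonly of the Tsodyks-Markram form. As background (the paper's exact variant and parameters are not given in the abstract), each synapse carries a facilitation variable u and a resource variable x:

\frac{du}{dt} = -\frac{u}{\tau_f} + U(1 - u^-)\sum_k \delta(t - t_k), \qquad \frac{dx}{dt} = \frac{1 - x}{\tau_d} - u^+ x^- \sum_k \delta(t - t_k),

with the synaptic efficacy at each presynaptic spike time t_k proportional to u^+ x^-: u jumps at spikes and decays with the facilitation time constant \tau_f, while x is consumed at spikes and recovers with the depression time constant \tau_d.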
Affiliation(s)
- Lixing Lei
- School of Systems Science, Beijing Normal University, Beijing, China
- Mengya Zhang
- School of Systems Science, Beijing Normal University, Beijing, China
- Tingyu Li
- School of Systems Science, Beijing Normal University, Beijing, China
- Yelin Dong
- School of Systems Science, Beijing Normal University, Beijing, China
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Da-Hui Wang
- School of Systems Science, Beijing Normal University, Beijing, China
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
19
Ashhad S, Slepukhin VM, Feldman JL, Levine AJ. Microcircuit Synchronization and Heavy-Tailed Synaptic Weight Distribution Augment preBötzinger Complex Bursting Dynamics. J Neurosci 2023; 43:240-260. [PMID: 36400528] [PMCID: PMC9838711] [DOI: 10.1523/jneurosci.1195-22.2022] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 06/17/2022] [Revised: 11/05/2022] [Accepted: 11/10/2022] [Indexed: 11/19/2022]
Abstract
The preBötzinger Complex (preBötC) encodes inspiratory time as rhythmic bursts of activity underlying each breath. Spike synchronization throughout a sparsely connected preBötC microcircuit initiates bursts that ultimately drive the inspiratory motor patterns. Using minimal microcircuit models to explore burst initiation dynamics, we examined the variability in probability and latency to burst following exogenous stimulation of a small subset of neurons, mimicking experiments. Among various physiologically plausible graphs of 1000 excitatory neurons constructed using experimentally determined synaptic and connectivity parameters, directed Erdős-Rényi graphs with a broad (lognormal) distribution of synaptic weights best captured the experimentally observed dynamics. preBötC synchronization leading to bursts was regulated by the efferent connectivity of spiking neurons that are optimally tuned to amplify modest preinspiratory activity through input convergence. Using graph-theoretic and machine learning-based analyses, we found that input convergence of efferent connectivity at the next-nearest-neighbor order was a strong predictor of incipient synchronization. Our analyses revealed a crucial role of synaptic heterogeneity in imparting exceptionally robust yet flexible preBötC attractor dynamics. Given the pervasiveness of lognormally distributed synaptic strengths throughout the nervous system, we postulate that these mechanisms represent a ubiquitous template for temporal processing and decision-making computational motifs. SIGNIFICANCE STATEMENT: Mammalian breathing is robust and virtually continuous throughout life, yet is inherently labile: to adapt to rapid metabolic shifts (e.g., fleeing a predator or chasing prey); for airway reflexes; and to enable nonventilatory behaviors (e.g., vocalization, breath-holding, laughing). Canonical theoretical frameworks, based on pacemakers and intrinsic bursting, cannot account for the observed robustness and flexibility of the preBötzinger Complex rhythm. Experiments reveal that network synchronization is the key to initiating inspiratory bursts in each breathing cycle. We investigated preBötC synchronization dynamics using network models constructed with experimentally determined neuronal and synaptic parameters. We discovered that a fat-tailed (non-Gaussian) synaptic weight distribution, a manifestation of synaptic heterogeneity, augments neuronal synchronization and attractor dynamics in this vital rhythmogenic network, contributing to its extraordinary reliability and responsiveness.
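The winning connectivity model is simple to construct. Below is a minimal sketch of a directed Erdős-Rényi graph with lognormally distributed synaptic weights; the size matches the abstract's 1000 neurons, but the connection probability and lognormal parameters are illustrative assumptions, not the experimentally fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p_conn = 1000, 0.05          # 1000 neurons; assumed connection probability

# Directed Erdos-Renyi adjacency: each ordered pair is connected
# independently with probability p_conn, no self-connections.
A = rng.random((N, N)) < p_conn
np.fill_diagonal(A, False)

# Lognormal (heavy-tailed) weights on the existing edges.
W = np.zeros((N, N))
W[A] = rng.lognormal(mean=-1.0, sigma=1.0, size=int(A.sum()))

# The heavy tail: a few synapses are far stronger than the typical one.
w = W[A]
print("median weight:", np.median(w))
print("99th-percentile weight:", np.percentile(w, 99))
print("max / median:", w.max() / np.median(w))
```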
Affiliation(s)
- Sufyan Ashhad
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095-1763
- Valentin M Slepukhin
- Department of Physics & Astronomy, University of California, Los Angeles, Los Angeles, California 90095-1596
- Jack L Feldman
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095-1763
- Alex J Levine
- Department of Physics & Astronomy, University of California, Los Angeles, Los Angeles, California 90095-1596
- Department of Chemistry & Biochemistry, University of California, Los Angeles, Los Angeles, California 90095-1596
20
Brennan C, Aggarwal A, Pei R, Sussillo D, Proekt A. One dimensional approximations of neuronal dynamics reveal computational strategy. PLoS Comput Biol 2023; 19:e1010784. [PMID: 36607933] [PMCID: PMC9821456] [DOI: 10.1371/journal.pcbi.1010784] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 06/25/2022] [Accepted: 12/01/2022] [Indexed: 01/07/2023]
Abstract
The relationship between neuronal activity and the computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model and validate it on diverse systems and scales, from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking 1-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories (a scaffold) is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategies of primates and artificial systems trained on the same task and to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.
Affiliation(s)
- Connor Brennan
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Adeeti Aggarwal
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Rui Pei
- Department of Psychology, Stanford University, Palo Alto, California, United States of America
- David Sussillo
- Stanford Neurosciences Institute, Stanford University, Palo Alto, California, United States of America
- Department of Electrical Engineering, Stanford University, Palo Alto, California, United States of America
- Alex Proekt
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
21
Esnaola-Acebes JM, Roxin A, Wimmer K. Flexible integration of continuous sensory evidence in perceptual estimation tasks. Proc Natl Acad Sci U S A 2022; 119:e2214441119. [PMID: 36322720] [PMCID: PMC9659402] [DOI: 10.1073/pnas.2214441119] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Received: 08/23/2022] [Accepted: 10/05/2022] [Indexed: 11/07/2022]
Abstract
Temporal accumulation of evidence is crucial for making accurate judgments based on noisy or ambiguous sensory input. The integration process leading to categorical decisions is thought to rely on competition between neural populations, each encoding a discrete categorical choice. How recurrent neural circuits integrate evidence for continuous perceptual judgments is unknown. Here, we show that a continuous bump attractor network can integrate a circular feature, such as stimulus direction, nearly optimally. As required by optimal integration, the population activity of the network unfolds on a two-dimensional manifold, in which the position of the network's activity bump tracks the stimulus average, and, simultaneously, the bump amplitude tracks stimulus uncertainty. Moreover, the temporal weighting of sensory evidence by the network depends on the relative strength of the stimulus compared to the internally generated bump dynamics, yielding either early (primacy), uniform, or late (recency) weighting. The model can flexibly switch between these regimes by changing a single control parameter, the global excitatory drive. We show that this mechanism can quantitatively explain individual temporal weighting profiles of human observers, and we validate the model prediction that temporal weighting impacts reaction times. Our findings point to continuous attractor dynamics as a plausible neural mechanism underlying stimulus integration in perceptual estimation tasks.
Affiliation(s)
- Jose M. Esnaola-Acebes
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain
- Alex Roxin
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain
- Klaus Wimmer
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain
22
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022; 23:744-766. [DOI: 10.1038/s41583-022-00642-0] [Citation(s) in RCA: 92] [Impact Index Per Article: 30.7] [Accepted: 09/22/2022] [Indexed: 11/06/2022]
23
Wang R, Kang L. Multiple bumps can enhance robustness to noise in continuous attractor networks. PLoS Comput Biol 2022; 18:e1010547. [PMID: 36215305] [PMCID: PMC9584540] [DOI: 10.1371/journal.pcbi.1010547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/22/2022] [Revised: 10/20/2022] [Accepted: 09/06/2022] [Indexed: 11/19/2022]
Abstract
A central function of continuous attractor networks is encoding coordinates and accurately updating their values through path integration. To do so, these networks produce localized bumps of activity that move coherently in response to velocity inputs. In the brain, continuous attractors are believed to underlie grid cells and head direction cells, which maintain periodic representations of position and orientation, respectively. These representations can be achieved with any number of activity bumps, and the consequences of having more or fewer bumps are unclear. We address this knowledge gap by constructing 1D ring attractor networks with different bump numbers and characterizing their responses to three types of noise: fluctuating inputs, spiking noise, and deviations in connectivity away from ideal attractor configurations. Across all three types, networks with more bumps experience less noise-driven deviations in bump motion. This translates to more robust encodings of linear coordinates, like position, assuming that each neuron represents a fixed length no matter the bump number. Alternatively, we consider encoding a circular coordinate, like orientation, such that the network distance between adjacent bumps always maps onto 360 degrees. Under this mapping, bump number does not significantly affect the amount of error in the coordinate readout. Our simulation results are intuitively explained and quantitatively matched by a unified theory for path integration and noise in multi-bump networks. Thus, to suppress the effects of biologically relevant noise, continuous attractor networks can employ more bumps when encoding linear coordinates; this advantage disappears when encoding circular coordinates. Our findings provide motivation for multiple bumps in the mammalian grid network.
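The bump number in a ring network can be set by the spatial period of the connectivity, which is the knob these models turn. Below is a minimal rate-model sketch with assumed parameters; the paper's networks, noise models, and readouts are more detailed, and tanh rates are used here only to keep the toy bounded.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 3                     # N neurons on a ring; M desired bumps
theta = 2 * np.pi * np.arange(N) / N

# Connectivity whose dominant Fourier component has spatial frequency M:
# only that mode is amplified (mode gain J1 > 1), so M bumps grow out of
# noise, while the uniform mode (gain 2*J0 < 0) is suppressed.
J0, J1 = -1.0, 3.0
W = (J0 + J1 * np.cos(M * (theta[:, None] - theta[None, :]))) * 2.0 / N

x = rng.normal(0, 0.1, N)
dt, tau = 0.5, 10.0
for _ in range(4000):
    x += dt / tau * (-x + W @ np.tanh(x))

r = np.tanh(x)
active = r > 0.5 * r.max()
n_bumps = int(np.sum(active & ~np.roll(active, 1)))  # contiguous active arcs
print("number of bumps:", n_bumps)                    # expect M
```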
Affiliation(s)
- Raymond Wang
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
24
Ebrahimzadeh P, Schiek M, Maistrenko Y. Mixed-mode chimera states in pendula networks. Chaos 2022; 32:103118. [PMID: 36319296] [DOI: 10.1063/5.0103071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/14/2022] [Accepted: 09/21/2022] [Indexed: 06/16/2023]
Abstract
We report the emergence of peculiar chimera states in networks of identical pendula with global phase-lagged coupling. The states reported include both rotating and quiescent modes, i.e., with nonzero and zero average frequencies. Such mixed-mode chimeras may be interpreted as images of the bump states known in neuroscience in the context of modeling working memory. We illustrate this striking phenomenon for a network of N = 100 coupled pendula, followed by a detailed description of the minimal nontrivial case of N = 3. Parameter regions for five characteristic types of system behavior are identified, consisting of two mixed-mode chimeras with one and two rotating pendula, a classical weak chimera with all three pendula rotating, synchronous rotation, and the quiescent state. The network dynamics is multistable: up to four of the states can coexist in the system's phase space, as demonstrated through their basins of attraction. The analysis suggests that robust mixed-mode chimera states can generically describe the complex dynamics of the diverse pendula-like systems widespread in nature.
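The system named in the title is the standard network of identical pendulum-like oscillators (Kuramoto oscillators with inertia) under global phase-lagged coupling; in a generic notation that may differ from the paper's symbols:

m\ddot{\theta}_i + \varepsilon\dot{\theta}_i = \frac{\mu}{N}\sum_{j=1}^{N} \sin(\theta_j - \theta_i - \alpha), \qquad i = 1,\dots,N,

with inertia m, damping \varepsilon, coupling strength \mu, and phase lag \alpha. "Rotating" pendula whirl with nonzero mean frequency while "quiescent" ones librate around rest, and a mixed-mode chimera is a coexisting partition of the network into these two groups.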
Collapse
Affiliation(s)
- P Ebrahimzadeh
- ZEA-2: Electronics Systems, Forschungszentrum Jülich GmbH, 52428 Jülich, Germany
| | - M Schiek
- ZEA-2: Electronics Systems, Forschungszentrum Jülich GmbH, 52428 Jülich, Germany
| | - Y Maistrenko
- ZEA-2: Electronics Systems, Forschungszentrum Jülich GmbH, 52428 Jülich, Germany
| |
Collapse
|
25
|
Gu J, Lim S. Unsupervised learning for robust working memory. PLoS Comput Biol 2022; 18:e1009083. [PMID: 35500033 PMCID: PMC9098088 DOI: 10.1371/journal.pcbi.1009083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 05/12/2022] [Accepted: 03/16/2022] [Indexed: 11/18/2022] Open
Abstract
Working memory is a core component of critical cognitive functions such as planning and decision-making. Persistent activity that lasts long after stimulus offset has been considered a neural substrate for working memory. Attractor dynamics based on network interactions can successfully reproduce such persistent activity. However, this requires fine-tuning of network connectivity, in particular to form the continuous attractors suggested for encoding continuous signals in working memory. Here, we investigate whether specific synaptic plasticity rules can mitigate such tuning problems in two representative working memory models, namely, rate-coded and location-coded persistent activity. We consider two prominent types of plasticity rules: differential plasticity, which corrects rapid activity changes, and homeostatic plasticity, which regularizes the long-term average of activity; both have been proposed to fine-tune the weights in an unsupervised manner. Consistent with the findings of previous work, differential plasticity alone was enough to recover graded persistent activity after perturbations in the connectivity. For the location-coded memory, differential plasticity could also recover persistent activity. However, the recovered pattern can be irregular across stimulus locations when learning is slow or the connectivity perturbation is large. On the other hand, homeostatic plasticity shows a robust recovery of smooth spatial patterns under particular types of synaptic perturbations, such as perturbations in incoming synapses onto the entire population or local populations. However, homeostatic plasticity was not effective against perturbations in outgoing synapses from local populations. Instead, combining it with differential plasticity recovers location-coded persistent activity for a broader range of perturbations, suggesting compensation between the two plasticity rules.
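The two rule types can be sketched in a few lines. Below, an illustrative differential rule suppresses fast rate changes and an illustrative multiplicative homeostatic rule scales each neuron's incoming weights toward a target average rate; the paper's exact update equations, parameters, and network models may differ.

```python
import numpy as np

# Illustrative forms of the two unsupervised rules on a small rate network;
# not the paper's exact model.
rng = np.random.default_rng(2)
N, dt, tau, ext = 50, 0.1, 1.0, 0.5
eta_diff, eta_homeo, r_target = 0.05, 0.005, 1.0

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # perturbed recurrent weights
r = rng.random(N)
r_avg = r.copy()

for _ in range(10_000):
    drdt = (-r + W @ r + ext) / tau
    r = np.clip(r + dt * drdt, 0.0, 10.0)
    r_avg += 0.01 * dt * (r - r_avg)          # slow running average of activity
    # Differential plasticity: suppress fast changes in postsynaptic rates.
    W -= eta_diff * dt * np.outer(drdt, r)
    # Homeostatic plasticity: multiplicatively scale incoming weights toward
    # a target long-term rate.
    W *= (1.0 + eta_homeo * dt * (r_target - r_avg))[:, None]

print(f"mean rate after learning: {r.mean():.2f} (target {r_target})")
```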
Collapse
Affiliation(s)
- Jintao Gu
- Neural Science, New York University Shanghai, Shanghai, China
| | - Sukbin Lim
- Neural Science, New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
| |
Collapse
|
26
|
Constraints on Persistent Activity in a Biologically Detailed Network Model of the Prefrontal Cortex with Heterogeneities. Prog Neurobiol 2022; 215:102287. [DOI: 10.1016/j.pneurobio.2022.102287] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Revised: 02/25/2022] [Accepted: 05/04/2022] [Indexed: 11/18/2022]
|
27
|
Darshan R, Rivkind A. Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. [PMID: 35385721 DOI: 10.1016/j.celrep.2022.110612] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 02/08/2022] [Accepted: 03/11/2022] [Indexed: 12/13/2022] Open
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks-which enable a continuum of persistent neuronal states-provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
Collapse
Affiliation(s)
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
| | | |
Collapse
|
28
|
Chung DW, Geramita MA, Lewis DA. Synaptic Variability and Cortical Gamma Oscillation Power in Schizophrenia. Am J Psychiatry 2022.
Abstract
OBJECTIVE Cognitive impairments in schizophrenia are associated with lower gamma oscillation power in the prefrontal cortex (PFC). Gamma power depends in part on excitatory drive to fast-spiking parvalbumin interneurons (PVIs). Excitatory drive to cortical neurons varies in strength, which could affect how these neurons regulate network oscillations. The authors investigated whether variability in excitatory synaptic strength across PVIs could contribute to lower prefrontal gamma power in schizophrenia. METHODS In postmortem PFC from 20 matched pairs of comparison and schizophrenia subjects, levels of vesicular glutamate transporter 1 (VGlut1) and postsynaptic density 95 (PSD95) proteins were quantified to assess variability in excitatory synaptic strength across PVIs. A computational model network was then used to simulate how variability in excitatory synaptic strength across fast-spiking (a defining feature of PVIs) interneurons (FSIs) regulates gamma power. RESULTS The variability of VGlut1 and PSD95 levels at excitatory inputs across PVIs was larger in schizophrenia relative to comparison subjects. This alteration was not influenced by schizophrenia-associated comorbid factors, was not present in monkeys chronically exposed to antipsychotic medications, and was not present in calretinin interneurons. In the model network, variability in excitatory synaptic strength across FSIs regulated gamma power by affecting network synchrony. Finally, greater synaptic variability interacted synergistically with other synaptic alterations in schizophrenia (i.e., fewer excitatory inputs to FSIs and lower inhibitory strength from FSIs) to robustly reduce gamma power. CONCLUSIONS The study findings suggest that greater variability in excitatory synaptic strength across PVIs, in combination with other modest synaptic alterations in these neurons, can markedly lower PFC gamma power in schizophrenia.
Collapse
Affiliation(s)
- Daniel W Chung
- Translational Neuroscience Program, Department of Psychiatry, University of Pittsburgh, Pittsburgh
| | - Matthew A Geramita
- Translational Neuroscience Program, Department of Psychiatry, University of Pittsburgh, Pittsburgh
| | - David A Lewis
- Translational Neuroscience Program, Department of Psychiatry, University of Pittsburgh, Pittsburgh
| |
Collapse
|
29
|
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that are best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Collapse
Affiliation(s)
| | - H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
| | | | | | | | - J. Wang
- Author to whom correspondence should be addressed
| | - G. La Camera
- Author to whom correspondence should be addressed
| |
Collapse
|
30
|
Wang XJ. 50 years of mnemonic persistent activity: quo vadis? Trends Neurosci 2021; 44:888-902. [PMID: 34654556 PMCID: PMC9087306 DOI: 10.1016/j.tins.2021.09.001] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 08/27/2021] [Accepted: 09/07/2021] [Indexed: 10/20/2022]
Abstract
Half a century ago, persistent spiking activity in the neocortex was discovered to be a neural substrate of working memory. Since then, scientists have sought to understand this core cognitive function across biological and computational levels. Studies are reviewed here that cumulatively lend support to a synaptic theory of recurrent circuits for mnemonic persistent activity that depends on various cellular and network substrates and is mathematically described by a multiple-attractor network model. Crucially, a mnemonic attractor state of the brain is consistent with temporal variations and heterogeneity across neurons in a subspace of population activity. Persistent activity should be broadly understood as a contrast to decaying transients. Mechanisms in the absence of neural firing ('activity-silent state') are suitable for passive short-term memory but not for working memory - which is characterized by executive control for filtering out distractors, limited capacity, and internal manipulation of information.
Collapse
Affiliation(s)
- Xiao-Jing Wang
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA.
| |
Collapse
|
31
|
Mittal D, Narayanan R. Resonating neurons stabilize heterogeneous grid-cell networks. eLife 2021; 10:e66804. [PMID: 34328415 PMCID: PMC8357421 DOI: 10.7554/elife.66804] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Accepted: 07/29/2021] [Indexed: 01/02/2023] Open
Abstract
A central theme that governs the functional design of biological networks is their ability to sustain stable function despite widespread parametric variability. Here, we investigated the impact of distinct forms of biological heterogeneities on the stability of a two-dimensional continuous attractor network (CAN) implicated in grid-patterned activity generation. We show that increasing degrees of biological heterogeneities progressively disrupted the emergence of grid-patterned activity and resulted in progressively larger perturbations in low-frequency neural activity. We postulated that targeted suppression of low-frequency perturbations could ameliorate heterogeneity-induced disruptions of grid-patterned activity. To test this, we introduced intrinsic resonance, a physiological mechanism to suppress low-frequency activity, either by adding an additional high-pass filter (phenomenological) or by incorporating a slow negative feedback loop (mechanistic) into our model neurons. Strikingly, CAN models with resonating neurons were resilient to the incorporation of heterogeneities and exhibited stable grid-patterned firing. We found CAN models with mechanistic resonators to be more effective in targeted suppression of low-frequency activity, with the slow kinetics of the negative feedback loop essential in stabilizing these networks. As low-frequency perturbations (1/f noise) are pervasive across biological systems, our analyses suggest a universal role for mechanisms that suppress low-frequency activity in stabilizing heterogeneous biological networks.
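The mechanistic resonator can be illustrated with a two-variable rate unit: a slow negative-feedback variable subtracts low-frequency drive, producing the band-pass response the authors exploit. The time constants, gain, and helper name response_amplitude below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of a "mechanistic resonator": a slow negative-feedback variable m
# subtracts low-frequency input from a rate unit r, yielding a band-pass
# (resonant) response. Parameters are illustrative.
tau_r, tau_m, g, dt = 10.0, 100.0, 2.0, 0.1   # ms, ms, feedback gain, ms

def response_amplitude(freq_hz: float, t_total: float = 4000.0) -> float:
    n = int(t_total / dt)
    r = m = 0.0
    out = np.empty(n)
    for i in range(n):
        I = np.sin(2 * np.pi * freq_hz * (i * dt) / 1000.0)   # sinusoidal drive
        r += dt / tau_r * (-r + I - g * m)
        m += dt / tau_m * (r - m)                             # slow feedback
        out[i] = r
    return out[n // 2:].std()   # steady-state response amplitude

for f in (0.5, 1, 2, 4, 8, 16):
    print(f"{f:5.1f} Hz -> response amplitude {response_amplitude(f):.3f}")
# The peak at intermediate frequencies reflects suppression of low-frequency
# components, the property used to stabilize the heterogeneous CAN models.
```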
Collapse
Affiliation(s)
- Divyansh Mittal
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
| | - Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
| |
Collapse
|
32
|
Flores-Valle A, Gonçalves PJ, Seelig JD. Integration of sleep homeostasis and navigation in Drosophila. PLoS Comput Biol 2021; 17:e1009088. [PMID: 34252086 PMCID: PMC8297946 DOI: 10.1371/journal.pcbi.1009088] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 07/22/2021] [Accepted: 05/17/2021] [Indexed: 11/25/2022] Open
Abstract
During sleep, the brain undergoes dynamic and structural changes. In Drosophila, such changes have been observed in the central complex, a brain area important for sleep control and navigation. The connectivity of the central complex raises the question of how navigation, and specifically the head direction system, can operate in the face of sleep-related plasticity. To address this question, we develop a model that integrates sleep homeostasis and head direction. We show that by introducing plasticity, the head direction system can function in a stable way by balancing plasticity in connected circuits that encode sleep pressure. With increasing sleep pressure, the head direction system nevertheless becomes unstable, and a sleep phase with a different plasticity mechanism is introduced to reset network connectivity. The proposed integration of sleep homeostasis and head direction circuits captures features of their neural dynamics observed in flies and mice.
Collapse
Affiliation(s)
- Andres Flores-Valle
- Center of Advanced European Studies and Research (caesar), Bonn, Germany
- International Max Planck Research School for Brain and Behavior, Bonn, Germany
| | - Pedro J. Gonçalves
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
| | - Johannes D. Seelig
- Center of Advanced European Studies and Research (caesar), Bonn, Germany
| |
Collapse
|
33
|
Kim E, Bari BA, Cohen JY. Subthreshold basis for reward-predictive persistent activity in mouse prefrontal cortex. Cell Rep 2021; 35:109082. [PMID: 33951442 PMCID: PMC8167820 DOI: 10.1016/j.celrep.2021.109082] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Revised: 11/30/2020] [Accepted: 04/13/2021] [Indexed: 11/30/2022] Open
Abstract
Nervous systems maintain information internally using persistent activity changes. The mechanisms by which this activity arises are incompletely understood. We study prefrontal cortex (PFC) in mice performing behaviors in which stimuli predicted rewards at different delays with different probabilities. We measure membrane potential (Vm) from pyramidal neurons across layers. Reward-predictive persistent firing increases arise due to sustained increases in mean and variance of Vm and are terminated by reward or via centrally generated mechanisms based on reward expectation. Other neurons show persistent decreases in firing rates, maintained by persistent hyperpolarization that is robust to intracellular perturbation. Persistent activity is layer (L)- and cell-type-specific. Neurons with persistent depolarization are primarily located in upper L5, whereas those with persistent hyperpolarization are mostly found in lower L5. L2/3 neurons do not show persistent activity. Thus, reward-predictive persistent activity in PFC is spatially organized and conveys information about internal state via synaptic mechanisms. Kim et al. show sustained changes in membrane potential and firing rates in mouse frontal cortex leading up to an expected reward. These dynamics rely on underlying changes in mean and variance, directly testing prior theoretical studies. Neurons showing increased and decreased activity changes are located in different cortical layers.
Collapse
Affiliation(s)
- Eunyoung Kim
- The Solomon H. Snyder Department of Neuroscience, Brain Science Institute, Kavli Neuroscience Discovery Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Bilal A Bari
- The Solomon H. Snyder Department of Neuroscience, Brain Science Institute, Kavli Neuroscience Discovery Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Jeremiah Y Cohen
- The Solomon H. Snyder Department of Neuroscience, Brain Science Institute, Kavli Neuroscience Discovery Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA.
| |
Collapse
|
34
|
Bant JS, Hardcastle K, Ocko SA, Giocomo LM. Topography in the Bursting Dynamics of Entorhinal Neurons. Cell Rep 2020; 30:2349-2359.e7. [PMID: 32075768 DOI: 10.1016/j.celrep.2020.01.057] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 11/28/2019] [Accepted: 01/17/2020] [Indexed: 12/18/2022] Open
Abstract
Medial entorhinal cortex contains neural substrates for representing space. These substrates include grid cells that fire in repeating locations and increase in scale progressively along the dorsal-to-ventral entorhinal axis, with the physical distance between grid firing nodes increasing from tens of centimeters to several meters in rodents. Whether the temporal scale of grid cell spiking dynamics shows a similar dorsal-to-ventral organization remains unknown. Here, we report the presence of a dorsal-to-ventral gradient in the temporal spiking dynamics of grid cells in behaving mice. This gradient in bursting supports the emergence of a dorsal grid cell population with a high signal-to-noise ratio. In vitro recordings combined with a computational model point to a role for gradients in non-inactivating sodium conductances in supporting the bursting gradient in vivo. Taken together, these results reveal a complementary organization in the temporal and intrinsic properties of entorhinal cells.
Collapse
Affiliation(s)
- Jason S Bant
- Department of Neurobiology, Stanford University School of Medicine, Stanford CA 94305, USA
| | - Kiah Hardcastle
- Department of Neurobiology, Stanford University School of Medicine, Stanford CA 94305, USA
| | - Samuel A Ocko
- Department of Applied Physics, Stanford University, Stanford CA 94305, USA
| | - Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford CA 94305, USA.
| |
Collapse
|
35
|
Brito KVP, Matias FS. Neuronal heterogeneity modulates phase synchronization between unidirectionally coupled populations with excitation-inhibition balance. Phys Rev E 2021; 103:032415. [PMID: 33862693 DOI: 10.1103/physreve.103.032415] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Accepted: 03/02/2021] [Indexed: 11/07/2022]
Abstract
Several experiments and models have highlighted the importance of neuronal heterogeneity in brain dynamics and function. However, how such cell-to-cell diversity can affect cortical computation, synchronization, and neuronal communication is still under debate. Previous studies have focused on the effect of neuronal heterogeneity in one neuronal population. Here we are specifically interested in the effect of neuronal variability on the phase relations between two populations, which can be related to different cortical communication hypotheses. It has been recently shown that two spiking neuron populations unidirectionally connected in a sender-receiver configuration can exhibit anticipated synchronization (AS), which is characterized by a negative phase lag. This phenomenon has been reported in electrophysiological data of nonhuman primates and human EEG during a visual discrimination cognitive task. In experiments, the unidirectional coupling can be assessed by Granger causality and can be accompanied by either a positive or a negative phase difference between cortical areas. Here we propose a model of two coupled populations in which the neuronal heterogeneity can determine the dynamical relation between the sender and the receiver and can reproduce phase relations reported in experiments. Depending on the distribution of parameters characterizing the neuronal firing patterns, the system can exhibit both AS and the usual delayed synchronization regime (DS, with positive phase), as well as a zero-lag synchronization regime and phase bistability between AS and DS. Furthermore, we show that our network can present diversity in its phase relations while maintaining the excitation-inhibition balance.
Collapse
Affiliation(s)
- Katiele V P Brito
- Instituto de Física, Universidade Federal de Alagoas, Maceió, Alagoas 57072-970, Brazil
| | - Fernanda S Matias
- Instituto de Física, Universidade Federal de Alagoas, Maceió, Alagoas 57072-970, Brazil
| |
Collapse
|
36
|
Berberian N, Ross M, Chartier S. Embodied working memory during ongoing input streams. PLoS One 2021; 16:e0244822. [PMID: 33400724 PMCID: PMC7785253 DOI: 10.1371/journal.pone.0244822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 12/16/2020] [Indexed: 11/18/2022] Open
Abstract
Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. Reliance on an internal representation, rather than purely on external stimuli, is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of those inputs. Experimental and theoretical studies have invested heavily in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach in which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to revisit the long-standing implication of working memory in flexible cognition.
Collapse
Affiliation(s)
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| | - Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| | - Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
| |
Collapse
|
37
|
Turner-Evans DB, Jensen KT, Ali S, Paterson T, Sheridan A, Ray RP, Wolff T, Lauritzen JS, Rubin GM, Bock DD, Jayaraman V. The Neuroanatomical Ultrastructure and Function of a Biological Ring Attractor. Neuron 2020; 108:145-163.e10. [PMID: 32916090 PMCID: PMC8356802 DOI: 10.1016/j.neuron.2020.08.006] [Citation(s) in RCA: 81] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 05/20/2020] [Accepted: 08/05/2020] [Indexed: 01/31/2023]
Abstract
Neural representations of head direction (HD) have been discovered in many species. Theoretical work has proposed that the dynamics associated with these representations are generated, maintained, and updated by recurrent network structures called ring attractors. We evaluated this theorized structure-function relationship by performing electron-microscopy-based circuit reconstruction and RNA profiling of identified cell types in the HD system of Drosophila melanogaster. We identified motifs that have been hypothesized to maintain the HD representation in darkness, update it when the animal turns, and tether it to visual cues. Functional studies provided support for the proposed roles of individual excitatory or inhibitory circuit elements in shaping activity. We also discovered recurrent connections between neuronal arbors with mixed pre- and postsynaptic specializations. Our results confirm that the Drosophila HD network contains the core components of a ring attractor while also revealing unpredicted structural features that might enhance the network's computational power.
Collapse
Affiliation(s)
| | - Kristopher T Jensen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
| | - Saba Ali
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Tyler Paterson
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Arlo Sheridan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Robert P Ray
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Tanya Wolff
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - J Scott Lauritzen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Gerald M Rubin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - Davi D Bock
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Department of Neurological Sciences, Larner College of Medicine, University of Vermont, Burlington, VT 05405, USA
| | - Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
| |
Collapse
|
38
|
Natale JL, Hentschel HGE, Nemenman I. Precise spatial memory in local random networks. Phys Rev E 2020; 102:022405. [PMID: 32942429 DOI: 10.1103/physreve.102.022405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 06/16/2020] [Indexed: 11/07/2022]
Abstract
Self-sustained, elevated neuronal activity persisting on timescales of 10 s or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we study numerical simulations of a geometrically embedded model with a local, but otherwise random, connectivity profile; imposing a global regulation of our system's mean firing rate produces localized, finely spaced discrete attractors that effectively span a two-dimensional manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We then measure the network's storage capacity numerically and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
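A minimal sketch of the ingredients named above: local, otherwise random, excitatory connectivity on a 2D grid plus a global regulation of network activity, implemented here as a winners-take-all threshold on a fixed fraction of neurons. Sizes, radii, and the regulation scheme are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sketch: geometrically embedded network with local but otherwise random
# connectivity, plus a global activity regulation. Illustrative throughout.
rng = np.random.default_rng(3)
L, radius, p = 30, 3.0, 0.5
N = L * L
xy = np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
mask = (dist > 0) & (dist < radius) & (rng.random((N, N)) < p)
W = mask * rng.random((N, N))              # local, random, excitatory weights

cue = rng.integers(N)
r = (dist[cue] < 2.0).astype(float)        # transient input near `cue`
for _ in range(200):
    h = W @ r
    thr = np.quantile(h, 0.95)             # global regulation: only a fixed
    r = np.maximum(h - thr, 0.0)           # fraction of neurons stays active
    r /= max(r.max(), 1e-9)                # keep rates bounded

peak = np.unravel_index(int(np.argmax(r)), (L, L))
target = np.unravel_index(int(cue), (L, L))
print(f"persistent activity peaks at {peak}; cue was at {target}")
```

After the cue is removed, activity remains localized near the stimulated site, illustrating spatial memory via attractor dynamics without synaptic fine-tuning.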
Collapse
Affiliation(s)
- Joseph L Natale
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA
| | | | - Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA
| |
Collapse
|
39
|
Capogna M, Castillo PE, Maffei A. The ins and outs of inhibitory synaptic plasticity: Neuron types, molecular mechanisms and functional roles. Eur J Neurosci 2020; 54:6882-6901. [PMID: 32663353 DOI: 10.1111/ejn.14907] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 06/30/2020] [Accepted: 07/08/2020] [Indexed: 01/05/2023]
Abstract
GABAergic interneurons are highly diverse, and their synaptic outputs express various forms of plasticity. Compelling evidence indicates that activity-dependent changes of inhibitory synaptic transmission play a significant role in regulating neural circuits critically involved in learning, memory, and circuit refinement. Here, we provide an updated overview of inhibitory synaptic plasticity with a focus on the hippocampus and neocortex. To illustrate the diversity of inhibitory interneurons, we discuss the case of two highly divergent interneuron types, parvalbumin-expressing basket cells and neurogliaform cells, which play unique roles in circuit dynamics. We also present recent progress on the molecular mechanisms underlying long-term, activity-dependent plasticity of fast inhibitory transmission. Lastly, we discuss the role of inhibitory synaptic plasticity in neural circuit function. The emerging picture is that inhibitory synaptic transmission in the CNS is extremely diverse, undergoes various mechanistically distinct forms of plasticity and contributes to a much more refined computational role than initially thought. Both the remarkable diversity of inhibitory interneurons and the various forms of plasticity expressed by GABAergic synapses provide an amazingly rich inhibitory repertoire that is central to a variety of complex neural circuit functions, including memory.
Collapse
Affiliation(s)
- Marco Capogna
- Department of Biomedicine, Danish National Research Foundation Center of Excellence PROMEMO, Aarhus University, Aarhus, Denmark
| | - Pablo E Castillo
- Dominck P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA.,Department of Psychiatry and Behavioral Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Arianna Maffei
- Center for Neural Circuit Dynamics and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, USA
| |
Collapse
|
40
|
Chandran P, Gopal R, Chandrasekar VK, Athavan N. Chimera-like states induced by additional dynamic nonlocal wirings. CHAOS (WOODBURY, N.Y.) 2020; 30:063106. [PMID: 32611102 DOI: 10.1063/1.5144929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 05/07/2020] [Indexed: 06/11/2023]
Abstract
We investigate the existence of chimera-like states in a small-world network of chaotically oscillating identical Rössler systems with an addition of randomly switching nonlocal links. By varying the small-world coupling strength, we observe no chimera-like state either in the absence of nonlocal wirings or with static nonlocal wirings. When we give an additional nonlocal wiring to randomly selected nodes and allow the random selection of nodes to change with time, we observe the onset of chimera-like states. Upon increasing the number of randomly selected nodes gradually, we find that the incoherent window keeps shrinking, whereas the chimera-like window widens. Moreover, the system attains a completely synchronized state comparatively sooner for a lower coupling strength. Also, we show that one can induce chimera-like states by a suitable choice of switching times, coupling strengths, and number of nonlocal links. We extend the above-mentioned randomized injection of nonlocal wirings to the cases of globally coupled Rössler oscillators and a small-world network of coupled FitzHugh-Nagumo oscillators and obtain similar results.
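A compact sketch of the protocol, assuming diffusive coupling through the x variable and explicit Euler integration for brevity (a higher-order integrator is advisable in practice); ring size, coupling strength, and switching period are illustrative, not the paper's values.

```python
import numpy as np

# Sketch: ring of identical Rossler oscillators, diffusively coupled through
# x, with nonlocal links re-randomized every `switch` steps. Illustrative.
N, k_near, eps = 100, 2, 0.2
n_extra, switch = 20, 500
a, b, c = 0.2, 0.2, 5.7
dt, steps = 0.01, 50_000

rng = np.random.default_rng(4)
A = np.zeros((N, N))
for i in range(N):                        # regular nearest-neighbor backbone
    for off in range(1, k_near + 1):
        A[i, (i + off) % N] = A[i, (i - off) % N] = 1.0

state = rng.normal(0, 1, (3, N))
extra = np.zeros_like(A)
for t in range(steps):
    if t % switch == 0:                   # dynamic nonlocal wirings
        extra[:] = 0.0
        src, dst = rng.integers(0, N, n_extra), rng.integers(0, N, n_extra)
        extra[src, dst] = extra[dst, src] = 1.0
    C = A + extra
    x, y, z = state
    dx = -y - z + eps * (C @ x - C.sum(axis=1) * x)   # diffusive x-coupling
    dy = x + a * y
    dz = b + z * (x - c)
    state = state + dt * np.array([dx, dy, dz])
# Coherent vs. incoherent clusters can be diagnosed from distances between
# the (x, y, z) states of neighboring ring nodes.
```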
Collapse
Affiliation(s)
- P Chandran
- Department of Physics, H. H. The Rajah's College (affiliated to Bharathidasan University), Pudukkottai 622 001, Tamil Nadu, India
| | - R Gopal
- Centre for Nonlinear Science & Engineering, School of Electrical & Electronics Engineering, SASTRA Deemed University, Thanjavur 613 401, Tamil Nadu, India
| | - V K Chandrasekar
- Centre for Nonlinear Science & Engineering, School of Electrical & Electronics Engineering, SASTRA Deemed University, Thanjavur 613 401, Tamil Nadu, India
| | - N Athavan
- Department of Physics, H. H. The Rajah's College (affiliated to Bharathidasan University), Pudukkottai 622 001, Tamil Nadu, India
| |
Collapse
|
41
|
A computational model for grid maps in neural populations. J Comput Neurosci 2020; 48:149-159. [PMID: 32125562 DOI: 10.1007/s10827-020-00742-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2019] [Revised: 02/06/2020] [Accepted: 02/11/2020] [Indexed: 10/24/2022]
Abstract
Grid cells in the entorhinal cortex, together with head direction, place, speed and border cells, are major contributors to the organization of spatial representations in the brain. In this work we introduce a novel theoretical and algorithmic framework able to explain the optimality of hexagonal grid-like response patterns. We show that this pattern results from minimal-variance encoding by neurons together with maximal robustness to neuronal noise and a minimal number of encoding neurons. The novelty lies in formulating the encoding problem with neurons as an overcomplete basis (a frame) in which position information is encoded. Through the modern language of Frame Theory, specifically that of tight and equiangular frames, we provide new insights into the optimality of hexagonal grid receptive fields. The proposed model is based on the well-accepted and tested hypothesis of Hebbian learning, providing a simplified cortical-based framework that does not require the presence of velocity-driven oscillations (oscillatory model) or translational symmetries in the synaptic connections (attractor model). We moreover demonstrate that the proposed encoding mechanism naturally explains axis alignment of neighboring grid cells and maps shifts, rotations and scalings of the stimuli onto the shape of grid cells' receptive fields, giving a straightforward explanation of the experimental evidence of grid cell remapping under transformations of environmental cues.
Collapse
|
42
|
Mishra P, Narayanan R. Heterogeneities in intrinsic excitability and frequency-dependent response properties of granule cells across the blades of the rat dentate gyrus. J Neurophysiol 2020; 123:755-772. [PMID: 31913748 PMCID: PMC7052640 DOI: 10.1152/jn.00443.2019] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Revised: 12/25/2019] [Accepted: 01/07/2020] [Indexed: 12/18/2022] Open
Abstract
The dentate gyrus (DG), the input gate to the hippocampus proper, is anatomically segregated into three different sectors, namely, the suprapyramidal blade, the crest region, and the infrapyramidal blade. Although there are well-established differences between these sectors in terms of neuronal morphology, connectivity patterns, and activity levels, differences in electrophysiological properties of granule cells within these sectors have remained unexplored. Here, employing somatic whole cell patch-clamp recordings from the rat DG, we demonstrate that granule cells in these sectors manifest considerable heterogeneities in their intrinsic excitability, temporal summation, action potential characteristics, and frequency-dependent response properties. Across sectors, these neurons showed positive temporal summation of their responses to inputs mimicking excitatory postsynaptic currents and showed little to no sag in their voltage responses to pulse currents. Consistently, the impedance amplitude profile manifested low-pass characteristics and the impedance phase profile lacked positive phase values at all measured frequencies and voltages and for all sectors. Granule cells in all sectors exhibited class I excitability, with broadly linear firing rate profiles, and granule cells in the crest region fired significantly fewer action potentials compared with those in the infrapyramidal blade. Finally, we found weak pairwise correlations across the 18 different measurements obtained individually from each of the three sectors, providing evidence that these measurements are indeed reporting distinct aspects of neuronal physiology. Together, our analyses show that granule cells act as integrators of afferent information and emphasize the need to account for the considerable physiological heterogeneities in assessing their roles in information encoding and processing.NEW & NOTEWORTHY We employed whole cell patch-clamp recordings from granule cells in the three subregions of the rat dentate gyrus to demonstrate considerable heterogeneities in their intrinsic excitability, temporal summation, action potential characteristics, and frequency-dependent response properties. Across sectors, granule cells did not express membrane potential resonance, and their impedance profiles lacked inductive phase leads at all measured frequencies. Our analyses also show that granule cells manifest class I excitability characteristics, categorizing them as integrators of afferent information.
Collapse
Affiliation(s)
- Poonam Mishra
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
| | - Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
| |
Collapse
|
43
|
Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. [PMID: 32009924 PMCID: PMC6978734 DOI: 10.3389/fncom.2019.00097] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Accepted: 12/23/2019] [Indexed: 11/25/2022] Open
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be “learned” by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism is necessary. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while rapidly changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
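The core intuition, that input timescale selects between symmetric (PA-supporting) and asymmetric (SA-supporting) connectivity, can be sketched with a bare temporally asymmetric Hebbian rule; the rule's form, the patterns, and all parameters below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sketch: a temporally asymmetric Hebbian rule binds the current pattern to
# the pattern seen `delay` steps earlier. Slowly changing inputs make W
# nearly symmetric (persistent activity); rapidly changing inputs make it
# asymmetric (sequences). Illustrative throughout.
rng = np.random.default_rng(5)
N, P, eta, delay = 100, 10, 0.1, 1
patterns = (rng.random((P, N)) < 0.2).astype(float)   # sparse binary patterns

def learn(dwell: int) -> np.ndarray:
    """dwell = steps each pattern is presented; large dwell ~ slow stimuli."""
    W = np.zeros((N, N))
    stream = [p for p in patterns for _ in range(dwell)]
    for t in range(delay, len(stream)):
        pre, post = stream[t - delay], stream[t]
        W += eta * np.outer(post, pre) / N            # asymmetric Hebbian term
    return W

for dwell in (1, 20):
    W = learn(dwell)
    ratio = np.linalg.norm(W + W.T) / (np.linalg.norm(W - W.T) + 1e-12)
    print(f"dwell={dwell:2d}: symmetric/antisymmetric norm ratio = {ratio:.1f}")
# dwell=20 yields a much more symmetric W (supports PA); dwell=1 is dominated
# by cross-pattern terms (supports SA).
```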
Collapse
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL, United States
| | - Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL, United States.,Department of Neurobiology, The University of Chicago, Chicago, IL, United States.,Department of Neurobiology, Duke University, Durham, NC, United States.,Department of Physics, Duke University, Durham, NC, United States
| |
Collapse
|
44
|
Hulse BK, Jayaraman V. Mechanisms Underlying the Neural Computation of Head Direction. Annu Rev Neurosci 2020.
Abstract
Many animals use an internal sense of direction to guide their movements through the world. Neurons selective to head direction are thought to support this directional sense and have been found in a diverse range of species, from insects to primates, highlighting their evolutionary importance. Across species, most head-direction networks share four key properties: a unique representation of direction at all times, persistent activity in the absence of movement, integration of angular velocity to update the representation, and the use of directional cues to correct drift. The dynamics of theorized network structures called ring attractors elegantly account for these properties, but their relationship to brain circuits is unclear. Here, we review experiments in rodents and flies that offer insights into potential neural implementations of ring attractor networks. We suggest that a theory-guided search across model systems for biological mechanisms that enable such dynamics would uncover general principles underlying head-direction circuit function.
Collapse
Affiliation(s)
- Brad K Hulse
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
| | - Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
| |
Collapse
|
45
|
Diffusion modeling of interference and decay in auditory short-term memory. Exp Brain Res 2019; 237:1899-1905. [DOI: 10.1007/s00221-019-05533-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2018] [Accepted: 03/27/2019] [Indexed: 10/26/2022]
|
46
|
Panichello MF, DePasquale B, Pillow JW, Buschman TJ. Error-correcting dynamics in visual working memory. Nat Commun 2019; 10:3366. [PMID: 31358740 PMCID: PMC6662698 DOI: 10.1038/s41467-019-11298-3] [Citation(s) in RCA: 66] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2018] [Accepted: 06/30/2019] [Indexed: 11/11/2022] Open
Abstract
Working memory is critical to cognition, decoupling behavior from the immediate world. Yet, it is imperfect; internal noise introduces errors into memory representations. Such errors have been shown to accumulate over time and increase with the number of items simultaneously held in working memory. Here, we show that discrete attractor dynamics mitigate the impact of noise on working memory. These dynamics pull memories towards a few stable representations in mnemonic space, inducing a bias in memory representations but reducing the effect of random diffusion. Model-based and model-free analyses of human and monkey behavior show that discrete attractor dynamics account for the distribution, bias, and precision of working memory reports. Furthermore, attractor dynamics are adaptive. They increase in strength as noise increases with memory load and experiments in humans show these dynamics adapt to the statistics of the environment, such that memories drift towards contextually-predicted values. Together, our results suggest attractor dynamics mitigate errors in working memory by counteracting noise and integrating contextual information into memories. Neural representations in working memory are susceptible to internal noise, which scales with memory load. Here, the authors show that attractor dynamics mitigate the influence of internal noise by pulling memories towards a few stable representations.
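The descriptive dynamics can be sketched as drift-diffusion on a circle with n discrete attractor wells; the potential shape and all parameters below are illustrative, not the fitted model.

```python
import numpy as np

# Sketch: a remembered angle theta drifts down an energy landscape with
# n_attractors stable wells while diffusing. Illustrative parameters.
rng = np.random.default_rng(6)
n_attractors, beta, sigma = 4, 0.05, 0.03   # wells, drift strength, noise
dt, steps, trials = 1.0, 300, 2000

theta0 = rng.uniform(-np.pi, np.pi, trials)       # encoded stimuli
theta = theta0.copy()
for _ in range(steps):
    drift = -beta * np.sin(n_attractors * theta)  # pull toward n stable points
    theta += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
    theta = np.angle(np.exp(1j * theta))          # keep angles on the circle

# Reports cluster near the attractors: biased, but with reduced dispersion
# relative to pure diffusion (set beta = 0 to compare).
err = np.angle(np.exp(1j * (theta - theta0)))
print(f"report SD with attractors: {err.std():.3f} rad")
```

Re-running with beta = 0 isolates pure diffusion and shows the larger dispersion that the attractor drift counteracts, at the cost of the systematic bias described above.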
Collapse
Affiliation(s)
- Matthew F Panichello
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08540, USA
| | - Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08540, USA
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08540, USA.,Department of Psychology, Princeton University, Princeton, NJ, 08540, USA
| | - Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08540, USA. .,Department of Psychology, Princeton University, Princeton, NJ, 08540, USA.
| |
Collapse
|
47
|
Tanaka H, Nelson DR. Non-Hermitian quasilocalization and ring attractor neural networks. Phys Rev E 2019; 99:062406. [PMID: 31330749 DOI: 10.1103/physreve.99.062406] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Indexed: 11/07/2022]
Abstract
Eigenmodes of a broad class of "sparse" random matrices, with interactions concentrated near the diagonal, exponentially localize in space, as initially discovered in 1957 by Anderson for quantum systems. Anderson localization plays ubiquitous roles in a variety of problems, from electrons in solids to mechanical and optical systems. However, its implications in neuroscience (where the connections can be strongly asymmetric) have been largely unexplored, mainly because synaptic connectivity matrices of neural systems are often "dense," which makes the eigenmodes spatially extended. Here we explore roles that Anderson localization could be playing in neural networks by focusing on "spatially structured" disorder in synaptic connectivity matrices. Recently neuroscientists have experimentally confirmed that the local excitation and global inhibition (LEGI) ring attractor model can functionally represent head direction cells in the Drosophila melanogaster central brain. We first study a non-Hermitian (i.e., asymmetric) tight-binding model with disorder and then establish a connection to the LEGI ring attractor model. We discover that (1) principal eigenvectors of the LEGI ring attractor networks with structured nearest-neighbor disorder are "quasilocalized," even with fully dense inhibitory connections; and (2) the quasilocalized eigenvectors play dominant roles in the early-time neural dynamics, and the location of the principal quasilocalized eigenvectors predicts the initial location of the "bump of activity" representing, for example, the head direction of an insect. Our investigations open up avenues for exploration at the intersection of the theory of Anderson localization and neural networks with spatially structured disorder.
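The first ingredient, quasilocalization of the principal eigenvector in a disordered non-Hermitian tight-binding model, can be reproduced in a few lines; the hopping statistics and the inverse-participation-ratio diagnostic below are illustrative choices, not the paper's exact model.

```python
import numpy as np

# Sketch: a non-Hermitian tight-binding ring with disordered, asymmetric
# nearest-neighbor hopping. The principal eigenvector (largest real part)
# concentrates on a few sites: quasilocalization. Illustrative parameters.
rng = np.random.default_rng(7)
N = 200
M = np.zeros((N, N))
for i in range(N):
    # Independent left/right hopping disorder makes M asymmetric.
    M[i, (i + 1) % N] = 1.0 + 0.5 * rng.standard_normal()
    M[i, (i - 1) % N] = 1.0 + 0.5 * rng.standard_normal()

vals, vecs = np.linalg.eig(M)
v = np.abs(vecs[:, np.argmax(vals.real)]) ** 2
v /= v.sum()
ipr = (v ** 2).sum()          # inverse participation ratio
print(f"principal mode occupies roughly {1 / ipr:.1f} of {N} sites")
# Early-time dynamics exp(M t) x0 are dominated by this mode, so the initial
# "bump of activity" forms where the mode localizes.
```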
Collapse
Affiliation(s)
- Hidenori Tanaka
- Department of Applied Physics, Stanford University, Stanford, California 94305, USA.,School of Engineering and Applied Sciences and Kavli Institute for Bionano Science and Technology, Harvard University, Cambridge, Massachusetts 02138, USA
| | - David R Nelson
- Departments of Physics and Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
| |
Collapse
|
48
|
Soures N, Kudithipudi D. Deep Liquid State Machines With Neural Plasticity for Video Activity Recognition. Front Neurosci 2019; 13:686. [PMID: 31333404 PMCID: PMC6621912 DOI: 10.3389/fnins.2019.00686] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 06/17/2019] [Indexed: 11/13/2022] Open
Abstract
Real-world applications such as first-person video activity recognition require intelligent edge devices. However, the size, weight, and power constraints of embedded platforms cannot support resource-intensive state-of-the-art algorithms. Machine-learning-lite algorithms, such as reservoir computing, with shallow 3-layer networks are computationally frugal, as only the output layer is trained. By reducing network depth and plasticity, reservoir computing minimizes computational power and complexity, making the algorithms optimal for edge devices. However, as a trade-off for their frugal nature, reservoir computing sacrifices computational power compared to state-of-the-art methods. A good compromise between reservoir computing and fully supervised networks is the proposed deep-LSM network. The deep-LSM is a deep spiking neural network which captures dynamic information over multiple time-scales with a combination of randomly connected layers and unsupervised layers. The deep-LSM processes the captured dynamic information through an attention-modulated readout layer to perform classification. We demonstrate that the deep-LSM achieves an average of 84.78% accuracy on the DogCentric video activity recognition task, beating the state of the art. The deep-LSM also shows up to 91.13% memory savings and up to 91.55% reduction in synaptic operations when compared to similar recurrent neural network models. Based on these results, we claim that the deep-LSM is capable of overcoming the limitations of traditional reservoir computing while maintaining the low computational cost associated with reservoir computing.
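For orientation, here is a bare-bones reservoir-computing baseline of the kind the deep-LSM builds on: a fixed random recurrent network with only a linear readout trained. It uses a rate-based (echo-state-style) liquid rather than a spiking one, and the toy task, scaling, and ridge solver are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Sketch: fixed random "liquid" projects an input stream into a
# high-dimensional state; only a linear readout is trained (ridge regression).
# Toy task: classify which of two noisy temporal templates was presented.
rng = np.random.default_rng(8)
N, T, trials = 200, 50, 400
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # echo-state-style scaling
w_in = rng.normal(0, 1, N)

templates = [np.sin(np.linspace(0, 4 * np.pi, T)),
             np.sign(np.sin(np.linspace(0, 4 * np.pi, T)))]

def liquid_state(u: np.ndarray) -> np.ndarray:
    x = np.zeros(N)
    for u_t in u:                         # run the input stream through
        x = np.tanh(W @ x + w_in * u_t)   # the fixed recurrent liquid
    return x                              # final state as the feature vector

labels = rng.integers(0, 2, trials)
X = np.stack([liquid_state(templates[y] + 0.3 * rng.standard_normal(T))
              for y in labels])
ridge = 1e-2                              # train only the linear readout
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ (2 * labels - 1))
acc = ((X @ w_out > 0) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```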
Collapse
Affiliation(s)
- Nicholas Soures
- Neuromorphic AI Laboratory, Rochester Institute of Technology, Rochester, NY, United States
| | - Dhireesha Kudithipudi
- Neuromorphic AI Laboratory, Rochester Institute of Technology, Rochester, NY, United States
| |
Collapse
|
49
|
Yim MY, Cai X, Wang XJ. Transforming the Choice Outcome to an Action Plan in Monkey Lateral Prefrontal Cortex: A Neural Circuit Model. Neuron 2019; 103:520-532.e5. [PMID: 31230761 DOI: 10.1016/j.neuron.2019.05.032] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Revised: 02/14/2019] [Accepted: 05/21/2019] [Indexed: 11/28/2022]
Abstract
In economic decisions, we make a good-based choice first, then we transform the outcome into an action to obtain the good. To elucidate the network mechanisms for such transformation, we constructed a neural circuit model consisting of modules representing choice, integration of choice with target locations, and the final action plan. We examined three scenarios regarding how the final action plan could emerge in the neural circuit and compared their implications with experimental data. Our model with heterogeneous connectivity predicts the coexistence of three types of neurons with distinct functions, confirmed by analyzing the neural activity in the lateral prefrontal cortex (LPFC) of behaving monkeys. We obtained a much more distinct classification of functional neuron types in the ventral than the dorsal region of LPFC, suggesting that the action plan is initially generated in ventral LPFC. Our model offers a biologically plausible neural circuit architecture that implements good-to-action transformation during economic choice.
Collapse
Affiliation(s)
- Man Yi Yim
- New York University Shanghai, Shanghai, 200122, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, 200062, China; Present address: Center for Theoretical and Computational Neuroscience and Department of Neuroscience, University of Texas at Austin, Austin, TX 78712, USA
| | - Xinying Cai
- New York University Shanghai, Shanghai, 200122, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, 200062, China; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China.
| | - Xiao-Jing Wang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China; Center for Neural Science, New York University, New York, NY 10003, USA; Shanghai Research Center for Brain Science and Brain-Inspired Intelligence, Zhangjiang Laboratory, Shanghai 201210, China.
| |
Collapse
|
50
|
Seeholzer A, Deger M, Gerstner W. Stability of working memory in continuous attractor networks under the control of short-term plasticity. PLoS Comput Biol 2019; 15:e1006928. [PMID: 31002672 PMCID: PMC6493776 DOI: 10.1371/journal.pcbi.1006928] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Revised: 05/01/2019] [Accepted: 03/04/2019] [Indexed: 12/02/2022] Open
Abstract
Continuous attractor models of working memory store continuous-valued information in continuous state spaces, but are sensitive to noise processes that degrade memory retention. Short-term synaptic plasticity of recurrent synapses has previously been shown to affect continuous attractor systems: short-term facilitation can stabilize memory retention, while short-term depression possibly increases continuous attractor volatility. Here, we present a comprehensive description of the combined effect of both short-term facilitation and depression on noise-induced memory degradation in one-dimensional continuous attractor models. Our theoretical description, applicable to rate models as well as spiking networks close to a stationary state, accurately describes the slow dynamics of stored memory positions as a combination of two processes: (i) diffusion due to variability caused by spikes; and (ii) drift due to random connectivity and neuronal heterogeneity. We find that facilitation decreases both diffusion and directed drifts, while short-term depression tends to increase both. Using mutual information, we evaluate the combined impact of short-term facilitation and depression on the ability of networks to retain stable working memory. Finally, our theory predicts the sensitivity of continuous working memory to distractor inputs and provides conditions for stability of memory. The ability to transiently memorize positions in the visual field is crucial for behavior. Models and experiments have shown that such memories can be maintained in networks of cortical neurons with a continuum of possible activity states that reflects the continuum of positions in the environment. However, the accuracy of positions stored in such networks will degrade over time due to the noisiness of neuronal signaling and imperfections of the biological substrate. Previous work in simplified models has shown that synaptic short-term plasticity could counteract this degradation by dynamically up- or down-regulating the strength of synaptic connections, thereby “pinning down” memorized positions. Here, we present a general theory that accurately predicts the extent of this “pinning down” by short-term plasticity in a broad class of biologically plausible network models, thereby untangling the interplay of varying biological sources of noise with short-term plasticity. Importantly, our work provides a novel theoretical link from the microscopic substrate of working memory (neurons and synaptic connections) to observable behavioral correlates, for example the susceptibility to distracting stimuli.
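The short-term plasticity variables underlying the theory can be sketched with the standard Tsodyks-Markram rate equations for facilitation u and depression x; the time constants and baseline release probability below are illustrative, not fitted values.

```python
# Sketch: steady-state synaptic efficacy u*x under the standard
# Tsodyks-Markram facilitation/depression dynamics, driven by a presynaptic
# rate r(t). Parameters are illustrative.
dt = 1.0                          # ms
tau_f, tau_d, U = 1000.0, 200.0, 0.2

def stp_efficacy(rate_hz: float, t_total: float = 5000.0) -> float:
    u, x = U, 1.0
    r = rate_hz / 1000.0          # spikes per ms
    for _ in range(int(t_total / dt)):
        u += dt * ((U - u) / tau_f + U * (1 - u) * r)   # facilitation
        x += dt * ((1 - x) / tau_d - u * x * r)          # depression
    return u * x

for r in (1, 5, 20, 50):
    print(f"{r:3d} Hz -> steady-state efficacy u*x = {stp_efficacy(r):.3f}")
# Facilitation boosts efficacy inside an active bump (stabilizing the stored
# position), while depression lowers it at high rates: the competition
# analyzed by the paper's diffusion/drift theory.
```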
Collapse
Affiliation(s)
- Alexander Seeholzer
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|