1
Yin H, Sun X, Yang K, Lan Y, Lu Z. Regulation of dentate gyrus pattern separation by hilus ectopic granule cells. Cogn Neurodyn 2025; 19:10. PMID: 39801911; PMCID: PMC11718051; DOI: 10.1007/s11571-024-10204-y.
Abstract
The dentate gyrus (DG) of the hippocampus is reported to perform pattern separation, converting similar inputs into distinct outputs and thus avoiding memory interference. Previous studies have found that humans and mice with epilepsy show significant pattern separation deficits and that a portion of adult-born granule cells (abGCs) migrate abnormally into the hilus, forming hilus ectopic granule cells (HEGCs). Because relevant pathophysiological experiments are lacking, how HEGCs affect pattern separation remains unclear. In this paper, we therefore construct a DG neuronal circuit model and numerically examine the effects of HEGCs on pattern separation. The results show that HEGCs impair pattern separation efficiency because the sparse firing of granule cells (GCs) is disrupted. We provide new insights into the underlying mechanisms by which HEGCs impair pattern separation by analyzing two excitatory circuits that involve HEGCs within the DG: GC-HEGC-GC and GC-mossy cell (MC)-GC. The recurrent excitatory circuit GC-HEGC-GC, formed by HEGC mossy fiber sprouting, significantly enhances GC activity and consequently disrupts pattern separation. The other excitatory circuit, however, has negligible effects on pattern separation because the direct and indirect influences of MCs on GCs together preserve sparse GC firing. Thus, HEGCs impair DG pattern separation mainly through the GC-HEGC-GC circuit, and ablating HEGCs may therefore be an effective way to improve pattern separation in patients with epilepsy.
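A common way to quantify pattern separation in models like this is to compare the similarity of a pair of input patterns with the similarity of the corresponding output patterns: the circuit separates when output similarity drops below input similarity. The sketch below is a generic illustration of that measure in Python/NumPy, not the specific metric used in this paper; the population sizes, sparsity levels, and the use of Pearson correlation are illustrative assumptions.

```python
import numpy as np

def population_similarity(a, b):
    """Pearson correlation between two population activity vectors."""
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

def pattern_separation_index(input_a, input_b, output_a, output_b):
    """Positive when the circuit decorrelates its inputs (output similarity < input similarity)."""
    return population_similarity(input_a, input_b) - population_similarity(output_a, output_b)

# Example: two overlapping EC-like input patterns and two sparse GC-like output patterns
rng = np.random.default_rng(0)
ec_a = (rng.random(400) < 0.10).astype(float)
ec_b = ec_a.copy()
flip = rng.choice(400, size=40, replace=False)       # input B is a noisy copy of input A
ec_b[flip] = 1 - ec_b[flip]
gc_a = (rng.random(2000) < 0.02).astype(float)        # sparse, largely non-overlapping outputs
gc_b = (rng.random(2000) < 0.02).astype(float)

print(pattern_separation_index(ec_a, ec_b, gc_a, gc_b))
```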
Affiliation(s)
- Haibin Yin
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Key Laboratory of Mathematics and Information Networks, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Xiaojuan Sun
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Key Laboratory of Mathematics and Information Networks, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Kai Yang
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Key Laboratory of Mathematics and Information Networks, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Yueheng Lan
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Key Laboratory of Mathematics and Information Networks, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
- Zeying Lu
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People’s Republic of China
2
D'Angelo E, Antonietti A, Geminiani A, Gambosi B, Alessandro C, Buttarazzi E, Pedrocchi A, Casellato C. Linking cellular-level phenomena to brain architecture: the case of spiking cerebellar controllers. Neural Netw 2025; 188:107538. PMID: 40344928; DOI: 10.1016/j.neunet.2025.107538.
Abstract
Linking cellular-level phenomena to brain architecture and behavior is a holy grail for theoretical and computational neuroscience. Advances in neuroinformatics have recently allowed scientists to embed spiking neural networks of the cerebellum with realistic neuron models and multiple synaptic plasticity rules into sensorimotor controllers. By minimizing the distance (error) between the desired and the actual sensory state, and exploiting the sensory prediction, the cerebellar network acquires knowledge about the body-environment interaction and generates corrective signals. In doing so, the cerebellum implements a generalized computational algorithm, allowing it "to learn to predict the timing between correlated events" in a rich set of behavioral contexts. Plastic changes evolve trial by trial and are distributed over multiple synapses, regulating the timing of neuronal discharge and fine-tuning high-speed movements on the millisecond timescale. Thus, spiking cerebellar built-in controllers, among various computational approaches to studying cerebellar function, are helping to reveal the cellular-level substrates of network learning and signal coding, opening new frontiers for predictive computing and autonomous learning in robots.
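As a rough illustration of the error-driven scheme described here, in which a module learns corrective signals by minimizing the gap between desired and actual sensory state, the sketch below implements feedback-error-style learning on a toy one-dimensional plant. It is a non-spiking caricature under assumed dynamics (a discrete-time integrator, a hand-picked feedback gain, and a linear feedforward module), not the spiking cerebellar controllers discussed in the paper.

```python
import numpy as np

# Toy plant: x[t+1] = x[t] + u[t] (assumed for illustration)
def step(x, u):
    return x + u

T = 200
t = np.arange(T)
x_des = np.sin(2 * np.pi * t / 50)           # desired sensory trajectory
K_fb = 0.5                                    # fixed feedback gain (assumption)
w = np.zeros(2)                               # feedforward ("cerebellar") weights on simple features
eta = 0.05                                    # learning rate

for epoch in range(30):
    x, total_fb = 0.0, 0.0
    for k in range(T - 1):
        feats = np.array([x_des[k + 1] - x_des[k], x_des[k]])  # features of the desired movement
        u_ff = w @ feats                      # learned corrective (feedforward) command
        u_fb = K_fb * (x_des[k] - x)          # online error feedback
        x = step(x, u_ff + u_fb)
        w += eta * u_fb * feats               # feedback-error learning: residual feedback teaches w
        total_fb += abs(u_fb)
    print(f"epoch {epoch:2d}  mean |feedback| = {total_fb / T:.4f}")
```

As the feedforward module absorbs the predictable part of the command, the residual feedback (the "error") shrinks across epochs, which is the signature of this class of corrective controllers.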
Affiliation(s)
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alberto Antonietti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Alice Geminiani
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; current address: Neuroscience Program, Champalimaud Center for the Unknown, Lisboa, Portugal
- Benedetta Gambosi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Emiliano Buttarazzi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Claudia Casellato
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
3
Mangili L, Wissing C, Narain D. Fast implicit and slow explicit learning of temporal context. Sci Rep 2025; 15:16343. PMID: 40348889; PMCID: PMC12065811; DOI: 10.1038/s41598-025-01664-1.
Abstract
One is seldom aware of the anticipatory and preemptive feats that the eyeblink system achieves in daily life but it frequently protects the eye from projectiles gone awry and insects on apparent collision courses. This poor awareness is why predictive eyeblinks are considered a form of implicit learning. In motor neuroscience, implicit learning is considered to be slow and, eyeblink conditioning, in particular, is believed to be a rigid and inflexible cerebellar-dependent behavior. In cognitive neuroscience, however, implicit and automatic processes are thought to be rapidly acquired. Here we show that the eyeblink system is, in fact, capable of remarkable cognitive flexibility and can learn on more rapid timescales than previously expected. In a task where we yoked contextual learning of predictive eyeblinks and manual responses in humans, well-timed eyeblink responses flexibly adjusted to external context on each trial. The temporal precision of the predictive eyeblinks exceeded that of manual response times. Learning of the well-timed eyeblink responses was also more rapid than that for the manual response times. This pattern persevered with the use of a cognitive strategy, which seemed to accelerate both types of learning. These results suggest that behaviors associated with the cerebellar cortex that were previously believed to be inflexible and largely implicit, can demonstrate rapid and precise context-dependent temporal control.
Affiliation(s)
- Luca Mangili
- Dept. of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
- Donders Center for Neuroscience, Radboud University, Nijmegen, The Netherlands
- Charlotte Wissing
- Dept. of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
- Donders Center for Neuroscience, Radboud University, Nijmegen, The Netherlands
- Devika Narain
- Dept. of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands.
- Donders Center for Neuroscience, Radboud University, Nijmegen, The Netherlands.
4
Rajeswaran P, Payeur A, Lajoie G, Orsborn AL. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv: The Preprint Server for Biology 2025:2024.03.20.585972. PMID: 38562772; PMCID: PMC10983972; DOI: 10.1101/2024.03.20.585972.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Over time, task-relevant information became concentrated in fewer neurons, unlike with fixed decoders. At the population level, task information also became largely confined to a few neural modes that accounted for an unexpectedly small fraction of the population variance. A neural network model suggests the adaptive decoders directly contribute to forming these more compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
Affiliation(s)
- Alexandre Payeur
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Guillaume Lajoie
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Amy L. Orsborn
- University of Washington, Bioengineering, Seattle, 98115, USA
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
5
Sinha A, Gleeson P, Marin B, Dura-Bernal S, Panagiotou S, Crook S, Cantarelli M, Cannon RC, Davison AP, Gurnani H, Silver RA. The NeuroML ecosystem for standardized multi-scale modeling in neuroscience. eLife 2025; 13:RP95135. PMID: 39792574; PMCID: PMC11723582; DOI: 10.7554/elife.95135.
Abstract
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, the inherent complexity of these biological processes makes the construction and reuse of biologically detailed models challenging. A wide range of tools have been developed to aid their construction and simulation, but differences in design and internal representation act as technical barriers to those who wish to use data-driven models in their research workflows. NeuroML, a model description language for computational neuroscience, was developed to address this fragmentation in modeling tools. Since its inception, NeuroML has evolved into a mature community standard that encompasses a wide range of model types and approaches in computational neuroscience. It has enabled the development of a large ecosystem of interoperable open-source software tools for the creation, visualization, validation, and simulation of data-driven models. Here, we describe how the NeuroML ecosystem can be incorporated into research workflows to simplify the construction, testing, and analysis of standardized models of neural systems, and how it supports the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles, thus promoting open, transparent, and reproducible science.
Affiliation(s)
- Ankur Sinha
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Bóris Marin
- Universidade Federal do ABC, São Bernardo do Campo, Brazil
- Salvador Dura-Bernal
- SUNY Downstate Medical Center, Brooklyn, United States
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, United States
- Robin Angus Silver
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
6
Pemberton J, Chadderton P, Costa RP. Cerebellar-driven cortical dynamics can enable task acquisition, switching and consolidation. Nat Commun 2024; 15:10913. PMID: 39738061; PMCID: PMC11686095; DOI: 10.1038/s41467-024-55315-6.
Abstract
The brain must maintain a stable world model while rapidly adapting to the environment, but the underlying mechanisms are not known. Here, we posit that cortico-cerebellar loops play a key role in this process. We introduce a computational model of cerebellar networks that learn to drive cortical networks with task-outcome predictions. First, using sensorimotor tasks, we show that cerebellar feedback in the presence of stable cortical networks is sufficient for rapid task acquisition and switching. Next, we demonstrate that, when trained in working memory tasks, the cerebellum can also underlie the maintenance of cognitive-specific dynamics in the cortex, explaining a range of optogenetic and behavioural observations. Finally, using our model, we introduce a systems consolidation theory in which task information is gradually transferred from the cerebellum to the cortex. In summary, our findings suggest that cortico-cerebellar loops are an important component of task acquisition, switching, and consolidation in the brain.
Affiliation(s)
- Joseph Pemberton
- Computational Neuroscience Unit, Intelligent Systems Labs, Faculty of Engineering, University of Bristol, Bristol, UK.
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, UK.
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA.
- Paul Chadderton
- School of Physiology, Pharmacology and Neuroscience, Faculty of Life Sciences, University of Bristol, Bristol, UK
- Rui Ponte Costa
- Computational Neuroscience Unit, Intelligent Systems Labs, Faculty of Engineering, University of Bristol, Bristol, UK.
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, UK.
7
Kong E, Zabeh E, Liao Z, Mihaila TS, Wilson C, Santhirasegaran C, Peterka DS, Losonczy A, Geiller T. Recurrent Connectivity Shapes Spatial Coding in Hippocampal CA3 Subregions. bioRxiv: The Preprint Server for Biology 2024:2024.11.07.622379. PMID: 39574766; PMCID: PMC11581023; DOI: 10.1101/2024.11.07.622379.
Abstract
Stable and flexible neural representations of space in the hippocampus are crucial for navigating complex environments. However, how these distinct representations emerge from the underlying local circuit architecture remains unknown. Using two-photon imaging of CA3 subareas during active behavior, we reveal opposing coding strategies within specific CA3 subregions, with proximal neurons demonstrating stable and generalized representations and distal neurons showing dynamic and context-specific activity. We show in artificial neural network models that varying the recurrence level causes these differences in coding properties to emerge. We confirmed the contribution of recurrent connectivity to functional heterogeneity by characterizing the representational geometry of neural recordings and comparing it with theoretical predictions of neural manifold dimensionality. Our results indicate that local circuit organization, particularly recurrent connectivity among excitatory neurons, plays a key role in shaping complementary spatial representations within the hippocampus.
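The "neural manifold dimensionality" comparison mentioned in this abstract is often operationalized with the participation ratio of the activity covariance spectrum, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ², where λᵢ are covariance eigenvalues. The sketch below computes that quantity for a samples-by-neurons activity matrix; it is a generic estimator and synthetic example data, not necessarily the exact measure or data used in this preprint.

```python
import numpy as np

def participation_ratio(activity):
    """Dimensionality of a (samples x neurons) activity matrix from its covariance spectrum.

    PR = (sum of eigenvalues)^2 / sum of squared eigenvalues; ranges from 1 to n_neurons.
    """
    centered = activity - activity.mean(axis=0, keepdims=True)
    eig = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eig = np.clip(eig, 0, None)            # guard against tiny negative values from round-off
    return eig.sum() ** 2 / (eig ** 2).sum()

# Example: low-dimensional (shared latent) vs. high-dimensional (independent) population activity
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 3))                                   # 3 shared latent factors
low_d = latent @ rng.normal(size=(3, 100)) + 0.1 * rng.normal(size=(500, 100))
high_d = rng.normal(size=(500, 100))
print(participation_ratio(low_d), participation_ratio(high_d))
```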
Affiliation(s)
- Eunji Kong
- Department of Neuroscience, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Erfan Zabeh
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Zhenrui Liao
- Department of Neuroscience, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Tiberiu S Mihaila
- Department of Neuroscience, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Caroline Wilson
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Charan Santhirasegaran
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Darcy S Peterka
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Attila Losonczy
- Department of Neuroscience, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Tristan Geiller
- Department of Neuroscience, Yale University, New Haven, CT, USA
- Wu Tsai Institute, Yale University, New Haven, CT, USA
8
Jun S, Park H, Kim M, Kang S, Kim T, Kim D, Yamamoto Y, Tanaka-Yamamoto K. Increased understanding of complex neuronal circuits in the cerebellar cortex. Front Cell Neurosci 2024; 18:1487362. PMID: 39497921; PMCID: PMC11532081; DOI: 10.3389/fncel.2024.1487362.
Abstract
The prevailing belief has been that the fundamental structures of cerebellar neuronal circuits, consisting of a few major neuron types, are simple and well understood. Given that the cerebellum has long been known to be crucial for motor behaviors, these simple yet organized circuit structures seemed beneficial for theoretical studies proposing neural mechanisms underlying cerebellar motor functions and learning. On the other hand, experimental studies using advanced techniques have revealed numerous structural properties that were not traditionally defined. These include subdivided neuronal types and their circuit structures, feedback pathways from output Purkinje cells, and the multidimensional organization of neuronal interactions. With the recent recognition of the cerebellar involvement in non-motor functions, it is possible that these newly identified structural properties, which are potentially capable of generating greater complexity than previously recognized, are associated with increased information capacity. This, in turn, could contribute to the wide range of cerebellar functions. However, it remains largely unknown how such structural properties contribute to cerebellar neural computations through the regulation of neuronal activity or synaptic transmissions. To promote further research into cerebellar circuit structures and their functional significance, we aim to summarize the newly identified structural properties of the cerebellar cortex and discuss future research directions concerning cerebellar circuit structures and their potential functions.
Affiliation(s)
- Soyoung Jun
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Heeyoun Park
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Muwoong Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology (UST), Seoul, Republic of Korea
- Seulgi Kang
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology (UST), Seoul, Republic of Korea
- Taehyeong Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Department of Integrated Biomedical and Life Sciences, Korea University, Seoul, Republic of Korea
- Daun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Department of Life Science, Korea University, Seoul, Republic of Korea
- Yukio Yamamoto
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Keiko Tanaka-Yamamoto
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
9
Gilbert M, Rasmussen A. Gap Junctions May Have A Computational Function In The Cerebellum: A Hypothesis. Cerebellum (London, England) 2024; 23:1903-1915. PMID: 38499814; PMCID: PMC11489243; DOI: 10.1007/s12311-024-01680-3.
Abstract
In the cerebellum, granule cells make parallel fibre contact on (and excite) Golgi cells, and Golgi cells inhibit granule cells, forming an open feedback loop. Parallel fibres excite Golgi cells synaptically, each making a single contact. Golgi cells inhibit granule cells in a structure called a glomerulus almost exclusively by GABA spillover acting through extrasynaptic GABAA receptors. Golgi cells are connected dendritically by gap junctions. It has long been suspected that feedback contributes to homeostatic regulation of parallel fibre activity, maintaining the fraction of the population that is active at a low level. We present a detailed neurophysiological and computationally rendered model of functionally grouped Golgi cells which can infer the density of parallel fibre activity and convert it into proportional modulation of inhibition of granule cells. The conversion is unlearned and not actively computed; rather, output is simply the computational effect of cell morphology and network architecture. Unexpectedly, the conversion becomes more precise at low density, suggesting that self-regulation is attracted to a sparse code because it is stable. A computational function of gap junctions may not be confined to the cerebellum.
Affiliation(s)
- Mike Gilbert
- School of Psychology, College of Life and Environmental Sciences, University of Birmingham, B15 2TT, Birmingham, UK.
- Anders Rasmussen
- Department of Experimental Medical Science, Lund University, BMC F10, 22184, Lund, Sweden
10
Wang Z, Yang K, Sun X. Effect of adult hippocampal neurogenesis on pattern separation and its applications. Cogn Neurodyn 2024; 18:1-14. PMID: 39568526; PMCID: PMC11564429; DOI: 10.1007/s11571-024-10110-3.
Abstract
Adult hippocampal neurogenesis (AHN) is considered essential in memory formation. The dentate gyrus neural network containing newborn dentate gyrus granule cells at the critical period (4-6 weeks) has been widely discussed in neurophysiological and behavioral experiments. However, how newborn dentate gyrus granule cells at this critical period influence pattern separation in the dentate gyrus remains unclear. To address this issue, we propose a biologically realistic dentate gyrus neural network model with AHN. Leveraging this model, we find that pattern separation is enhanced at a medium level of neurogenesis (5% of mature granule cells), because the sparse firing of mature granule cells is increased. This change can be understood from two aspects. On one hand, newborn granule cells compete with mature granule cells for inputs from the entorhinal cortex, thereby weakening the firing of mature granule cells. On the other hand, newborn granule cells effectively enhance the feedback inhibition level of the network by promoting the firing of interneurons (mossy cells and basket cells), thereby indirectly regulating the sparse firing of mature granule cells. To verify the validity of the model for pattern separation, we apply it to a similar-concept separation task and show that it outperforms the original model counterparts on this task.
Affiliation(s)
- Zengbin Wang
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People's Republic of China
- Kai Yang
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People's Republic of China
- Xiaojuan Sun
- School of Science, Beijing University of Posts and Telecommunications, #10 Xitucheng Road, Beijing, 100876 People's Republic of China
11
Kumari S, Narayanan R. Ion-channel degeneracy and heterogeneities in the emergence of signature physiological characteristics of dentate gyrus granule cells. J Neurophysiol 2024; 132:991-1013. PMID: 39110941; DOI: 10.1152/jn.00071.2024.
Abstract
Complex systems are neither fully determined nor completely random. Biological complex systems, including single neurons, manifest intermediate regimes of randomness that recruit integration of specific combinations of functionally specialized subsystems. Such emergence of biological function provides the substrate for the expression of degeneracy, the ability of disparate combinations of subsystems to yield similar function. Here, we present evidence for the expression of degeneracy in morphologically realistic models of dentate gyrus granule cells (GCs) through functional integration of disparate ion-channel combinations. We performed a 45-parameter randomized search spanning 16 active and passive ion channels, each biophysically constrained by their gating kinetics and localization profiles, to search for valid GC models. Valid models were those that satisfied 17 sub- and suprathreshold cellular-scale electrophysiological measurements from rat GCs. A vast majority (>99%) of the 15,000 random models were not electrophysiologically valid, demonstrating that arbitrarily random ion-channel combinations would not yield GC functions. The 141 valid models (0.94% of 15,000) manifested heterogeneities in and cross-dependencies across local and propagating electrophysiological measurements, which matched with their respective biological counterparts. Importantly, these valid models were widespread throughout the parametric space and manifested weak cross-dependencies across different parameters. These observations together showed that GC physiology could neither be obtained by entirely random ion-channel combinations nor is there an entirely determined single parametric combination that satisfied all constraints. The complexity, the heterogeneities in measurement and parametric spaces, and degeneracy associated with GC physiology should be rigorously accounted for while assessing GCs and their robustness under physiological and pathological conditions.NEW & NOTEWORTHY A recent study from our laboratory had demonstrated pronounced heterogeneities in a set of 17 electrophysiological measurements obtained from a large population of rat hippocampal granule cells. Here, we demonstrate the manifestation of ion-channel degeneracy in a heterogeneous population of morphologically realistic conductance-based granule cell models that were validated against these measurements and their cross-dependencies. Our analyses show that single neurons are complex entities whose functions emerge through intricate interactions among several functionally specialized subsystems.
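The workflow described here — draw random ion-channel parameter combinations within biophysical bounds, simulate, and keep only models that satisfy every electrophysiological constraint — can be sketched generically as below. The measurement function is a stand-in placeholder (the real study evaluates 17 measurements on morphologically realistic conductance-based simulations), and the parameter names, bounds, and acceptance windows shown are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameter bounds (placeholder names; the study spans 45 parameters / 16 channels)
bounds = {
    "gNa":   (10.0, 200.0),   # mS/cm^2
    "gKdr":  (1.0, 100.0),
    "gKA":   (0.1, 50.0),
    "gLeak": (0.01, 0.5),
}

# Illustrative acceptance windows (placeholders for the 17 experimental constraints)
targets = {
    "input_resistance": (100.0, 300.0),   # MOhm
    "spike_amplitude":  (80.0, 120.0),    # mV
}

def simulate_measurements(params):
    """Placeholder for the conductance-based simulation; returns mock measurements."""
    return {
        "input_resistance": 20.0 / params["gLeak"],
        "spike_amplitude": 60.0 + 4.0 * np.sqrt(params["gNa"]),
    }

def random_model():
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

def is_valid(measurements):
    return all(lo <= measurements[k] <= hi for k, (lo, hi) in targets.items())

models = [random_model() for _ in range(15000)]
valid = [m for m in models if is_valid(simulate_measurements(m))]
print(f"{len(valid)} of {len(models)} random models satisfied all constraints "
      f"({100 * len(valid) / len(models):.2f}%)")
```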
Affiliation(s)
- Sanjna Kumari
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
12
Garcia-Garcia MG, Kapoor A, Akinwale O, Takemaru L, Kim TH, Paton C, Litwin-Kumar A, Schnitzer MJ, Luo L, Wagner MJ. A cerebellar granule cell-climbing fiber computation to learn to track long time intervals. Neuron 2024; 112:2749-2764.e7. PMID: 38870929; PMCID: PMC11343686; DOI: 10.1016/j.neuron.2024.05.019.
Abstract
In classical cerebellar learning, Purkinje cells (PkCs) associate climbing fiber (CF) error signals with predictive granule cells (GrCs) that were active just prior (∼150 ms). The cerebellum also contributes to behaviors characterized by longer timescales. To investigate how GrC-CF-PkC circuits might learn seconds-long predictions, we imaged simultaneous GrC-CF activity over days of forelimb operant conditioning for delayed water reward. As mice learned reward timing, numerous GrCs developed anticipatory activity ramping at different rates until reward delivery, followed by widespread time-locked CF spiking. Relearning longer delays further lengthened GrC activations. We computed CF-dependent GrC→PkC plasticity rules, demonstrating that reward-evoked CF spikes sufficed to grade many GrC synapses by anticipatory timing. We predicted and confirmed that PkCs could thereby continuously ramp across seconds-long intervals from movement to reward. Learning thus leads to new GrC temporal bases linking predictors to remote CF reward signals-a strategy well suited for learning to track the long intervals common in cognitive domains.
Affiliation(s)
- Martha G Garcia-Garcia
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Akash Kapoor
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Oluwatobi Akinwale
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Lina Takemaru
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Tony Hyun Kim
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Casey Paton
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Mark J Schnitzer
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Liqun Luo
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Mark J Wagner
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA.
13
Toth J, Sidleck B, Lombardi O, Hou T, Eldo A, Kerlin M, Zeng X, Saeed D, Agarwal P, Leonard D, Andrino L, Inbar T, Malina M, Insanally MN. Dynamic gating of perceptual flexibility by non-classically responsive cortical neurons. Research Square 2024:rs.3.rs-4650869. PMID: 39108496; PMCID: PMC11302693; DOI: 10.21203/rs.3.rs-4650869/v1.
Abstract
The ability to flexibly respond to sensory cues in dynamic environments is essential to adaptive auditory-guided behaviors. Cortical spiking responses during behavior are highly diverse, ranging from reliable trial-averaged responses to seemingly random firing patterns. While the reliable responses of 'classically responsive' cells have been extensively studied for decades, the contribution of irregular spiking 'non-classically responsive' cells to behavior has remained underexplored despite their prevalence. Here, we show that flexible auditory behavior results from interactions between local auditory cortical circuits comprised of heterogeneous responses and inputs from secondary motor cortex. Strikingly, non-classically responsive neurons in auditory cortex were preferentially recruited during learning, specifically during rapid learning phases when the greatest gains in behavioral performance occur. Population-level decoding revealed that during rapid learning mixed ensembles comprised of both classically and non-classically responsive cells encode significantly more task information than homogenous ensembles of either type and emerge as a functional unit critical for learning. Optogenetically silencing inputs from secondary motor cortex selectively modulated non-classically responsive cells in the auditory cortex and impaired reversal learning by preventing the remapping of a previously learned stimulus-reward association. Top-down inputs orchestrated highly correlated non-classically responsive ensembles in sensory cortex providing a unique task-relevant manifold for learning. Thus, non-classically responsive cells in sensory cortex are preferentially recruited by top-down inputs to enable neural and behavioral flexibility.
Affiliation(s)
- Jade Toth
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Blake Sidleck
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Olivia Lombardi
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Tiange Hou
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Abraham Eldo
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Madelyn Kerlin
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Xiangjian Zeng
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Danyall Saeed
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Priya Agarwal
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Dylan Leonard
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Luz Andrino
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Tal Inbar
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael Malina
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Michele N. Insanally
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
14
Thornton-Kolbe EM, Ahmed M, Gordon FR, Sieriebriennikov B, Williams DL, Kurmangaliyev YZ, Clowney EJ. Spatial constraints and cell surface molecule depletion structure a randomly connected learning circuit. bioRxiv: The Preprint Server for Biology 2024:2024.07.17.603956. PMID: 39071296; PMCID: PMC11275898; DOI: 10.1101/2024.07.17.603956.
Abstract
The brain can represent almost limitless objects to "categorize an unlabeled world" (Edelman, 1989). This feat is supported by expansion layer circuit architectures, in which neurons carrying information about discrete sensory channels make combinatorial connections onto much larger postsynaptic populations. Combinatorial connections in expansion layers are modeled as randomized sets. The extent to which randomized wiring exists in vivo is debated, and how combinatorial connectivity patterns are generated during development is not understood. Non-deterministic wiring algorithms could program such connectivity using minimal genomic information. Here, we investigate anatomic and transcriptional patterns and perturb partner availability to ask how Kenyon cells, the expansion layer neurons of the insect mushroom body, obtain combinatorial input from olfactory projection neurons. Olfactory projection neurons form their presynaptic outputs in an orderly, predictable, and biased fashion. We find that Kenyon cells accept spatially co-located but molecularly heterogeneous inputs from this orderly map, and ask how Kenyon cell surface molecule expression impacts partner choice. Cell surface immunoglobulins are broadly depleted in Kenyon cells, and we propose that this allows them to form connections with molecularly heterogeneous partners. This model can explain how developmentally identical neurons acquire diverse wiring identities.
Affiliation(s)
- Emma M. Thornton-Kolbe
- Neurosciences Graduate Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Maria Ahmed
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Finley R. Gordon
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Donnell L. Williams
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- E. Josephine Clowney
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Michigan Neuroscience Institute, Ann Arbor, MI, USA
15
Fernández JG, Keemink S, van Gerven M. Gradient-free training of recurrent neural networks using random perturbations. Front Neurosci 2024; 18:1439155. PMID: 39050673; PMCID: PMC11267880; DOI: 10.3389/fnins.2024.1439155.
Abstract
Recurrent neural networks (RNNs) hold immense potential for computations due to their Turing completeness and sequential processing capabilities, yet existing methods for their training encounter efficiency challenges. Backpropagation through time (BPTT), the prevailing method, extends the backpropagation (BP) algorithm by unrolling the RNN over time. However, this approach suffers from significant drawbacks, including the need to interleave forward and backward phases and store exact gradient information. Furthermore, BPTT has been shown to struggle to propagate gradient information for long sequences, leading to vanishing gradients. An alternative strategy to using gradient-based methods like BPTT involves stochastically approximating gradients through perturbation-based methods. This learning approach is exceptionally simple, necessitating only forward passes in the network and a global reinforcement signal as feedback. Despite its simplicity, the random nature of its updates typically leads to inefficient optimization, limiting its effectiveness in training neural networks. In this study, we present a new approach to perturbation-based learning in RNNs whose performance is competitive with BPTT, while maintaining the inherent advantages over gradient-based learning. To this end, we extend the recently introduced activity-based node perturbation (ANP) method to operate in the time domain, leading to more efficient learning and generalization. We subsequently conduct a range of experiments to validate our approach. Our results show similar performance, convergence time and scalability when compared to BPTT, strongly outperforming standard node perturbation and weight perturbation methods. These findings suggest that perturbation-based learning methods offer a versatile alternative to gradient-based methods for training RNNs which can be ideally suited for neuromorphic computing applications.
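For readers unfamiliar with perturbation-based learning, the sketch below shows the simplest variant (weight perturbation) on a toy recurrent network: perturb the parameters with noise, measure the resulting change in loss, and nudge the parameters against perturbations that increased it. This is a bare-bones illustration of the family of methods the paper extends, not the authors' activity-based node perturbation algorithm; the network size, task, noise scale, and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (deliberately easy): read a length-10 sequence and output its final element.
n_in, n_hidden, seq_len = 1, 10, 10
params = [0.5 * rng.normal(size=(n_hidden, n_in)),      # W_in
          0.1 * rng.normal(size=(n_hidden, n_hidden)),  # W_rec
          np.zeros(n_hidden)]                           # w_out

def forward(p, x_seq):
    W_in, W_rec, w_out = p
    h = np.zeros(n_hidden)
    for x in x_seq:
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)
    return w_out @ h

def loss(p, batch):
    return np.mean([(forward(p, x) - x[-1]) ** 2 for x in batch])

sigma, eta = 0.05, 0.02
for step in range(3001):
    batch = [rng.uniform(-1, 1, size=seq_len) for _ in range(8)]
    base = loss(params, batch)
    noise = [sigma * rng.normal(size=p.shape) for p in params]
    delta = loss([p + n for p, n in zip(params, noise)], batch) - base   # global scalar feedback
    # Stochastic gradient estimate: move against perturbations that increased the loss
    params = [p - eta * (delta / sigma ** 2) * n for p, n in zip(params, noise)]
    if step % 500 == 0:
        print(f"step {step:4d}  loss {base:.4f}")
```

Only forward passes and a single scalar reinforcement signal are needed, which is the property that makes this family of methods attractive for neuromorphic hardware; the cost is a noisy, slowly converging update.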
Affiliation(s)
- Jesús García Fernández
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
16
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
17
Lin TF, Busch SE, Hansel C. Intrinsic and synaptic determinants of receptive field plasticity in Purkinje cells of the mouse cerebellum. Nat Commun 2024; 15:4645. PMID: 38821918; PMCID: PMC11143328; DOI: 10.1038/s41467-024-48373-3.
Abstract
Non-synaptic (intrinsic) plasticity of membrane excitability contributes to aspects of memory formation, but it remains unclear whether it merely facilitates synaptic long-term potentiation or plays a permissive role in determining the impact of synaptic weight increase. We use tactile stimulation and electrical activation of parallel fibers to probe intrinsic and synaptic contributions to receptive field plasticity in awake mice during two-photon calcium imaging of cerebellar Purkinje cells. Repetitive activation of both stimuli induced response potentiation that is impaired in mice with selective deficits in either synaptic or intrinsic plasticity. Spatial analysis of calcium signals demonstrated that intrinsic, but not synaptic plasticity, enhances the spread of dendritic parallel fiber response potentiation. Simultaneous dendrite and axon initial segment recordings confirm these dendritic events affect axonal output. Our findings support the hypothesis that intrinsic plasticity provides an amplification mechanism that exerts a permissive control over the impact of long-term potentiation on neuronal responsiveness.
Affiliation(s)
- Ting-Feng Lin
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Silas E Busch
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Christian Hansel
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA.
18
Shu WC, Jackson MB. Intrinsic and Synaptic Contributions to Repetitive Spiking in Dentate Granule Cells. J Neurosci 2024; 44:e0716232024. PMID: 38503495; PMCID: PMC11063872; DOI: 10.1523/jneurosci.0716-23.2024.
Abstract
Repetitive firing of granule cells (GCs) in the dentate gyrus (DG) facilitates synaptic transmission to the CA3 region. This facilitation can gate and amplify the flow of information through the hippocampus. High-frequency bursts in the DG are linked to behavior and plasticity, but GCs do not readily burst. Under normal conditions, a single shock to the perforant path in a hippocampal slice typically drives a GC to fire a single spike, and only occasionally more than one spike is seen. Repetitive spiking in GCs is not robust, and the mechanisms are poorly understood. Here, we used a hybrid genetically encoded voltage sensor to image voltage changes evoked by cortical inputs in many mature GCs simultaneously in hippocampal slices from male and female mice. This enabled us to study relatively infrequent double and triple spikes. We found GCs are relatively homogeneous and their double spiking behavior is cell autonomous. Blockade of GABA type A receptors increased multiple spikes and prolonged the interspike interval, indicating inhibitory interneurons limit repetitive spiking and set the time window for successive spikes. Inhibiting synaptic glutamate release showed that recurrent excitation mediated by hilar mossy cells contributes to, but is not necessary for, multiple spiking. Blockade of T-type Ca2+ channels did not reduce multiple spiking but prolonged interspike intervals. Imaging voltage changes in different GC compartments revealed that second spikes can be initiated in either dendrites or somata. Thus, pharmacological and biophysical experiments reveal roles for both synaptic circuitry and intrinsic excitability in GC repetitive spiking.
Affiliation(s)
- Wen-Chi Shu
- Department of Neuroscience and Biophysics Program, University of Wisconsin-Madison, Wisconsin 53705
- Meyer B Jackson
- Department of Neuroscience and Biophysics Program, University of Wisconsin-Madison, Wisconsin 53705
19
Fleming EA, Field GD, Tadross MR, Hull C. Local synaptic inhibition mediates cerebellar granule cell pattern separation and enables learned sensorimotor associations. Nat Neurosci 2024; 27:689-701. PMID: 38321293; PMCID: PMC11288180; DOI: 10.1038/s41593-023-01565-4.
Abstract
The cerebellar cortex has a key role in generating predictive sensorimotor associations. To do so, the granule cell layer is thought to establish unique sensorimotor representations for learning. However, how this is achieved and how granule cell population responses contribute to behavior have remained unclear. To address these questions, we have used in vivo calcium imaging and granule cell-specific pharmacological manipulation of synaptic inhibition in awake, behaving mice. These experiments indicate that inhibition sparsens and thresholds sensory responses, limiting overlap between sensory ensembles and preventing spiking in many granule cells that receive excitatory input. Moreover, inhibition can be recruited in a stimulus-specific manner to powerfully decorrelate multisensory ensembles. Consistent with these results, granule cell inhibition is required for accurate cerebellum-dependent sensorimotor behavior. These data thus reveal key mechanisms for granule cell layer pattern separation beyond those envisioned by classical models.
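The core claim here — that subtractive inhibition plus a spike threshold sparsens responses and decorrelates overlapping input ensembles — is easy to illustrate numerically. The sketch below pushes two correlated "mossy-fiber-like" drive patterns through a thresholded population and compares input versus output correlations at increasing inhibition levels; it is a cartoon of the principle, not a model of the actual granule cell layer circuitry or of the imaging data.

```python
import numpy as np

rng = np.random.default_rng(3)

def output_corr(inhibition, drive_a, drive_b):
    """Rectify population drive after subtracting a global inhibitory term."""
    out_a = np.maximum(drive_a - inhibition, 0.0)
    out_b = np.maximum(drive_b - inhibition, 0.0)
    active = (out_a > 0) | (out_b > 0)
    if active.sum() < 2:
        return np.nan
    return np.corrcoef(out_a, out_b)[0, 1]

# Two stimuli delivering correlated excitatory drive to 2000 "granule cells"
n = 2000
shared = rng.normal(size=n)
drive_a = shared + 0.5 * rng.normal(size=n)
drive_b = shared + 0.5 * rng.normal(size=n)
print("input correlation:", np.corrcoef(drive_a, drive_b)[0, 1])

for inhibition in [0.0, 0.5, 1.0, 1.5, 2.0]:
    frac_active = np.mean(drive_a - inhibition > 0)
    print(f"inhibition={inhibition:.1f}  active fraction={frac_active:.2f}  "
          f"output correlation={output_corr(inhibition, drive_a, drive_b):.2f}")
```

Raising the inhibition level shrinks the active fraction and pulls the output correlation below the input correlation, which is the sparsening-plus-decorrelation effect the abstract describes.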
Affiliation(s)
- Greg D Field
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Michael R Tadross
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Court Hull
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA.
20
Kang L, Toyoizumi T. Distinguishing examples while building concepts in hippocampal and artificial networks. Nat Commun 2024; 15:647. PMID: 38245502; PMCID: PMC10799871; DOI: 10.1038/s41467-024-44877-0.
Abstract
The hippocampal subfield CA3 is thought to function as an auto-associative network that stores experiences as memories. Information from these experiences arrives directly from the entorhinal cortex as well as indirectly through the dentate gyrus, which performs sparsification and decorrelation. The computational purpose for these dual input pathways has not been firmly established. We model CA3 as a Hopfield-like network that stores both dense, correlated encodings and sparse, decorrelated encodings. As more memories are stored, the former merge along shared features while the latter remain distinct. We verify our model's prediction in rat CA3 place cells, which exhibit more distinct tuning during theta phases with sparser activity. Finally, we find that neural networks trained in multitask learning benefit from a loss term that promotes both correlated and decorrelated representations. Thus, the complementary encodings we have found in CA3 can provide broad computational advantages for solving complex tasks.
Affiliation(s)
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan.
- Graduate School of Informatics, Kyoto University, 36-1 Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan.
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan
- Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
21
Bruel A, Abadía I, Collin T, Sakr I, Lorach H, Luque NR, Ros E, Ijspeert A. The spinal cord facilitates cerebellar upper limb motor learning and control; inputs from neuromusculoskeletal simulation. PLoS Comput Biol 2024; 20:e1011008. PMID: 38166093; PMCID: PMC10786408; DOI: 10.1371/journal.pcbi.1011008.
Abstract
Complex interactions between brain regions and the spinal cord (SC) govern body motion, which is ultimately driven by muscle activation. Motor planning and learning are mainly conducted in higher brain regions, whilst the SC acts as a brain-muscle gateway and as a motor control centre providing fast reflexes and muscle activity regulation. Thus, higher brain areas need to cope with the SC as an inherent and evolutionarily older part of the body dynamics. Here, we address the question of how SC dynamics affects motor learning within the cerebellum; in particular, does the SC facilitate cerebellar motor learning or constitute a biological constraint? We provide an exploratory framework by integrating biologically plausible cerebellar and SC computational models in a musculoskeletal upper limb control loop. The cerebellar model, equipped with the main form of cerebellar plasticity, provides motor adaptation, whilst the SC model implements the stretch reflex and reciprocal inhibition between antagonist muscles. The resulting spino-cerebellar model is tested on a set of upper limb motor tasks, including external perturbation studies. A cerebellar model lacking the SC model and directly controlling the simulated muscles was also tested on the same tasks. The performances of the spino-cerebellar and cerebellar models were then compared, allowing us to directly address the SC influence on cerebellar motor adaptation and learning, and on handling external motor perturbations. Performance was assessed in both joint and muscle space, and compared with kinematic and EMG recordings from healthy participants. The differences in cerebellar synaptic adaptation between the two models were also studied. We conclude that the SC facilitates cerebellar motor learning: when the SC circuits are in the loop, faster convergence in motor learning is achieved with simpler cerebellar synaptic weight distributions. The SC is also found to improve robustness against external perturbations by better reproducing and modulating muscle cocontraction patterns.
Collapse
Affiliation(s)
- Alice Bruel
- Biorobotics Laboratory, EPFL, Lausanne, Switzerland
| | - Ignacio Abadía
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
| | | | - Icare Sakr
- NeuroRestore, EPFL, Lausanne, Switzerland
| | | | - Niceto R. Luque
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
| | - Eduardo Ros
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
| | | |
Collapse
|
22
|
Farrell M, Recanatesi S, Shea-Brown E. From lazy to rich to exclusive task representations in neural networks and neural codes. Curr Opin Neurobiol 2023; 83:102780. [PMID: 37757585 DOI: 10.1016/j.conb.2023.102780] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 08/04/2023] [Accepted: 08/16/2023] [Indexed: 09/29/2023]
Abstract
Neural circuits, both in the brain and in "artificial" neural network models, learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models for brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recently developed theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.
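The lazy/rich distinction can be made concrete with a simple diagnostic: train a small network and measure how far its hidden weights and hidden representations move from initialization (small relative change suggests lazy learning, large change suggests rich learning). The sketch below is one such diagnostic under arbitrary toy settings; the network, task and metrics are illustrative, not those of the review.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_samples, lr, epochs = 10, 200, 200, 0.05, 500

X = rng.normal(size=(n_samples, n_in))
targets = np.sin(X[:, 0])                          # arbitrary smooth target

W1 = rng.normal(size=(n_hid, n_in)) / np.sqrt(n_in)
w2 = rng.normal(size=n_hid) / np.sqrt(n_hid)
W1_init, H_init = W1.copy(), np.tanh(X @ W1.T)

for _ in range(epochs):
    H = np.tanh(X @ W1.T)                          # hidden representation
    err = H @ w2 - targets
    grad_w2 = H.T @ err / n_samples
    grad_W1 = ((err[:, None] * w2) * (1 - H**2)).T @ X / n_samples
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

H_final = np.tanh(X @ W1.T)
print("relative hidden weight change:", np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init))
print("relative representation change:", np.linalg.norm(H_final - H_init) / np.linalg.norm(H_init))
```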
Collapse
Affiliation(s)
- Matthew Farrell
- John A. Paulson School of Engineering and Applied Sciences, Harvard University and Center for Brain Science, Harvard University, United States
| | - Stefano Recanatesi
- Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States
| | - Eric Shea-Brown
- Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States.
| |
Collapse
|
23
|
Kang L, Toyoizumi T. Hopfield-like network with complementary encodings of memories. Phys Rev E 2023; 108:054410. [PMID: 38115467 DOI: 10.1103/physreve.108.054410] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 08/28/2023] [Indexed: 12/21/2023]
Abstract
We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
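A toy version of the storage and threshold-dependent retrieval scheme described above can be sketched as follows. The coding levels, the variance-normalized Hebbian rule and the k-winners-take-all stand-in for an activity threshold are illustrative simplifications, not the paper's mean-field model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_mem = 400, 5
a_dense, a_sparse = 0.5, 0.05

dense = (rng.random((n_mem, N)) < a_dense).astype(float)
sparse = (rng.random((n_mem, N)) < a_sparse).astype(float)

# Hebbian storage of variance-normalized, mean-subtracted complementary encodings
enc = ((dense - a_dense) / np.sqrt(a_dense * (1 - a_dense))
       + (sparse - a_sparse) / np.sqrt(a_sparse * (1 - a_sparse)))
W = enc.T @ enc / N
np.fill_diagonal(W, 0.0)

def retrieve(cue, coding_level, steps=5):
    """Iterate the network while keeping the k most strongly driven units
    active; a small k mimics retrieval at a high activity threshold."""
    s, k = cue.copy(), int(coding_level * N)
    for _ in range(steps):
        h = W @ (s - s.mean())
        s = np.zeros(N)
        s[np.argsort(h)[-k:]] = 1.0
    return s

def overlap(a, b):
    return np.corrcoef(a, b)[0, 1]

cue = dense[0] * (rng.random(N) < 0.9)             # degraded dense cue of memory 0
print("low threshold  -> overlap with dense pattern :", overlap(retrieve(cue, a_dense), dense[0]))
print("high threshold -> overlap with sparse pattern:", overlap(retrieve(cue, a_sparse), sparse[0]))
```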
Collapse
Affiliation(s)
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
- Graduate School of Informatics, Kyoto University, 36-1 Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
- Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
| |
Collapse
|
24
|
Zang Y, De Schutter E. Recent data on the cerebellum require new models and theories. Curr Opin Neurobiol 2023; 82:102765. [PMID: 37591124 DOI: 10.1016/j.conb.2023.102765] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 07/22/2023] [Accepted: 07/23/2023] [Indexed: 08/19/2023]
Abstract
The cerebellum has been a popular topic for theoretical studies because its structure was thought to be simple. Since David Marr and James Albus related its function to motor skill learning and proposed the Marr-Albus cerebellar learning model, this theory has guided and inspired cerebellar research. In this review, we summarize the theoretical progress that has been made within this framework of error-based supervised learning. We discuss the experimental progress that demonstrates more complicated molecular and cellular mechanisms in the cerebellum as well as new cell types and recurrent connections. We also cover its involvement in diverse non-motor functions and evidence of other forms of learning. Finally, we highlight the need to incorporate these new experimental findings into an integrated cerebellar model that can unify its diverse computational functions.
Collapse
Affiliation(s)
- Yunliang Zang
- Academy of Medical Engineering and Translational Medicine, Medical Faculty, Tianjin University, Tianjin 300072, China; Volen Center and Biology Department, Brandeis University, Waltham, MA 02454, USA.
| | - Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Japan. https://twitter.com/DeschutterOIST
| |
Collapse
|
25
|
Müller-Komorowska D, Kuru B, Beck H, Braganza O. Phase information is conserved in sparse, synchronous population-rate-codes via phase-to-rate recoding. Nat Commun 2023; 14:6106. [PMID: 37777512 PMCID: PMC10543394 DOI: 10.1038/s41467-023-41803-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 09/19/2023] [Indexed: 10/02/2023] Open
Abstract
Neural computation is often traced in terms of either rate- or phase-codes. However, most circuit operations will simultaneously affect information across both coding schemes. It remains unclear how phase- and rate-coded information is transmitted in the face of continuous modification at consecutive processing stages. Here, we study this question in the entorhinal cortex (EC)-dentate gyrus (DG)-CA3 system using three distinct computational models. We demonstrate that DG feedback inhibition leverages EC phase information to improve rate-coding, a computation we term phase-to-rate recoding. Our results suggest that it (i) supports the conservation of phase information within sparse rate-codes and (ii) enhances the efficiency of plasticity in downstream CA3 via increased synchrony. Given the ubiquity of both phase-coding and feedback circuits, our results raise the question of whether phase-to-rate recoding is a recurring computational motif, which supports the generation of sparse, synchronous population-rate-codes in areas beyond the DG.
Collapse
Affiliation(s)
- Daniel Müller-Komorowska
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, 904-0495, Japan.
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany.
| | - Baris Kuru
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
| | - Heinz Beck
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Deutsches Zentrum für Neurodegenerative Erkrankungen e.V, Bonn, Germany
| | - Oliver Braganza
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany.
- Institute for Socio-Economics, University of Duisburg-Essen, Duisburg, Germany.
| |
Collapse
|
26
|
Xie M, Muscinelli SP, Decker Harris K, Litwin-Kumar A. Task-dependent optimal representations for cerebellar learning. eLife 2023; 12:e82914. [PMID: 37671785 PMCID: PMC10541175 DOI: 10.7554/elife.82914] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 09/05/2023] [Indexed: 09/07/2023] Open
Abstract
The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
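The basic experiment described here, measuring readout generalization as a function of granule-layer coding level on a smooth input-output task, can be sketched as below. The layer sizes, the random expansion, the ridge-regression readout and the target function are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_gc, n_train, n_test, ridge = 10, 1000, 200, 200, 1e-2

J = rng.normal(size=(n_gc, n_in)) / np.sqrt(n_in)   # random mossy fiber -> granule weights
w_task = rng.normal(size=n_in)
smooth_task = lambda X: np.sin(X @ w_task)          # smooth input-output transformation

X_tr, X_te = rng.normal(size=(n_train, n_in)), rng.normal(size=(n_test, n_in))
y_tr, y_te = smooth_task(X_tr), smooth_task(X_te)
drive_tr, drive_te = X_tr @ J.T, X_te @ J.T

for f in (0.05, 0.2, 0.5, 0.8):
    theta = np.quantile(drive_tr, 1 - f)            # threshold sets the coding level
    G_tr = np.maximum(drive_tr - theta, 0.0)        # granule-like representation
    G_te = np.maximum(drive_te - theta, 0.0)
    w = np.linalg.solve(G_tr.T @ G_tr + ridge * np.eye(n_gc), G_tr.T @ y_tr)
    err = np.mean((G_te @ w - y_te) ** 2) / np.var(y_te)
    print(f"coding level {f:.2f}: normalized test error {err:.3f}")
```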
Collapse
Affiliation(s)
- Marjorie Xie
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Samuel P Muscinelli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, United States
| | - Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| |
Collapse
|
27
|
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 06/12/2023] [Indexed: 07/18/2023] Open
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we described recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we proposed a formalism of the relationship between the set of objectives that neural networks attempt to achieve, and neural network classes categorized by how closely their architectural features resemble those of BNN. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and offer a map helping the navigation of the gap between neuroscience and AI engineering.
Collapse
Affiliation(s)
| | - Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
| |
Collapse
|
28
|
Xu Z, Geron E, Pérez-Cuesta LM, Bai Y, Gan WB. Generalized extinction of fear memory depends on co-allocation of synaptic plasticity in dendrites. Nat Commun 2023; 14:503. [PMID: 36720872 PMCID: PMC9889816 DOI: 10.1038/s41467-023-35805-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 01/03/2023] [Indexed: 02/02/2023] Open
Abstract
Memories can be modified by new experience in a specific or generalized manner. Changes in synaptic connections are crucial for memory storage, but it remains unknown how synaptic changes associated with different memories are distributed within neuronal circuits and how such distributions affect specific or generalized modification by novel experience. Here we show that fear conditioning with two different auditory stimuli (CS) and footshocks (US) induces dendritic spine elimination mainly on different dendritic branches of layer 5 pyramidal neurons in the mouse motor cortex. Subsequent fear extinction causes CS-specific spine formation and extinction of freezing behavior. In contrast, spine elimination induced by fear conditioning with >2 different CS-USs often co-exists on the same dendritic branches. Fear extinction induces CS-nonspecific spine formation and generalized fear extinction. Moreover, activation of somatostatin-expressing interneurons increases the occurrence of spine elimination induced by different CS-USs on the same dendritic branches and facilitates the generalization of fear extinction. These findings suggest that specific or generalized modification of existing memories by new experience depends on whether synaptic changes induced by previous experiences are segregated or co-exist at the level of individual dendritic branches.
Collapse
Affiliation(s)
- Zhiwei Xu
- Institute of Neurological and Psychiatric Disorders, Shenzhen Bay Laboratory, Shenzhen, 518132, China
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China
| | - Erez Geron
- Skirball Institute, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, 10016, USA
| | - Luis M Pérez-Cuesta
- Skirball Institute, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, 10016, USA
| | - Yang Bai
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China
| | - Wen-Biao Gan
- Institute of Neurological and Psychiatric Disorders, Shenzhen Bay Laboratory, Shenzhen, 518132, China.
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China.
| |
Collapse
|
29
|
Nguyen TM, Thomas LA, Rhoades JL, Ricchi I, Yuan XC, Sheridan A, Hildebrand DGC, Funke J, Regehr WG, Lee WCA. Structured cerebellar connectivity supports resilient pattern separation. Nature 2023; 613:543-549. [PMID: 36418404 PMCID: PMC10324966 DOI: 10.1038/s41586-022-05471-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 10/20/2022] [Indexed: 11/25/2022]
Abstract
The cerebellum is thought to help detect and correct errors between intended and executed commands[1,2] and is critical for social behaviours, cognition and emotion[3-6]. Computations for motor control must be performed quickly to correct errors in real time and should be sensitive to small differences between patterns for fine error correction while being resilient to noise[7]. Influential theories of cerebellar information processing have largely assumed random network connectivity, which increases the encoding capacity of the network's first layer[8-13]. However, maximizing encoding capacity reduces the resilience to noise[7]. To understand how neuronal circuits address this fundamental trade-off, we mapped the feedforward connectivity in the mouse cerebellar cortex using automated large-scale transmission electron microscopy and convolutional neural network-based image segmentation. We found that both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks.
Collapse
Affiliation(s)
- Tri M Nguyen
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Logan A Thomas
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Biophysics Graduate Group, University of California Berkeley, Berkeley, CA, USA
| | - Jeff L Rhoades
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Program in Neuroscience, Division of Medical Sciences, Graduate School of Arts and Sciences, Harvard University, Cambridge, MA, USA
| | - Ilaria Ricchi
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Xintong Cindy Yuan
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Program in Neuroscience, Division of Medical Sciences, Graduate School of Arts and Sciences, Harvard University, Cambridge, MA, USA
| | - Arlo Sheridan
- HHMI Janelia Research Campus, Ashburn, VA, USA
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - David G C Hildebrand
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
| | - Jan Funke
- HHMI Janelia Research Campus, Ashburn, VA, USA
| | - Wade G Regehr
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Wei-Chung Allen Lee
- F. M. Kirby Neurobiology Center, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
30
|
Gilmer JI, Farries MA, Kilpatrick Z, Delis I, Cohen JD, Person AL. An emergent temporal basis set robustly supports cerebellar time-series learning. J Neurophysiol 2023; 129:159-176. [PMID: 36416445 PMCID: PMC9990911 DOI: 10.1152/jn.00312.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 11/24/2022] Open
Abstract
The cerebellum is considered a "learning machine" essential for time interval estimation underlying motor coordination and other behaviors. Theoretical work has proposed that the cerebellum's input recipient structure, the granule cell layer (GCL), performs pattern separation of inputs that facilitates learning in Purkinje cells (P-cells). However, the relationship between input reformatting and learning has remained debated, with roles emphasized for pattern separation features from sparsification to decorrelation. We took a novel approach by training a minimalist model of the cerebellar cortex to learn complex time-series data from time-varying inputs, typical during movements. The model robustly produced temporal basis sets from these inputs, and the resultant GCL output supported better learning of temporally complex target functions than mossy fibers alone. Learning was optimized at intermediate threshold levels, supporting relatively dense granule cell activity, yet the key statistical features in GCL population activity that drove learning differed from those seen previously for classification tasks. These findings advance testable hypotheses for mechanisms of temporal basis set formation and predict that moderately dense population activity optimizes learning. NEW & NOTEWORTHY: During movement, mossy fiber inputs to the cerebellum relay time-varying information with strong intrinsic relationships to ongoing movement. Are such mossy fiber signals sufficient to support Purkinje signals and learning? In a model, we show how the GCL greatly improves Purkinje learning of complex, temporally dynamic signals relative to mossy fibers alone. Learning-optimized GCL population activity was moderately dense, which retained intrinsic input variance while also performing pattern separation.
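A minimal version of this modeling strategy, time-varying mossy-fiber-like inputs recoded by a thresholded random granule layer into a temporal basis and read out linearly, might look like the following. The signals, threshold and sizes are illustrative assumptions, not the paper's model; the errors of readouts from the granule layer and from the mossy fibers alone are simply printed for comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_mf, n_gc = 500, 10, 500
t = np.linspace(0, 1, T)

# movement-like mossy fiber inputs: slow sinusoids with random phases
MF = np.stack([np.sin(2 * np.pi * (k + 1) * t / 4 + rng.uniform(0, np.pi))
               for k in range(n_mf)], axis=1)

W = rng.normal(size=(n_gc, n_mf)) / np.sqrt(n_mf)
theta = 0.3
GC = np.maximum(MF @ W.T - theta, 0.0)             # thresholded mixtures = temporal basis

target = np.sin(6 * np.pi * t) * np.exp(-3 * t)    # temporally complex target function

def readout_error(B, y, ridge=1e-3):
    """Least-squares (ridge) linear readout, a stand-in for the Purkinje cell."""
    w = np.linalg.solve(B.T @ B + ridge * np.eye(B.shape[1]), B.T @ y)
    return np.mean((B @ w - y) ** 2) / np.var(y)

print("mossy-fiber readout error  :", round(float(readout_error(MF, target)), 4))
print("granule-layer readout error:", round(float(readout_error(GC, target)), 4))
```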
Collapse
Affiliation(s)
- Jesse I Gilmer
- Neuroscience Graduate Program, University of Colorado School of Medicine, Aurora, Colorado
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
| | - Michael A Farries
- Knoebel Institute for Healthy Aging, University of Denver, Denver, Colorado
| | - Zachary Kilpatrick
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado
| | - Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
| | - Jeremy D Cohen
- University of North Carolina Neuroscience Center, Chapel Hill, North Carolina
| | - Abigail L Person
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
| |
Collapse
|
31
|
Barri A, Wiechert MT, Jazayeri M, DiGregorio DA. Synaptic basis of a sub-second representation of time in a neural circuit model. Nat Commun 2022; 13:7902. [PMID: 36550115 PMCID: PMC9780315 DOI: 10.1038/s41467-022-35395-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
Temporal sequences of neural activity are essential for driving well-timed behaviors, but the underlying cellular and circuit mechanisms remain elusive. We leveraged the well-defined architecture of the cerebellum, a brain region known to support temporally precise actions, to explore theoretically whether the experimentally observed diversity of short-term synaptic plasticity (STP) at the input layer could generate neural dynamics sufficient for sub-second temporal learning. A cerebellar circuit model equipped with dynamic synapses produced a diverse set of transient granule cell firing patterns that provided a temporal basis set for learning precisely timed pauses in Purkinje cell activity during simulated delay eyelid conditioning and Bayesian interval estimation. The learning performance across time intervals was influenced by the temporal bandwidth of the temporal basis, which was determined by the input layer synaptic properties. The ubiquity of STP throughout the brain positions it as a general, tunable cellular mechanism for sculpting neural dynamics and fine-tuning behavior.
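The idea that heterogeneous short-term plasticity can itself generate a temporal basis can be illustrated with a standard Tsodyks-Markram-style synapse model, as sketched below; the parameter ranges and the discretization are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.001, 1.0
time = np.arange(0.0, T, dt)
spikes = rng.random(time.size) < 50 * dt             # ~50 Hz Poisson presynaptic train

n_syn = 8
U = rng.uniform(0.05, 0.6, n_syn)                    # baseline release probability
tau_f = rng.uniform(0.02, 0.5, n_syn)                # facilitation time constant (s)
tau_d = rng.uniform(0.05, 0.8, n_syn)                # depression (recovery) time constant (s)

u = U.copy()                                         # utilization variable
x = np.ones(n_syn)                                   # available synaptic resources
efficacy = np.zeros((time.size, n_syn))

for i, spk in enumerate(spikes):
    u += dt * (U - u) / tau_f                        # relax toward baseline between spikes
    x += dt * (1.0 - x) / tau_d                      # recover resources between spikes
    if spk:
        u = u + U * (1.0 - u)                        # facilitation step at each spike
        released = u * x
        x = x - released                             # depression step
        efficacy[i] = released                       # synaptic drive at this spike

# diverse per-synapse efficacy traces form a crude temporal basis
print("mean efficacy per synapse:", np.round(efficacy.sum(axis=0) / max(spikes.sum(), 1), 3))
```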
Collapse
Affiliation(s)
- A. Barri
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France
| | - M. T. Wiechert
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France
| | - M. Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - D. A. DiGregorio
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France
| |
Collapse
|
32
|
Bae H, Park SY, Kim SJ, Kim CE. Cerebellum as a kernel machine: A novel perspective on expansion recoding in granule cell layer. Front Comput Neurosci 2022; 16:1062392. [PMID: 36618271 PMCID: PMC9815768 DOI: 10.3389/fncom.2022.1062392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Sensorimotor information provided by mossy fibers (MF) is mapped to a high-dimensional space by a huge number of granule cells (GrC) in the cerebellar cortex's input layer. Numerous studies have demonstrated the computational advantages of this expansion recoding and its primary contributors. Here, we propose a novel perspective on expansion recoding in which each GrC serves as a kernel basis function, so that the cerebellum can operate like a kernel machine that implicitly uses high-dimensional (even infinite-dimensional) feature spaces. We highlight that the generation of kernel basis functions is a biologically plausible scenario, considering that the key idea of a kernel machine is to memorize important input patterns. We present potential regimes for developing kernels under constrained resources and discuss the advantages and disadvantages of each regime using various simulation settings.
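The kernel-machine reading of the granule layer can be illustrated by treating each "granule cell" as a radial basis function centred on a memorized mossy-fiber input pattern and fitting a linear readout, as in the toy sketch below; the kernel width, the centres and the task are illustrative choices, not the authors' simulation settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_centres, n_train, n_test = 5, 200, 300, 300

centres = rng.normal(size=(n_centres, n_in))        # memorized input patterns
width = 1.5

def granule_kernel(X):
    """One radial basis function ("granule cell") per memorized centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def task(X):
    return np.tanh(X[:, 0] * X[:, 1] + X[:, 2])     # arbitrary nonlinear sensorimotor map

X_tr, X_te = rng.normal(size=(n_train, n_in)), rng.normal(size=(n_test, n_in))
K_tr, K_te = granule_kernel(X_tr), granule_kernel(X_te)

ridge = 1e-2
w = np.linalg.solve(K_tr.T @ K_tr + ridge * np.eye(n_centres), K_tr.T @ task(X_tr))
test_err = np.mean((K_te @ w - task(X_te)) ** 2) / np.var(task(X_te))
print("normalized test error of kernel readout:", round(float(test_err), 3))
```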
Collapse
Affiliation(s)
- Hyojin Bae
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
| | - Sa-Yoon Park
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
| | - Sang Jeong Kim
- Department of Physiology, Seoul National University College of Medicine, Seoul, South Korea
| | - Chang-Eop Kim
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
| |
Collapse
|
33
|
Khalil AJ, Mansvelder HD, Witter L. Mesodiencephalic junction GABAergic inputs are processed separately from motor cortical inputs in the basilar pons. iScience 2022; 25:104641. [PMID: 35800775 PMCID: PMC9254490 DOI: 10.1016/j.isci.2022.104641] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/13/2022] [Accepted: 06/14/2022] [Indexed: 11/21/2022] Open
Abstract
The basilar pontine nuclei (bPN) are known to receive excitatory input from the entire neocortex and constitute the main source of mossy fibers to the cerebellum. Various potential inhibitory afferents have been described, but their origin, synaptic plasticity, and network function have remained elusive. Here we identify the mesodiencephalic junction (MDJ) as a prominent source of monosynaptic GABAergic inputs to the bPN. We found no evidence that these inputs converge with motor cortex (M1) inputs at the single neuron or at the local network level. Tracing the inputs to GABAergic MDJ neurons revealed inputs to these neurons from neocortical areas. Additionally, we observed little short-term synaptic facilitation or depression in afferents from the MDJ, enabling MDJ inputs to carry sign-inversed neocortical inputs. Thus, our results show a prominent source of GABAergic inhibition to the bPN that could enrich input to the cerebellar granule cell layer.
Collapse
Affiliation(s)
- Ayoub J. Khalil
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
| | - Huibert D. Mansvelder
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
| | - Laurens Witter
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
- Department for Developmental Origins of Disease, Wilhelmina Children’s Hospital and Brain Center, University Medical Center Utrecht, 3584 EA Utrecht, the Netherlands
| |
Collapse
|
34
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
35
|
Sheng J, Zhang L, Liu C, Liu J, Feng J, Zhou Y, Hu H, Xue G. Higher-dimensional neural representations predict better episodic memory. SCIENCE ADVANCES 2022; 8:eabm3829. [PMID: 35442734 PMCID: PMC9020666 DOI: 10.1126/sciadv.abm3829] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 03/03/2022] [Indexed: 06/14/2023]
Abstract
Episodic memory enables humans to encode and later vividly retrieve information about our rich experiences, yet the neural representations that support this mental capacity are poorly understood. Using a large fMRI dataset (n = 468) of face-name associative memory tasks and principal component analysis to examine neural representational dimensionality (RD), we found that the human brain maintained a high-dimensional representation of faces through hierarchical representation within and beyond the face-selective regions. Critically, greater RD was associated with better subsequent memory performance both within and across participants, and this association was specific to episodic memory but not general cognitive abilities. Furthermore, the frontoparietal activities could suppress the shared low-dimensional fluctuations and reduce the correlations of local neural responses, resulting in greater RD. RD was not associated with the degree of item-specific pattern similarity, and it made complementary contributions to episodic memory. These results provide a mechanistic understanding of the role of RD in supporting accurate episodic memory.
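One common way to operationalize a PCA-based measure of representational dimensionality is the participation ratio of the covariance eigenvalue spectrum; the exact RD metric in the paper may differ, so the sketch below is only indicative.

```python
import numpy as np

def participation_ratio(responses):
    """responses: array of shape (n_trials, n_features), e.g. trials x voxels."""
    centred = responses - responses.mean(axis=0, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centred, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(5)
low_d = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))     # ~3 latent dimensions
high_d = rng.normal(size=(200, 50))                              # unstructured responses
print("participation ratio, low-dimensional responses :", round(participation_ratio(low_d), 1))
print("participation ratio, high-dimensional responses:", round(participation_ratio(high_d), 1))
```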
Collapse
|
36
|
Kumar MG, Tan C, Libedinsky C, Yen SC, Tan AYY. A Nonlinear Hidden Layer Enables Actor-Critic Agents to Learn Multiple Paired Association Navigation. Cereb Cortex 2022; 32:3917-3936. [PMID: 35034127 DOI: 10.1093/cercor/bhab456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/05/2021] [Accepted: 11/06/2021] [Indexed: 11/15/2022] Open
Abstract
Navigation to multiple cued reward locations has been increasingly used to study rodent learning. Though deep reinforcement learning agents have been shown to be able to learn the task, they are not biologically plausible. Biologically plausible classic actor-critic agents have been shown to learn to navigate to single reward locations, but which biologically plausible agents are able to learn multiple cue-reward location tasks has remained unclear. In this computational study, we show versions of classic agents that learn to navigate to a single reward location, and adapt to reward location displacement, but are not able to learn multiple paired association navigation. The limitation is overcome by an agent in which place cell and cue information are first processed by a feedforward nonlinear hidden layer with synapses to the actor and critic subject to temporal difference error-modulated plasticity. Faster learning is obtained when the feedforward layer is replaced by a recurrent reservoir network.
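The agent architecture described, a fixed nonlinear feedforward hidden layer with TD-error-modulated actor and critic synapses, can be sketched on a toy 1D track with a single rewarded location (the paper's multiple paired-association task is not reproduced here); all sizes, learning rates and the environment are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pos, n_place, n_hidden, n_actions = 12, 24, 100, 2       # actions: left, right
goal, start, gamma, lr, max_steps = 9, 2, 0.95, 0.05, 200

centres = np.linspace(0, n_pos - 1, n_place)
def place_input(pos):
    return np.exp(-0.5 * ((pos - centres) / 1.0) ** 2)      # place-cell-like input

W_hidden = rng.normal(size=(n_hidden, n_place)) / np.sqrt(n_place)  # fixed nonlinear layer
w_critic = np.zeros(n_hidden)
W_actor = np.zeros((n_actions, n_hidden))
steps_per_episode = []

for episode in range(300):
    pos = start
    for step in range(max_steps):
        h = np.tanh(W_hidden @ place_input(pos))
        value = w_critic @ h
        logits = W_actor @ h
        policy = np.exp(logits - logits.max()); policy /= policy.sum()
        action = rng.choice(n_actions, p=policy)
        pos_new = int(np.clip(pos + (1 if action == 1 else -1), 0, n_pos - 1))
        reward = 1.0 if pos_new == goal else 0.0
        value_next = 0.0 if pos_new == goal else w_critic @ np.tanh(W_hidden @ place_input(pos_new))
        td_error = reward + gamma * value_next - value
        w_critic += lr * td_error * h                       # TD-modulated critic plasticity
        grad_log = -policy[:, None] * h[None, :]            # softmax policy gradient
        grad_log[action] += h
        W_actor += lr * td_error * grad_log                 # TD-modulated actor plasticity
        pos = pos_new
        if reward > 0.0:
            break
    steps_per_episode.append(step + 1)

print("mean steps to reward, first 20 episodes:", np.mean(steps_per_episode[:20]))
print("mean steps to reward, last 20 episodes :", np.mean(steps_per_episode[-20:]))
```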
Collapse
Affiliation(s)
- M Ganesh Kumar
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Innovation and Design Programme, Faculty of Engineering, National University of Singapore, Singapore 117579, Singapore
| | - Cheston Tan
- Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632, Singapore
| | - Camilo Libedinsky
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Department of Psychology, National University of Singapore, Singapore 117570, Singapore
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore 138673, Singapore
| | - Shih-Cheng Yen
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Innovation and Design Programme, Faculty of Engineering, National University of Singapore, Singapore 117579, Singapore
| | - Andrew Y Y Tan
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117593, Singapore
- Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Cardiovascular Disease Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Neurobiology Programme, Life Sciences Institute, National University of Singapore, Singapore 119077, Singapore
| |
Collapse
|
37
|
Prisco L, Deimel SH, Yeliseyeva H, Fiala A, Tavosanis G. The anterior paired lateral neuron normalizes odour-evoked activity in the Drosophila mushroom body calyx. eLife 2021; 10:e74172. [PMID: 34964714 PMCID: PMC8741211 DOI: 10.7554/elife.74172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 12/28/2021] [Indexed: 11/25/2022] Open
Abstract
To identify and memorize discrete but similar environmental inputs, the brain needs to distinguish between subtle differences of activity patterns in defined neuronal populations. The Kenyon cells (KCs) of the Drosophila adult mushroom body (MB) respond sparsely to complex olfactory input, a property that is thought to support stimuli discrimination in the MB. To understand how this property emerges, we investigated the role of the inhibitory anterior paired lateral (APL) neuron in the input circuit of the MB, the calyx. Within the calyx, presynaptic boutons of projection neurons (PNs) form large synaptic microglomeruli (MGs) with dendrites of postsynaptic KCs. Combining electron microscopy (EM) data analysis and in vivo calcium imaging, we show that APL, via inhibitory and reciprocal synapses targeting both PN boutons and KC dendrites, normalizes odour-evoked representations in MGs of the calyx. APL response scales with the PN input strength and is regionalized around PN input distribution. Our data indicate that the formation of a sparse code by the KCs requires APL-driven normalization of their MG postsynaptic responses. This work provides experimental insights on how inhibition shapes sensory information representation in a higher brain centre, thereby supporting stimuli discrimination and allowing for efficient associative memory formation.
Collapse
Affiliation(s)
- Luigi Prisco
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | | | - Hanna Yeliseyeva
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - André Fiala
- Department of Molecular Neurobiology of Behavior, University of Göttingen, Göttingen, Germany
| | - Gaia Tavosanis
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- LIMES, Rheinische Friedrich Wilhelms Universität Bonn, Bonn, Germany
| |
Collapse
|
38
|
Gilbert M. The Shape of Data: a Theory of the Representation of Information in the Cerebellar Cortex. THE CEREBELLUM 2021; 21:976-986. [PMID: 34902112 PMCID: PMC9596575 DOI: 10.1007/s12311-021-01352-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/28/2021] [Indexed: 11/30/2022]
Abstract
This paper presents a model of rate coding in the cerebellar cortex. The pathway from input to output of the cerebellum forms an anatomically repeating, functionally modular network, whose basic wiring is preserved across vertebrate taxa. Each network is bisected centrally by a functionally defined cell group, a microzone, which forms part of the cerebellar circuit. Input to a network may come from tens of thousands of concurrently active mossy fibres. The model claims to quantify the conversion of input rates into the code received by a microzone. Recoding on entry converts input rates into an internal code which is homogenised in the functional equivalent of an imaginary plane occupied by the centrally positioned microzone. Homogenised means that the code is present in any random sample of parallel fibre signals above a minimum size. The nature of the code and the regimented architecture of the cerebellar cortex mean that this minimum can be represented spatially, so that it is met by the physical dimensions of the Purkinje cell dendritic arbour and of planar interneuron networks. As a result, the whole population of a microzone receives the same code. This is part of a mechanism which orchestrates functionally indivisible behaviour of the cerebellar circuit and is necessary for coordinated control of the output cells of the circuit. In this model, fine control of Purkinje cells is exerted by input rates to the system and not by learning, which puts it in conflict with the long-dominant supervised learning model.
Collapse
Affiliation(s)
- Mike Gilbert
- School of Psychology, University of Birmingham, Birmingham, UK.
| |
Collapse
|
39
|
Guzman SJ, Schlögl A, Espinoza C, Zhang X, Suter BA, Jonas P. How connectivity rules and synaptic properties shape the efficacy of pattern separation in the entorhinal cortex-dentate gyrus-CA3 network. NATURE COMPUTATIONAL SCIENCE 2021; 1:830-842. [PMID: 38217181 DOI: 10.1038/s43588-021-00157-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 10/12/2021] [Indexed: 01/15/2024]
Abstract
Pattern separation is a fundamental brain computation that converts small differences in input patterns into large differences in output patterns. Several synaptic mechanisms of pattern separation have been proposed, including code expansion, inhibition and plasticity; however, which of these mechanisms play a role in the entorhinal cortex (EC)-dentate gyrus (DG)-CA3 circuit, a classical pattern separation circuit, remains unclear. Here we show that a biologically realistic, full-scale EC-DG-CA3 circuit model, including granule cells (GCs) and parvalbumin-positive inhibitory interneurons (PV+-INs) in the DG, is an efficient pattern separator. Both external gamma-modulated inhibition and internal lateral inhibition mediated by PV+-INs substantially contributed to pattern separation. Both local connectivity and fast signaling at GC-PV+-IN synapses were important for maximum effectiveness. Similarly, mossy fiber synapses with conditional detonator properties contributed to pattern separation. By contrast, perforant path synapses with Hebbian synaptic plasticity and direct EC-CA3 connection shifted the network towards pattern completion. Our results demonstrate that the specific properties of cells and synapses optimize higher-order computations in biological networks and might be useful to improve the deep learning capabilities of technical networks.
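The core pattern separation measurement used in such models can be sketched by passing pairs of similar input patterns through a divergent, inhibition-constrained expansion layer and comparing input and output correlations; the toy network below (random sparse connectivity plus k-winners-take-all as a stand-in for PV+-IN inhibition) is an illustrative simplification, not the paper's full-scale EC-DG-CA3 model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ec, n_gc, active_frac, n_pairs = 200, 2000, 0.02, 20
W = (rng.random((n_gc, n_ec)) < 0.1).astype(float)       # sparse EC -> GC connectivity

def dg_output(x):
    drive = W @ x
    k = int(active_frac * n_gc)                          # inhibition enforces sparse GC firing
    out = np.zeros(n_gc)
    out[np.argsort(drive)[-k:]] = 1.0
    return out

in_corr, out_corr = [], []
for _ in range(n_pairs):
    a = (rng.random(n_ec) < 0.1).astype(float)
    b = a.copy()
    flip = rng.choice(n_ec, size=int(0.05 * n_ec), replace=False)
    b[flip] = (rng.random(flip.size) < 0.1).astype(float)  # slightly perturbed input pattern
    in_corr.append(np.corrcoef(a, b)[0, 1])
    out_corr.append(np.corrcoef(dg_output(a), dg_output(b))[0, 1])

print("mean input correlation :", round(float(np.mean(in_corr)), 3))
print("mean output correlation:", round(float(np.mean(out_corr)), 3))
```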
Collapse
Affiliation(s)
- S Jose Guzman
- IST Austria, Klosterneuburg, Austria
- Institute of Molecular Biotechnology, Vienna, Austria
| | | | - Claudia Espinoza
- IST Austria, Klosterneuburg, Austria
- Medical University of Austria, Division of Cognitive Neurobiology, Vienna, Austria
| | - Xiaomin Zhang
- IST Austria, Klosterneuburg, Austria
- Brain Research Institute, University of Zürich, Zurich, Switzerland
| | | | | |
Collapse
|
40
|
Gilbert M. Gating by Memory: a Theory of Learning in the Cerebellum. THE CEREBELLUM 2021; 21:926-943. [PMID: 34757585 DOI: 10.1007/s12311-021-01325-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/24/2021] [Indexed: 11/30/2022]
Abstract
This paper presents a model of learning by the cerebellar circuit. In the traditional and dominant learning model, training teaches finely graded parallel fibre synaptic weights which modify transmission to Purkinje cells and to interneurons that inhibit Purkinje cells. Following training, input in a learned pattern drives a training-modified response. The function is that the naive response to input rates is displaced by a learned one, trained under external supervision. In the proposed model, there is no weight-controlled, graduated balance of excitation and inhibition of Purkinje cells. Instead, the balance has two functional states (a switch) at synaptic, whole-cell and microzone level. The paper is in two parts. The first is a detailed physiological argument for the synaptic learning function. The second uses the function in a computational simulation of pattern memory. Against expectation, this generates a predictable outcome from input chaos (real-world variables). Training always forces synaptic weights away from the middle and towards the limits of the range, causing them to polarise, so that transmission is either robust or blocked. All conditions teach the same outcome, such that all learned patterns receive the same, rather than a bespoke, effect on transmission. In this model, the function of learning is gating: to select the patterns that trigger output, not to modify output. The outcome is memory-operated gate activation which controls a two-state balance of weight-controlled transmission. Group activity of parallel fibres also simultaneously carries a second code, contained in collective rates, which varies independently of the pattern code. A two-state response to the pattern code allows faithful, and graduated, control of Purkinje cell firing by the rate code at gated times.
Collapse
Affiliation(s)
- Mike Gilbert
- School of Psychology, University of Birmingham, Birmingham, UK.
| |
Collapse
|
41
|
Biane C, Rückerl F, Abrahamsson T, Saint-Cloment C, Mariani J, Shigemoto R, DiGregorio DA, Sherrard RM, Cathala L. Developmental emergence of two-stage nonlinear synaptic integration in cerebellar interneurons. eLife 2021; 10:65954. [PMID: 34730085 PMCID: PMC8565927 DOI: 10.7554/elife.65954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 09/28/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic transmission, connectivity, and dendritic morphology mature in parallel during brain development and are often disrupted in neurodevelopmental disorders. Yet how these changes influence the neuronal computations necessary for normal brain function are not well understood. To identify cellular mechanisms underlying the maturation of synaptic integration in interneurons, we combined patch-clamp recordings of excitatory inputs in mouse cerebellar stellate cells (SCs), three-dimensional reconstruction of SC morphology with excitatory synapse location, and biophysical modeling. We found that postnatal maturation of postsynaptic strength was homogeneously reduced along the somatodendritic axis, but dendritic integration was always sublinear. However, dendritic branching increased without changes in synapse density, leading to a substantial gain in distal inputs. Thus, changes in synapse distribution, rather than dendrite cable properties, are the dominant mechanism underlying the maturation of neuronal computation. These mechanisms favor the emergence of a spatially compartmentalized two-stage integration model promoting location-dependent integration within dendritic subunits.
Collapse
Affiliation(s)
- Celia Biane
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
| | - Florian Rückerl
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
| | - Therese Abrahamsson
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
| | - Cécile Saint-Cloment
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
| | - Jean Mariani
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
| | - Ryuichi Shigemoto
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - David A DiGregorio
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
| | - Rachel M Sherrard
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
| | - Laurence Cathala
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France; Paris Brain Institute, CNRS UMR 7225 - Inserm U1127 - Sorbonne Université Groupe Hospitalier Pitié Salpêtrière, Paris, France
| |
Collapse
|
42
|
Lee JM, Devaraj V, Jeong NN, Lee Y, Kim YJ, Kim T, Yi SH, Kim WG, Choi EJ, Kim HM, Chang CL, Mao C, Oh JW. Neural mechanism mimetic selective electronic nose based on programmed M13 bacteriophage. Biosens Bioelectron 2021; 196:113693. [PMID: 34700263 DOI: 10.1016/j.bios.2021.113693] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/30/2021] [Accepted: 10/02/2021] [Indexed: 01/03/2023]
Abstract
The electronic nose is a reliable practical sensor device that mimics olfactory organs. Although numerous studies have demonstrated excellent detection of various target substances with the help of ideal models, biomimetic approaches still suffer in practical realization because of the inability to mimic the signal processing performed by olfactory neural systems. Herein, we propose an electronic nose based on the programmable surface chemistry of the M13 bacteriophage, inspired by the neural mechanism of the mammalian olfactory system. A neural pattern separation (NPS) scheme was devised to apply the pattern separation that operates in the memory and learning processes of the brain to the electronic nose. We demonstrate an electronic nose in a portable device form that distinguishes polycyclic aromatic compounds (harmful in living environments) at atomic-level resolution (97.5% selectivity rate) for the first time. Our results provide practical methodology and inspiration for second-generation electronic nose development toward the performance of detection dogs (K9).
Collapse
Affiliation(s)
- Jong-Min Lee
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea; School of Nano Convergence Technology, Hallym University, Chuncheon, Gangwon-do, 24252, South Korea
| | - Vasanthan Devaraj
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
| | - Na-Na Jeong
- Department of Public Health Science, Graduate School of Korea University, Seoul, 02841, South Korea
| | - Yujin Lee
- Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea
| | - Ye-Ji Kim
- Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea
| | - Taehyeong Kim
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea
| | - Seung Heon Yi
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea
| | - Won-Geun Kim
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
| | - Eun Jung Choi
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
| | - Hyun-Min Kim
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea.
| | - Chulhun L Chang
- Department of Laboratory Medicine, College of Medicine, Pusan National University, Yangsan, 50612, South Korea.
| | - Chuanbin Mao
- Department of Chemistry and Biochemistry, University of Oklahoma, Norman, OK, 73019, United States.
| | - Jin-Woo Oh
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea; Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea.
| |
Collapse
|
43
|
Jazayeri M, Ostojic S. Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Curr Opin Neurobiol 2021; 70:113-120. [PMID: 34537579 PMCID: PMC8688220 DOI: 10.1016/j.conb.2021.08.002] [Citation(s) in RCA: 82] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 08/11/2021] [Accepted: 08/12/2021] [Indexed: 11/16/2022]
Abstract
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here, we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing more specifically various hypotheses on the computational principles reflected through intrinsic and embedding dimensionality.
Collapse
Affiliation(s)
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005, Paris, France.
| |
Collapse
|
44
|
Li BX, Dong GH, Li HL, Zhang JS, Bing YH, Chu CP, Cui SB, Qiu DL. Chronic Ethanol Exposure Enhances Facial Stimulation-Evoked Mossy Fiber-Granule Cell Synaptic Transmission via GluN2A Receptors in the Mouse Cerebellar Cortex. Front Syst Neurosci 2021; 15:657884. [PMID: 34408633 PMCID: PMC8365521 DOI: 10.3389/fnsys.2021.657884] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 07/08/2021] [Indexed: 11/16/2022] Open
Abstract
Sensory information is transferred to the cerebellar cortex via the mossy fiber–granule cell (MF–GC) pathway, which participates in motor coordination and motor learning. We previously reported that chronic ethanol exposure from adolescence facilitated sensory-evoked molecular layer interneuron–Purkinje cell synaptic transmission in adult mice in vivo. Herein, we investigated the effect of chronic ethanol exposure from adolescence on facial stimulation-evoked MF–GC synaptic transmission in the adult mouse cerebellar cortex using electrophysiological recording techniques and pharmacological methods. Chronic ethanol exposure from adolescence induced an enhancement of facial stimulation-evoked MF–GC synaptic transmission in the cerebellar cortex of adult mice. The application of an N-methyl-D-aspartate receptor (NMDAR) antagonist, D-APV (250 μM), induced stronger depression of facial stimulation-evoked MF–GC synaptic transmission in chronic ethanol-exposed mice compared with that in control mice. Chronic ethanol exposure-induced facilitation of facial stimulation-evoked MF–GC synaptic transmission was abolished by a selective GluN2A antagonist, PEAQX (10 μM), but was unaffected by the application of a selective GluN2B antagonist, TCN-237 (10 μM), or a type 1 metabotropic glutamate receptor blocker, JNJ16259685 (10 μM). These results indicate that chronic ethanol exposure from adolescence enhances facial stimulation-evoked MF–GC synaptic transmission via GluN2A, which suggests that chronic ethanol exposure from adolescence impairs the high-fidelity transmission capability of sensory information in the cerebellar cortex by enhancing the NMDAR-mediated components of MF–GC synaptic transmission in adult mice in vivo.
Collapse
Affiliation(s)
- Bing-Xue Li
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
| | - Guang-Hui Dong
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Neurology, Affiliated Hospital of Yanbian University, Yanji, China
| | - Hao-Long Li
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
| | - Jia-Song Zhang
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
| | - Yan-Hua Bing
- Brain Science Research Center, Yanbian University, Yanji, China
| | - Chun-Ping Chu
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
| | - Song-Biao Cui
- Department of Neurology, Affiliated Hospital of Yanbian University, Yanji, China
| | - De-Lai Qiu
- Brain Science Research Center, Yanbian University, Yanji, China; Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
| |
Collapse
|
45
|
Lanore F, Cayco-Gajic NA, Gurnani H, Coyle D, Silver RA. Cerebellar granule cell axons support high-dimensional representations. Nat Neurosci 2021; 24:1142-1150. [PMID: 34168340 PMCID: PMC7611462 DOI: 10.1038/s41593-021-00873-x] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 05/13/2021] [Indexed: 02/05/2023]
Abstract
In classical theories of cerebellar cortex, high-dimensional sensorimotor representations are used to separate neuronal activity patterns, improving associative learning and motor performance. Recent experimental studies suggest that cerebellar granule cell (GrC) population activity is low-dimensional. To examine sensorimotor representations from the point of view of downstream Purkinje cell 'decoders', we used three-dimensional acousto-optic lens two-photon microscopy to record from hundreds of GrC axons. Here we show that GrC axon population activity is high dimensional and distributed with little fine-scale spatial structure during spontaneous behaviors. Moreover, distinct behavioral states are represented along orthogonal dimensions in neuronal activity space. These results suggest that the cerebellar cortex supports high-dimensional representations and segregates behavioral state-dependent computations into orthogonal subspaces, as reported in the neocortex. Our findings match the predictions of cerebellar pattern separation theories and suggest that the cerebellum and neocortex use population codes with common features, despite their vastly different circuit structures.
Collapse
Affiliation(s)
- Frederic Lanore
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- University of Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, Bordeaux, France
| | - N Alex Cayco-Gajic
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Group for Neural Theory, Laboratoire de neurosciences cognitives et computationnelles, Département d'études cognitives, École normale supérieure, INSERM U960, Université Paris Sciences et Lettres, Paris, France
| | - Harsha Gurnani
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
| | - Diccon Coyle
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
| | - R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK.
| |
Collapse
|
46
|
Why Does the Neocortex Need the Cerebellum for Working Memory? J Neurosci 2021; 41:6368-6370. [PMID: 34321336 DOI: 10.1523/jneurosci.0701-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 04/28/2021] [Accepted: 05/03/2021] [Indexed: 11/21/2022] Open
|
47
|
Kita K, Albergaria C, Machado AS, Carey MR, Müller M, Delvendahl I. GluA4 facilitates cerebellar expansion coding and enables associative memory formation. eLife 2021; 10:e65152. [PMID: 34219651 PMCID: PMC8291978 DOI: 10.7554/elife.65152] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Accepted: 07/01/2021] [Indexed: 01/17/2023] Open
Abstract
AMPA receptors (AMPARs) mediate excitatory neurotransmission in the central nervous system (CNS) and their subunit composition determines synaptic efficacy. Whereas AMPAR subunits GluA1–GluA3 have been linked to particular forms of synaptic plasticity and learning, the functional role of GluA4 remains elusive. Here, we demonstrate a crucial function of GluA4 for synaptic excitation and associative memory formation in the cerebellum. Notably, GluA4-knockout mice had ~80% reduced mossy fiber to granule cell synaptic transmission. The fidelity of granule cell spike output was markedly decreased despite attenuated tonic inhibition and increased NMDA receptor-mediated transmission. Computational network modeling incorporating these changes revealed that deletion of GluA4 impairs granule cell expansion coding, which is important for pattern separation and associative learning. On a behavioral level, while locomotor coordination was generally spared, GluA4-knockout mice failed to form associative memories during delay eyeblink conditioning. These results demonstrate an essential role for GluA4-containing AMPARs in cerebellar information processing and associative learning.
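The expansion-coding argument in this abstract, that a divergent mossy fiber to granule cell projection combined with thresholding decorrelates similar input patterns, can be pictured with a minimal abstraction. The sketch below is not the authors' network model; the layer sizes, random weights, and threshold are assumed values chosen only to show how an expanded, thresholded representation separates two correlated inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def granule_layer(mossy_input, weights, threshold=1.0):
    """Threshold-linear expansion layer: divergent projection followed by rectification."""
    return np.maximum(weights @ mossy_input - threshold, 0.0)

n_mossy, n_granule = 50, 500
W = rng.standard_normal((n_granule, n_mossy)) / np.sqrt(n_mossy)

# Two similar mossy fiber input patterns.
x1 = rng.standard_normal(n_mossy)
x2 = x1 + 0.3 * rng.standard_normal(n_mossy)

g1, g2 = granule_layer(x1, W), granule_layer(x2, W)
print(np.corrcoef(x1, x2)[0, 1])  # input similarity (high)
print(np.corrcoef(g1, g2)[0, 1])  # granule-layer similarity (lower: pattern separation)
```

The thresholding does the work here: only granule cells driven well above threshold stay active, so overlapping inputs recruit more distinct output populations, which is the property the study finds degraded when GluA4-mediated MF–GC transmission is lost.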
Collapse
Affiliation(s)
- Katarzyna Kita
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
| | - Catarina Albergaria
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
| | - Ana S Machado
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
| | - Megan R Carey
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
| | - Martin Müller
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
| | - Igor Delvendahl
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
| |
Collapse
|
48
|
Gurnani H, Silver RA. Multidimensional population activity in an electrically coupled inhibitory circuit in the cerebellar cortex. Neuron 2021; 109:1739-1753.e8. [PMID: 33848473 PMCID: PMC8153252 DOI: 10.1016/j.neuron.2021.03.027] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 01/20/2021] [Accepted: 03/20/2021] [Indexed: 01/05/2023]
Abstract
Inhibitory neurons orchestrate the activity of excitatory neurons and play key roles in circuit function. Although individual interneurons have been studied extensively, little is known about their properties at the population level. Using random-access 3D two-photon microscopy, we imaged local populations of cerebellar Golgi cells (GoCs), which deliver inhibition to granule cells. We show that population activity is organized into multiple modes during spontaneous behaviors. A slow, network-wide common modulation of GoC activity correlates with the level of whisking and locomotion, while faster (<1 s) differential population activity, arising from spatially mixed heterogeneous GoC responses, encodes more precise information. A biologically detailed GoC circuit model reproduced the common population mode and the dimensionality observed experimentally, but these properties disappeared when electrical coupling was removed. Our results establish that local GoC circuits exhibit multidimensional activity patterns that could be used for inhibition-mediated adaptive gain control and spatiotemporal patterning of downstream granule cells.
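The distinction drawn here between a slow, network-wide common mode and faster differential activity can be pictured with a very simple decomposition: the across-cell mean versus the residual around it. The toy example below is only a schematic of that idea under assumed parameters (cell count, noise level, a binary behavioural state); it is not the recording analysis or the biophysically detailed Golgi cell model used in the study.

```python
import numpy as np

def common_and_differential(activity):
    """Split population activity into a network-wide common mode and the residual.

    activity: (n_cells, n_timepoints). The common mode is the across-cell mean;
    the residual carries the cell-specific (differential) structure.
    """
    common = activity.mean(axis=0)
    return common, activity - common

rng = np.random.default_rng(2)
t = np.arange(3000)
behaviour = (np.sin(2 * np.pi * t / 600) > 0).astype(float)   # toy whisking/locomotion state
n_cells = 40
# Each cell = shared slow modulation by the behavioural state + private fluctuations.
activity = 0.8 * behaviour + 0.5 * rng.standard_normal((n_cells, t.size))
common, residual = common_and_differential(activity)
print(np.corrcoef(common, behaviour)[0, 1])            # common mode tracks the behavioural state
print(abs(np.corrcoef(residual[0], behaviour)[0, 1]))  # a single-cell residual tracks it only weakly
```

In practice the differential modes would be characterized with dimensionality-reduction methods rather than a single residual trace, but the split into shared and cell-specific components is the same idea.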
Collapse
Affiliation(s)
- Harsha Gurnani
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK
| | - R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK.
| |
Collapse
|
49
|
Farrell M, Recanatesi S, Reid RC, Mihalas S, Shea-Brown E. Autoencoder networks extract latent variables and encode these variables in their connectomes. Neural Netw 2021; 141:330-343. [PMID: 33957382 DOI: 10.1016/j.neunet.2021.03.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 03/02/2021] [Accepted: 03/08/2021] [Indexed: 11/30/2022]
Abstract
Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that - even with well constrained neural dynamics - there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. Here, in general there is substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods.
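A minimal numerical sketch of the setting described here, a single-hidden-layer autoencoder trained with L2 weight regularization on inputs generated from a few latent variables, is given below. All sizes, the learning rate, and the regularization strength are assumed toy values, and the network is kept linear for brevity; the final line checks how much of the learned encoder lies in the latent subspace of the inputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 20-dimensional inputs generated from 3 latent variables plus a little noise.
n_latent, n_in, n_hidden, n_samples = 3, 20, 8, 2000
mixing = rng.standard_normal((n_in, n_latent))
latents = rng.standard_normal((n_latent, n_samples))
X = mixing @ latents + 0.01 * rng.standard_normal((n_in, n_samples))

# Single-hidden-layer linear autoencoder with L2 weight regularization,
# trained by plain gradient descent from small random weights.
W_in = 0.01 * rng.standard_normal((n_hidden, n_in))
W_out = 0.01 * rng.standard_normal((n_in, n_hidden))
lr, lam = 1e-3, 1e-2

for step in range(5000):
    H = W_in @ X                  # hidden-layer activity
    err = W_out @ H - X           # reconstruction error
    grad_out = 2 * err @ H.T / n_samples + 2 * lam * W_out
    grad_in = 2 * W_out.T @ err @ X.T / n_samples + 2 * lam * W_in
    W_out -= lr * grad_out
    W_in -= lr * grad_in

print(float(np.mean(err ** 2)))   # reconstruction error after training (small)

# Fraction of the encoder's weight energy lying in the 3-dimensional latent subspace.
U, _, _ = np.linalg.svd(mixing)   # first n_latent columns span the latent subspace
proj = W_in @ U[:, :n_latent]
print(float((proj ** 2).sum() / (W_in ** 2).sum()))  # close to 1
```

With the penalty in place (and small initial weights), most of the encoder's weight energy ends up inside the three-dimensional latent subspace of the inputs, which is the sense in which the latent structure can be read off the connectivity.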
Collapse
Affiliation(s)
- Matthew Farrell
- Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America.
| | - Stefano Recanatesi
- Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America
| | - R Clay Reid
- Allen Institute for Brain Science, Seattle, WA, United States of America
| | - Stefan Mihalas
- Allen Institute for Brain Science, Seattle, WA, United States of America
| | - Eric Shea-Brown
- Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America; Allen Institute for Brain Science, Seattle, WA, United States of America
| |
Collapse
|
50
|
Raman DV, O'Leary T. Frozen algorithms: how the brain's wiring facilitates learning. Curr Opin Neurobiol 2021; 67:207-214. [PMID: 33508698 PMCID: PMC8202511 DOI: 10.1016/j.conb.2020.12.017] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 12/21/2020] [Accepted: 12/30/2020] [Indexed: 12/03/2022]
Abstract
Synapses and neural connectivity are plastic and shaped by experience. But to what extent does connectivity itself influence the ability of a neural circuit to learn? Insights from optimization theory and AI shed light on how learning can be implemented in neural circuits. Though abstract in their nature, learning algorithms provide a principled set of hypotheses on the necessary ingredients for learning in neural circuits. These include the kinds of signals and circuit motifs that enable learning from experience, as well as an appreciation of the constraints that make learning challenging in a biological setting. Remarkably, some simple connectivity patterns can boost the efficiency of relatively crude learning rules, showing how the brain can use anatomy to compensate for the biological constraints of known synaptic plasticity mechanisms. Modern connectomics provides rich data for exploring this principle, and may reveal how brain connectivity is constrained by the requirement to learn efficiently.
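One widely cited, concrete instance of connectivity compensating for a crude learning rule is feedback alignment, in which a fixed random feedback pathway stands in for the exact transposed weights that backpropagation would require. The sketch below is a generic toy illustration of that idea, not an algorithm taken from this review; the regression task, layer sizes, and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression task: targets are a nonlinear function of random inputs.
n_in, n_hidden, n_out, n_samples = 10, 50, 1, 1000
X = rng.standard_normal((n_samples, n_in))
y = np.tanh(X @ (0.5 * rng.standard_normal((n_in, n_out))))

W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))
B = rng.standard_normal((n_out, n_hidden))   # fixed random feedback pathway
lr = 0.01

def loss(W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

print(loss(W1, W2))                          # before training

for step in range(2000):
    H = np.tanh(X @ W1)
    err = H @ W2 - y
    # "Crude" credit assignment: the error is sent back through the fixed random
    # weights B instead of the transpose of W2 (no weight transport needed).
    delta_h = (err @ B) * (1 - H ** 2)
    W2 -= lr * H.T @ err / n_samples
    W1 -= lr * X.T @ delta_h / n_samples

print(loss(W1, W2))                          # after training: lower, despite the random feedback
```

The architecture (a fixed feedback projection) is doing part of the algorithmic work here, which is the general point of the review: wiring can stand in for computations that precise synaptic plasticity rules would otherwise have to perform.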
Collapse
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom.
| |
Collapse
|