1
Kuśmierz Ł, Pereira-Obilinovic U, Lu Z, Mastrovito D, Mihalas S. Hierarchy of Chaotic Dynamics in Random Modular Networks. Physical Review Letters 2025; 134:148402. [PMID: 40279616 DOI: 10.1103/physrevlett.134.148402] [Received: 10/10/2024] [Accepted: 02/21/2025] [Indexed: 04/27/2025]
Abstract
We introduce a model of randomly connected neural populations and study its dynamics by means of dynamical mean-field theory and simulations. Our analysis uncovers a rich phase diagram, featuring high- and low-dimensional chaotic phases, separated by a crossover region characterized by low values of the maximal Lyapunov exponent and participation ratio dimension, but with high values of the Lyapunov dimension that change significantly across the region. Counterintuitively, chaos can be attenuated either by adding noise to strongly modular connectivity or by introducing modularity into random connectivity. Extending the model to include a multilevel, hierarchical connectivity reveals that a loose balance between activities across levels drives the system towards the edge of chaos.
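The chaotic/ordered distinction the abstract relies on can be illustrated with a classic dense random rate network (a simpler cousin of the modular model studied here; all parameters below are illustrative, not the paper's): the maximal Lyapunov exponent, estimated by tracking two nearby trajectories, turns positive once the coupling gain is large enough.

```python
import numpy as np

# Minimal sketch: estimate the maximal Lyapunov exponent of a random rate
# network dx/dt = -x + g*J@tanh(x) by tracking the divergence of two nearby
# trajectories (Benettin-style renormalization).
rng = np.random.default_rng(0)
N, g, dt, steps = 200, 2.0, 0.05, 2000          # g > 1: chaos expected in the dense case
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # i.i.d. Gaussian couplings

x = rng.normal(size=N)
y = x + 1e-8 * rng.normal(size=N)               # perturbed copy
d0 = np.linalg.norm(y - x)

lyap_sum = 0.0
for _ in range(steps):
    x = x + dt * (-x + g * J @ np.tanh(x))
    y = y + dt * (-y + g * J @ np.tanh(y))
    d = np.linalg.norm(y - x)
    lyap_sum += np.log(d / d0)
    y = x + (d0 / d) * (y - x)                  # renormalize the separation

max_lyap = lyap_sum / (steps * dt)
print(f"estimated maximal Lyapunov exponent: {max_lyap:.3f}")
```

Repeating the run with g below 1 would give a negative estimate, the ordered side of the transition.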
Affiliation(s)
- Zhixin Lu
- Allen Institute, Seattle, Washington, USA

2
Shao Y, Dahmen D, Recanatesi S, Shea-Brown E, Ostojic S. Identifying the impact of local connectivity patterns on dynamics in excitatory-inhibitory networks. arXiv 2025:arXiv:2411.06802v3. [PMID: 39650608 PMCID: PMC11623704] [Indexed: 12/11/2024]
Abstract
Networks of excitatory and inhibitory (EI) neurons form a canonical circuit in the brain. Seminal theoretical results on the dynamics of such networks are based on the assumption that synaptic strengths depend on the type of neurons they connect, but are otherwise statistically independent. Recent synaptic physiology datasets, however, highlight the prominence of specific connectivity patterns that go well beyond what is expected from independent connections. While decades of influential research have demonstrated the strong role of the basic EI cell-type structure, to what extent additional connectivity features influence dynamics remains to be fully determined. Here we examine the effects of pairwise connectivity motifs on the linear dynamics in EI networks using an analytical framework that approximates the connectivity in terms of low-rank structures. This low-rank approximation is based on a mathematical derivation of the dominant eigenvalues of the connectivity matrix and predicts how connectivity motifs, and their interactions with cell-type structure, shape responses to external inputs. Our results reveal that one particular pattern of connectivity, the chain motif, has a much stronger impact on dominant eigenmodes than other pairwise motifs. An overrepresentation of chain motifs induces a strong positive eigenvalue in inhibition-dominated networks and generates a potential instability that requires revisiting the classical excitation-inhibition balance criteria. Examining effects of external inputs, we show that chain motifs can on their own induce paradoxical responses where an increased input to inhibitory neurons leads to a decrease in their activity due to the recurrent feedback. These findings have direct implications for the interpretation of experiments in which responses to optogenetic perturbations are measured and used to infer the dynamical regime of cortical circuits.
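For readers unfamiliar with the baseline this paper extends, here is a minimal sketch of the pure cell-type structure with independent connections (no chain motifs; the weights and fractions below are invented for illustration): the type-dependent mean weights act as a rank-1 perturbation whose outlier eigenvalue is predicted by a simple type-weighted mean, while the independent fluctuations fill a disk of radius about 1.

```python
import numpy as np

# Sketch of basic EI cell-type structure: column sign/mean encodes the
# presynaptic type, added on top of an i.i.d. Gaussian bulk.
rng = np.random.default_rng(1)
N, f_exc = 400, 0.8                          # fraction of excitatory neurons
n_e = int(f_exc * N)
w_e, w_i = 3.0, -5.0                         # illustrative mean weights by type

W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
W[:, :n_e] += w_e / N                        # excitatory columns
W[:, n_e:] += w_i / N                        # inhibitory columns

eigs = np.linalg.eigvals(W)
outlier = eigs.real.max()
predicted = f_exc * w_e + (1 - f_exc) * w_i  # eigenvalue of the rank-1 mean part
print(outlier, predicted)
```

The abstract's point is that chain motifs add further structure to the dominant eigenmodes beyond this rank-1 cell-type prediction.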

3
Clark DG, Beiran M. Structure of activity in multiregion recurrent neural networks. Proc Natl Acad Sci U S A 2025; 122:e2404039122. [PMID: 40053363 PMCID: PMC11912375 DOI: 10.1073/pnas.2404039122] [Received: 02/26/2024] [Accepted: 02/07/2025] [Indexed: 03/12/2025]
Abstract
Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity, roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis of this disordered system offers insights into multiregion neural data and trained neural networks.
Affiliation(s)
- David G. Clark
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Manuel Beiran
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027

4
Baron JW. Path-integral approach to sparse non-Hermitian random matrices. Phys Rev E 2025; 111:034217. [PMID: 40247566 DOI: 10.1103/physreve.111.034217] [Received: 11/13/2024] [Accepted: 03/06/2025] [Indexed: 04/19/2025]
Abstract
The theory of large random matrices has proved an invaluable tool for the study of systems with disordered interactions in many quite disparate research areas. Widely applicable results, such as the celebrated elliptic law for dense random matrices, allow one to deduce the statistical properties of the interactions in a complex dynamical system that permit stability. However, such simple and universal results have so far proved difficult to come by in the case of sparse random matrices. Here, we perform an expansion in the inverse connectivity, and thus derive general modified versions of the classic elliptic and semicircle laws, taking into account the sparse correction. This is accomplished using a dynamical approach, which maps the hermitized resolvent of a random matrix onto the response functions of a linear dynamical system. The response functions are then evaluated using a path integral formalism, enabling one to construct Feynman diagrams, which facilitate the perturbative analysis. Additionally, in order to demonstrate the broad utility of the path integral framework, we derive a generic non-Hermitian generalization of the Marchenko-Pastur law, and we also show how one can handle non-negligible higher-order statistics (i.e., non-Gaussian statistics) in dense ensembles.
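The dense elliptic law that the sparse expansion corrects is easy to check numerically. The sketch below (my own construction, with illustrative parameters) builds a Gaussian matrix with correlation tau between J_ij and J_ji and verifies that the spectrum fills an ellipse with semi-axes 1 + tau and 1 - tau.

```python
import numpy as np

# Dense elliptic-law check: Var(J_ij) = 1/N and Corr(J_ij, J_ji) = tau,
# built from symmetric and antisymmetric Gaussian components.
rng = np.random.default_rng(2)
N, tau = 1000, 0.5

G = rng.normal(size=(N, N))
S = (G + G.T) / np.sqrt(2)                     # symmetric component
A = (G - G.T) / np.sqrt(2)                     # antisymmetric component
J = (np.sqrt((1 + tau) / 2) * S + np.sqrt((1 - tau) / 2) * A) / np.sqrt(N)

eigs = np.linalg.eigvals(J)
# Each eigenvalue should satisfy (x/(1+tau))^2 + (y/(1-tau))^2 <= 1 (up to
# finite-N fluctuations at the edge).
inside = (eigs.real / (1 + tau)) ** 2 + (eigs.imag / (1 - tau)) ** 2
print(eigs.real.max(), inside.max())
```

In a stability reading, the rightmost eigenvalue near 1 + tau is what sets the instability threshold; the paper's 1/connectivity expansion shifts this boundary for sparse matrices.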
Affiliation(s)
- Joseph W Baron
- Sorbonne Université, Université PSL, Laboratoire de Physique de l'École Normale Supérieure, ENS, CNRS, Université de Paris, F-75005 Paris, France

5
Rueda-Alaña E, Senovilla-Ganzo R, Grillo M, Vázquez E, Marco-Salas S, Gallego-Flores T, Ordeñana-Manso A, Ftara A, Escobar L, Benguría A, Quintas A, Dopazo A, Rábano M, Vivanco MDM, Aransay AM, Garrigos D, Toval Á, Ferrán JL, Nilsson M, Encinas-Pérez JM, De Pittà M, García-Moreno F. Evolutionary convergence of sensory circuits in the pallium of amniotes. Science 2025; 387:eadp3411. [PMID: 39946453 DOI: 10.1126/science.adp3411] [Received: 04/09/2024] [Accepted: 11/20/2024] [Indexed: 04/23/2025]
Abstract
The amniote pallium contains sensory circuits that are structurally and functionally equivalent, yet their evolutionary relationship remains unresolved. We used birthdating analysis, single-cell RNA and spatial transcriptomics, and mathematical modeling to compare the development and evolution of known pallial circuits across birds (chick), lizards (gecko), and mammals (mouse). We reveal that neurons within these circuits' stations are generated at different developmental times and in different brain regions across species, and find an early developmental divergence in the transcriptomic progression of glutamatergic neurons. Our research highlights developmental distinctions and functional similarities in the sensory circuit between birds and mammals, suggesting the convergence of high-order sensory processing across amniote lineages.
Affiliation(s)
- Eneritz Rueda-Alaña
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain
- Rodrigo Senovilla-Ganzo
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain
- Marco Grillo
- Science for Life Laboratory, Department of Biophysics and Biochemistry, Stockholm University, Solna, Sweden
- Enrique Vázquez
- Genomics Unit, Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain
- Sergio Marco-Salas
- Science for Life Laboratory, Department of Biophysics and Biochemistry, Stockholm University, Solna, Sweden
- Tatiana Gallego-Flores
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Aitor Ordeñana-Manso
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Artemis Ftara
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Laura Escobar
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Alberto Benguría
- Genomics Unit, Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain
- Ana Quintas
- Genomics Unit, Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain
- Ana Dopazo
- Genomics Unit, Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Cardiovasculares (CIBERCV), Madrid, Spain
- Miriam Rábano
- Center for Cooperative Research in Biosciences (CIC bioGUNE), Basque Research and Technology Alliance (BRTA), Bizkaia Technology Park, Derio, Spain
- María dM Vivanco
- Center for Cooperative Research in Biosciences (CIC bioGUNE), Basque Research and Technology Alliance (BRTA), Bizkaia Technology Park, Derio, Spain
- Ana María Aransay
- Center for Cooperative Research in Biosciences (CIC bioGUNE), Basque Research and Technology Alliance (BRTA), Bizkaia Technology Park, Derio, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Madrid, Spain
- Daniel Garrigos
- Department of Human Anatomy, Medical School, University of Murcia and Murcia Arrixaca Institute for Biomedical Research, Murcia, Spain
- Ángel Toval
- Department of Human Anatomy, Medical School, University of Murcia and Murcia Arrixaca Institute for Biomedical Research, Murcia, Spain
- José Luis Ferrán
- Department of Human Anatomy, Medical School, University of Murcia and Murcia Arrixaca Institute for Biomedical Research, Murcia, Spain
- Mats Nilsson
- Science for Life Laboratory, Department of Biophysics and Biochemistry, Stockholm University, Solna, Sweden
- Juan Manuel Encinas-Pérez
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain
- IKERBASQUE Foundation, Bilbao, Spain
- Maurizio De Pittà
- Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain
- Basque Center for Applied Mathematics, Bilbao, Spain
- Computational Neuroscience Hub, Krembil Research Institute, University Health Network, Toronto, ON, Canada
- Department of Physiology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Fernando García-Moreno
- Achucarro Basque Center for Neuroscience, Scientific Park of the University of the Basque Country (UPV/EHU), Leioa, Spain
- Department of Neuroscience, Faculty of Medicine and Odontology, UPV/EHU, Barrio Sarriena s/n, Leioa, Bizkaia, Spain
- IKERBASQUE Foundation, Bilbao, Spain

6
Pereira-Obilinovic U, Froudist-Walsh S, Wang XJ. Cognitive network interactions through communication subspaces in large-scale models of the neocortex. bioRxiv 2024:2024.11.01.621513. [PMID: 39554020 PMCID: PMC11566003 DOI: 10.1101/2024.11.01.621513] [Indexed: 11/19/2024]
Abstract
Neocortex-wide neural activity is organized into distinct networks of areas engaged in different cognitive processes. To elucidate the underlying mechanism of flexible network reconfiguration, we developed connectivity-constrained macaque and human whole-cortex models. In our model, within-area connectivity consists of a mixture of symmetric, asymmetric, and random motifs that give rise to stable (attractor) or transient (sequential) heterogeneous dynamics. Assuming sparse low-rank plus random inter-areal connectivity constrained by cognitive networks' activation maps, we show that our model captures key aspects of the cognitive networks' dynamics and interactions observed experimentally, in particular the anticorrelation between the default mode network and the dorsal attention network. Communication between networks is shaped by the alignment of long-range communication subspaces with local connectivity motifs and is switchable through a bottom-up, salience-dependent routing mechanism. Furthermore, the frontoparietal multiple-demand network displays a coexistence of stable and dynamic coding, suitable for top-down cognitive control. Our work provides a theoretical framework for understanding dynamic routing in cortical networks during cognition.
Affiliation(s)
- Ulises Pereira-Obilinovic
- Center for Neural Science, New York University, New York, NY, USA
- The Allen Institute for Neural Dynamics, Seattle, WA, USA
- Sean Froudist-Walsh
- Bristol Computational Neuroscience Unit, School of Engineering Mathematics and Technology, University of Bristol, Bristol, UK
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY, USA

7
Kati Y, Ranft J, Lindner B. Self-consistent autocorrelation of a disordered Kuramoto model in the asynchronous state. Phys Rev E 2024; 110:054301. [PMID: 39690640 DOI: 10.1103/physreve.110.054301] [Received: 03/23/2024] [Accepted: 10/02/2024] [Indexed: 12/19/2024]
Abstract
The Kuramoto model has provided deep insights into synchronization phenomena and remains an important paradigm to study the dynamics of coupled oscillators. Yet, despite its success, the asynchronous regime in the Kuramoto model has received limited attention. Here, we adapt and enhance the mean-field approach originally proposed by Stiller and Radons [Phys. Rev. E 58, 1789 (1998)] to study the asynchronous state in the Kuramoto model with a finite number of oscillators and with disordered connectivity. By employing an iterative stochastic mean-field approximation, the complex N-oscillator system can effectively be reduced to a one-dimensional dynamics, both for homogeneous and heterogeneous networks. This method allows us to investigate the power spectra of individual oscillators as well as of the multiplicative "network noise" in the Kuramoto model in the asynchronous regime. By taking into account the finite system size and disorder in the connectivity, our findings become relevant for the dynamics of coupled oscillators that appear in the context of biological or technical systems.
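As a point of reference (this is the classic globally coupled Kuramoto model, not the disordered finite-size network analyzed in the paper; parameters are illustrative), the asynchronous regime can be seen directly by simulating below the synchronization threshold, where the order parameter stays at the small values expected from finite-size fluctuations.

```python
import numpy as np

# Globally coupled Kuramoto model below threshold: with a unit-width Gaussian
# frequency distribution the critical coupling is K_c = 2/(pi*g(0)) ~ 1.6,
# so K = 0.5 leaves the population asynchronous, r = |mean(exp(i*theta))| small.
rng = np.random.default_rng(6)
N, K, dt, steps = 1000, 0.5, 0.05, 4000
omega = rng.normal(0.0, 1.0, N)              # disordered natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

r_vals = []
for _ in range(steps):
    z = np.exp(1j * theta).mean()            # complex order parameter
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    r_vals.append(np.abs(z))

r_mean = np.mean(r_vals[steps // 2:])        # discard the transient
print(f"time-averaged order parameter r = {r_mean:.3f}")
```

The paper's contribution is a self-consistent treatment of exactly these residual finite-size, disorder-induced fluctuations.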

8
Pazó D. Discontinuous transition to chaos in a canonical random neural network. Phys Rev E 2024; 110:014201. [PMID: 39161016 DOI: 10.1103/physreve.110.014201] [Received: 01/17/2024] [Accepted: 06/11/2024] [Indexed: 08/21/2024]
Abstract
We study a paradigmatic random recurrent neural network introduced by Sompolinsky, Crisanti, and Sommers (SCS). In the infinite size limit, this system exhibits a direct transition from a homogeneous rest state to chaotic behavior, with the Lyapunov exponent gradually increasing from zero. We generalize the SCS model considering odd saturating nonlinear transfer functions, beyond the usual choice ϕ(x) = tanh x. A discontinuous transition to chaos occurs whenever the slope of ϕ at 0 is a local minimum [i.e., for ϕ'''(0) > 0]. Chaos appears out of the blue, by an attractor-repeller fold. Accordingly, the Lyapunov exponent stays away from zero at the birth of chaos.
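The continuous transition in the original tanh case can be reproduced in a few lines (a hedged sketch with illustrative parameters, not the paper's analysis of the discontinuous case): below the critical gain g = 1 activity decays to the rest state, while above it the network sustains irregular fluctuations.

```python
import numpy as np

# SCS-type rate network dx/dt = -x + g*J@phi(x), Euler-integrated; the
# long-time activity variance distinguishes rest state from chaos.
def activity_variance(g, phi, N=300, dt=0.1, steps=3000, seed=3):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    x = 0.1 * rng.normal(size=N)
    for _ in range(steps):
        x = x + dt * (-x + g * J @ phi(x))
    return x.var()

quiet = activity_variance(0.8, np.tanh)   # below the transition: decays to rest
active = activity_variance(1.5, np.tanh)  # above it: sustained fluctuations
print(quiet, active)
```

The paper's result is that for transfer functions with ϕ'''(0) > 0 the onset is instead abrupt: a chaotic attractor of finite amplitude appears via an attractor-repeller fold.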

9
Poley L, Galla T, Baron JW. Eigenvalue spectra of finely structured random matrices. Phys Rev E 2024; 109:064301. [PMID: 39020998 DOI: 10.1103/physreve.109.064301] [Received: 11/03/2023] [Accepted: 04/12/2024] [Indexed: 07/20/2024]
Abstract
Random matrix theory allows for the deduction of stability criteria for complex systems using only a summary knowledge of the statistics of the interactions between components. As such, results like the well-known elliptical law are applicable in a myriad of different contexts. However, it is often assumed that all components of the complex system in question are statistically equivalent, which is unrealistic in many applications. Here we introduce the concept of a finely structured random matrix. These are random matrices with element-specific statistics, which can be used to model systems in which the individual components are statistically distinct. By supposing that the degree of "fine structure" in the matrix is small, we arrive at a succinct "modified" elliptical law. We demonstrate the direct applicability of our results to the niche and cascade models in theoretical ecology, as well as a model of a neural network, and a directed network with arbitrary degree distribution. The simple closed form of our central results allow us to draw broad qualitative conclusions about the effect of fine structure on stability.

10
Mastrovito D, Liu YH, Kusmierz L, Shea-Brown E, Koch C, Mihalas S. Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks. bioRxiv 2024:2024.05.15.594236. [PMID: 38798582 PMCID: PMC11118502 DOI: 10.1101/2024.05.15.594236] [Indexed: 05/29/2024]
Abstract
Recurrent neural networks exhibit chaotic dynamics when the variance in their connection strengths exceeds a critical value. Recent work indicates connection variance also modulates learning strategies; networks learn "rich" representations when initialized with low coupling and "lazier" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in a measure that clinically discriminates levels of consciousness, the perturbational complexity index (PCIst). Networks with high values of PCIst exhibit stable dynamics and rich learning, suggesting a consciousness prior may promote rich learning. The results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
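A minimal illustration of the setup's first ingredient (my own construction; the preprint's network and training details are not reproduced here): weight a Watts-Strogatz graph with Gaussian hidden weights and track the spectral radius of the weight matrix, a standard proxy for the ordered-to-chaotic boundary in rate networks (radius crossing 1).

```python
import numpy as np

rng = np.random.default_rng(4)

def ws_edges(N, k, p):
    """Undirected ring lattice, k neighbors per side, each edge rewired w.p. p."""
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(1, k + 1):
            t = rng.integers(N) if rng.random() < p else (i + j) % N
            A[i, t] = A[t, i] = True
    np.fill_diagonal(A, False)
    return A

def spectral_radius(sigma, N=300, k=5, p=0.1):
    # Gaussian weights on the graph edges, scaled by degree so that the
    # radius tracks sigma (the hidden weight scale).
    A = ws_edges(N, k, p)
    deg = A.sum(1, keepdims=True).clip(min=1)
    W = np.where(A, rng.normal(size=(N, N)), 0.0) * sigma / np.sqrt(deg)
    return np.abs(np.linalg.eigvals(W)).max()

low = spectral_radius(0.5)   # small weight variance: ordered regime expected
high = spectral_radius(2.0)  # large weight variance: chaotic regime expected
print(low, high)
```

Sweeping sigma across 1 in such a construction is one way to place initializations on either side of the transition the preprint studies.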

11
Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. [PMID: 38648479 PMCID: PMC11067032 DOI: 10.1073/pnas.2312992121] [Received: 07/28/2023] [Accepted: 02/13/2024] [Indexed: 04/25/2024]
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical works posit that this variability arises potentially from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, where generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience of stimulus-evoked samples to inference when part or all of the sensory information is missing, which suggests a computational role for spontaneous activity as a representation of priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may serve brain function as a Bayesian generative model.
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan

12
Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591 DOI: 10.1016/j.plrev.2023.12.006] [Received: 12/08/2023] [Accepted: 12/10/2023] [Indexed: 12/27/2023]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain

13
Liu YH, Baratin A, Cornford J, Mihalas S, Shea-Brown E, Lajoie G. How connectivity structure shapes rich and lazy learning in neural circuits. arXiv 2024:arXiv:2310.08513v2. [PMID: 37873007 PMCID: PMC10593070] [Indexed: 10/25/2023]
Abstract
In theoretical neuroscience, recent work leverages deep learning tools to explore how certain attributes of a network critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representation are observed over the course of learning. However, in biology, neural circuit connectivity could exhibit a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights, in particular their effective rank, influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally-driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards the rich regime. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.
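The notion of effective rank can be made concrete with the participation ratio of the squared singular values, one common definition (the authors may use a different estimator): a Gaussian initialization is effectively high-rank, while an explicit rank-r initialization is not.

```python
import numpy as np

rng = np.random.default_rng(5)
N, r = 200, 3

def effective_rank(W):
    # Participation ratio of the squared singular-value spectrum:
    # (sum s^2)^2 / sum s^4, which lies between 1 and the exact rank.
    s2 = np.linalg.svd(W, compute_uv=False) ** 2
    return s2.sum() ** 2 / (s2 ** 2).sum()

W_random = rng.normal(size=(N, N)) / np.sqrt(N)          # generic random init
U, V = rng.normal(size=(N, r)), rng.normal(size=(N, r))
W_lowrank = U @ V.T / N                                  # explicit rank-r init

print(effective_rank(W_random), effective_rank(W_lowrank))
```

In this measure the random init sits at an effective rank of order N, while the structured init stays at or below r, which is the axis along which the abstract's rich/lazy distinction is drawn.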

14
Hutt A, Trotter D, Pariz A, Valiante TA, Lefebvre J. Diversity-induced trivialization and resilience of neural dynamics. Chaos 2024; 34:013147. [PMID: 38285722 DOI: 10.1063/5.0165773] [Received: 06/30/2023] [Accepted: 01/01/2024] [Indexed: 01/31/2024]
Abstract
Heterogeneity is omnipresent across all living systems. Diversity enriches the dynamical repertoire of these systems but remains challenging to reconcile with their manifest robustness and dynamical persistence over time, a fundamental feature called resilience. To better understand the mechanism underlying resilience in neural circuits, we considered a nonlinear network model, extracting the relationship between excitability heterogeneity and resilience. To measure resilience, we quantified the number of stationary states of this network and how they are affected by various control parameters. We analyzed, both analytically and numerically, gradient and non-gradient systems modeled as nonlinear sparse neural networks evolving over long time scales. Our analysis shows that neuronal heterogeneity quenches the number of stationary states while decreasing the susceptibility to bifurcations: a phenomenon known as trivialization. Heterogeneity was found to implement a homeostatic control mechanism enhancing network resilience to changes in network size and connection probability by quenching the system's dynamic volatility.
Affiliation(s)
- Axel Hutt
- MLMS, MIMESIS, Université de Strasbourg, CNRS, Inria, ICube, 67000 Strasbourg, France
- Daniel Trotter
- Department of Physics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Aref Pariz
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Taufik A Valiante
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Electrical and Computer Engineering, Institute of Medical Science, Institute of Biomedical Engineering, Division of Neurosurgery, Department of Surgery, CRANIA (Center for Advancing Neurotechnological Innovation to Application), Max Planck-University of Toronto Center for Neural Science and Technology, University of Toronto, Toronto, Ontario M5S 3G8, Canada
- Jérémie Lefebvre
- Department of Physics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Department of Mathematics, University of Toronto, Toronto, Ontario M5S 2E4, Canada

15
Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023; 12:e86552. [PMID: 38084779 PMCID: PMC10810607 DOI: 10.7554/elife.86552] [Received: 01/31/2023] [Accepted: 12/07/2023] [Indexed: 01/26/2024]
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity are yet unknown. Here, we show that neural circuits, endowed with heterogeneous neural assemblies of different sizes, naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type specific connectivity, to explain how neural timescales depend on assembly size and show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies by a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, University of Oregon, Eugene, United States
- Mathematics and Biology, University of Oregon, Eugene, United States
16
Zdeblick DN, Shea-Brown ET, Witten DM, Buice MA. Modeling functional cell types in spike train data. PLoS Comput Biol 2023; 19:e1011509. [PMID: 37824442 PMCID: PMC10569560 DOI: 10.1371/journal.pcbi.1011509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Accepted: 09/12/2023] [Indexed: 10/14/2023] Open
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
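The expectation-maximization idea behind the "simultaneous" method can be sketched on a toy problem: clustering scalar "single-cell parameters" with a two-type Gaussian mixture. This is a simplified stand-in under illustrative assumptions, not the authors' hierarchical generative model or variational algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "single-cell parameters": one scalar per neuron, drawn from two cell types.
params = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])

mu = np.array([-1.0, 1.0])   # initial guesses for the type means
pi = np.array([0.5, 0.5])    # initial type proportions
sigma = 0.5                  # fixed within-type spread, for brevity

for _ in range(50):
    # E-step: posterior responsibility of each type for each neuron
    ll = -((params[:, None] - mu[None, :]) ** 2) / (2 * sigma**2)
    r = pi * np.exp(ll)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate type means and proportions
    mu = (r * params[:, None]).sum(axis=0) / r.sum(axis=0)
    pi = r.mean(axis=0)

print(np.sort(mu))  # should approach the ground-truth type means (-2, 2)
```

The paper replaces the scalar parameter with full single-cell model parameters and the Gaussian likelihood with the likelihood of spike-train recordings, but the E/M alternation is the same skeleton.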
Affiliation(s)
- Daniel N. Zdeblick
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, United States of America
- Eric T. Shea-Brown
- Department of Applied Math, University of Washington, Seattle, Washington, United States of America
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Daniela M. Witten
- Department of Statistics and Biostatistics, University of Washington, Seattle, Washington, United States of America
- Michael A. Buice
- Department of Applied Math, University of Washington, Seattle, Washington, United States of America
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
17
Clark DG, Abbott LF, Litwin-Kumar A. Dimension of Activity in Random Neural Networks. PHYSICAL REVIEW LETTERS 2023; 131:118401. [PMID: 37774280 DOI: 10.1103/physrevlett.131.118401] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Revised: 05/25/2023] [Accepted: 08/08/2023] [Indexed: 10/01/2023]
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks (in particular, that they can generate chaotic activity); however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
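A standard measure of effective dimension in this literature is the participation ratio of the covariance eigenvalues. A minimal numerical sketch (not the cavity-DMFT calculation of the Letter) contrasts unstructured and latent-driven activity:

```python
import numpy as np

def participation_ratio(C):
    """Effective dimension (sum_i lambda_i)^2 / sum_i lambda_i^2 of a covariance."""
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
n_samples, n_units = 5000, 200

# Unstructured activity: each unit fluctuates independently
X = rng.standard_normal((n_samples, n_units))
print(participation_ratio(np.cov(X, rowvar=False)))  # extensive, of order n_units

# Coordinated activity: all units driven by 3 shared latent signals
L = rng.standard_normal((n_units, 3))
Y = X[:, :3] @ L.T + 0.1 * rng.standard_normal((n_samples, n_units))
print(participation_ratio(np.cov(Y, rowvar=False)))  # of order the number of latents
```

"Extensive but fractionally low" dimension means the first quantity scales with n_units but with a proportionality constant below one.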
Affiliation(s)
- David G Clark
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- Ashok Litwin-Kumar
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
18
Hutt A, Rich S, Valiante TA, Lefebvre J. Intrinsic neural diversity quenches the dynamic volatility of neural networks. Proc Natl Acad Sci U S A 2023; 120:e2218841120. [PMID: 37399421 PMCID: PMC10334753 DOI: 10.1073/pnas.2218841120] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 05/19/2023] [Indexed: 07/05/2023] Open
Abstract
Heterogeneity is the norm in biology. The brain is no different: neuronal cell types are myriad, reflected through their cellular morphology, type, excitability, connectivity motifs, and ion channel distributions. While this biophysical diversity enriches neural systems' dynamical repertoire, it remains challenging to reconcile with the robustness and persistence of brain function over time (resilience). To better understand the relationship between excitability heterogeneity (variability in excitability within a population of neurons) and resilience, we analyzed both analytically and numerically a nonlinear sparse neural network with balanced excitatory and inhibitory connections evolving over long time scales. Homogeneous networks demonstrated increases in excitability and strong firing rate correlations (signs of instability) in response to a slowly varying modulatory fluctuation. Excitability heterogeneity tuned network stability in a context-dependent way by restraining responses to modulatory challenges and limiting firing rate correlations, while enriching dynamics during states of low modulatory drive. Excitability heterogeneity was found to implement a homeostatic control mechanism enhancing network resilience to changes in population size, connection probability, and the strength and variability of synaptic weights, by quenching the volatility of network dynamics (i.e., their susceptibility to critical transitions). Together, these results highlight the fundamental role played by cell-to-cell heterogeneity in the robustness of brain function in the face of change.
Affiliation(s)
- Axel Hutt
- Université de Strasbourg, CNRS, Inria, ICube, MLMS, MIMESIS, Strasbourg F-67000, France
- Scott Rich
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Taufik A. Valiante
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
- Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON M5G 2C4, Canada
- Center for Advancing Neurotechnological Innovation to Application, University of Toronto, Toronto, ON M5G 2A2, Canada
- Max Planck-University of Toronto Center for Neural Science and Technology, University of Toronto, Toronto, ON M5S 3G8, Canada
- Jérémie Lefebvre
- Krembil Brain Institute, Division of Clinical and Computational Neuroscience, University Health Network, Toronto, ON M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, Canada
19
Ranft J, Lindner B. Theory of the asynchronous state of structured rotator networks and its application to recurrent networks of excitatory and inhibitory units. Phys Rev E 2023; 107:044306. [PMID: 37198857 DOI: 10.1103/physreve.107.044306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Accepted: 03/28/2023] [Indexed: 05/19/2023]
Abstract
Recurrently coupled oscillators that are sufficiently heterogeneous and/or randomly coupled can show asynchronous activity in which there are no significant correlations among the units of the network. The asynchronous state can nevertheless exhibit rich temporal correlation statistics that are generally difficult to capture theoretically. For randomly coupled rotator networks, it is possible to derive differential equations that determine the autocorrelation functions of the network noise and of the single elements in the network. So far, the theory has been restricted to statistically homogeneous networks, making it difficult to apply this framework to real-world networks, which are structured with respect to the properties of the single units and their connectivity. A particularly striking case is that of neural networks, for which one has to distinguish between excitatory and inhibitory neurons, which drive their target neurons towards or away from the firing threshold. To take such network structures into account, here we extend the theory for rotator networks to the case of multiple populations. Specifically, we derive a system of differential equations that govern the self-consistent autocorrelation functions of the network fluctuations in the respective populations. We then apply this general theory to the special but important case of recurrent networks of excitatory and inhibitory units in the balanced case and compare our theory to numerical simulations. We inspect the effect of the network structure on the noise statistics by comparing our results to the case of an equivalent homogeneous network devoid of internal structure. Our results show that structured connectivity and heterogeneity of the oscillator type can either enhance or reduce the overall strength of the generated network noise and shape its temporal correlations.
Affiliation(s)
- Jonas Ranft
- Institut de Biologie de l'ENS, Ecole Normale Supérieure, CNRS, Inserm, Université PSL, 46 rue d'Ulm, 75005 Paris, France
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience, Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
20
Baron JW, Jewell TJ, Ryder C, Galla T. Breakdown of Random-Matrix Universality in Persistent Lotka-Volterra Communities. PHYSICAL REVIEW LETTERS 2023; 130:137401. [PMID: 37067312 DOI: 10.1103/physrevlett.130.137401] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 06/17/2022] [Accepted: 03/06/2023] [Indexed: 06/19/2023]
Abstract
The eigenvalue spectrum of a random matrix often only depends on the first and second moments of its elements, but not on the specific distribution from which they are drawn. The validity of this universality principle is often assumed without proof in applications. In this Letter, we offer a pertinent counterexample in the context of the generalized Lotka-Volterra equations. Using dynamic mean-field theory, we derive the statistics of the interactions between species in an evolved ecological community. We then show that the full statistics of these interactions, beyond those of a Gaussian ensemble, are required to correctly predict the eigenvalue spectrum and therefore stability. Consequently, the universality principle fails in this system. We thus show that the eigenvalue spectra of random matrices can be used to deduce the stability of "feasible" ecological communities, but only if the emergent non-Gaussian statistics of the interactions between species are taken into account.
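The universality baseline that this Letter tests can be checked numerically: two i.i.d. ensembles with matched first and second moments share the same limiting spectrum (the circle law), and the Letter's point is that the correlated, non-Gaussian interactions of an evolved community escape this. A sketch of the baseline only, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Two i.i.d. ensembles with identical first and second moments (mean 0, variance 1/N)
A_gauss = rng.standard_normal((N, N)) / np.sqrt(N)
A_binary = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)

radii = [np.abs(np.linalg.eigvals(A)).max() for A in (A_gauss, A_binary)]
print(radii)  # both spectral radii approach 1, as the circle law predicts
```

For the Lotka-Volterra interactions derived in the Letter, the elements are no longer independent with Gaussian-like statistics, and this moment-matching argument breaks down.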
Affiliation(s)
- Joseph W Baron
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
- Thomas Jun Jewell
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
- Christopher Ryder
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
- Tobias Galla
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
21
Haruna J, Toshio R, Nakano N. Path integral approach to universal dynamics of reservoir computers. Phys Rev E 2023; 107:034306. [PMID: 37073052 DOI: 10.1103/physreve.107.034306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 02/06/2023] [Indexed: 04/20/2023]
Abstract
In this work, we give a characterization of the reservoir computer (RC) in terms of its network structure, specifically the probability distribution of its random coupling constants. First, based on the path integral method, we clarify the universal behavior of the random network dynamics in the thermodynamic limit, which depends only on the asymptotic behavior of the second cumulant generating functions of the network coupling constants. This result enables us to classify random networks into several universality classes, according to the distribution function of coupling constants chosen for the networks. Interestingly, this classification has a close relationship with the distribution of eigenvalues of the random coupling matrix. We also comment on the relation between our theory and some practical choices of random connectivity in the RC. Subsequently, we investigate the relationship between the RC's computational power and the network parameters for several universality classes. We perform several numerical simulations to evaluate the phase diagrams of the steady reservoir states, common-signal-induced synchronization, and the computational power in chaotic time series inference tasks. As a result, we clarify the close relationship between these quantities, in particular a remarkable computational performance near the phase transitions, which is realized even near a nonchaotic transition boundary. These results may provide a new perspective on design principles for the RC.
Affiliation(s)
- Junichi Haruna
- Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Riki Toshio
- Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Naoto Nakano
- Graduate School of Advanced Mathematical Sciences, Meiji University, Tokyo 164-8525, Japan
22
Zdeblick DN, Shea-Brown ET, Witten DM, Buice MA. Modeling functional cell types in spike train data. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.02.28.530327. [PMID: 36909648 PMCID: PMC10002678 DOI: 10.1101/2023.02.28.530327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
Affiliation(s)
- Michael A. Buice
- Applied Math, University of Washington
- Allen Institute MindScope Program
23
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488 PMCID: PMC9894562 DOI: 10.1371/journal.pcbi.1010855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 02/02/2023] [Accepted: 01/06/2023] [Indexed: 01/24/2023] Open
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. 
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
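The core idea of approximating cell-type mean structure by a low-rank matrix whose dominant eigenvalue detaches from the random bulk can be sketched numerically. The weights below are illustrative, not values from the paper, and the sketch covers only the rank-one mean part, not reciprocal motifs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 600
NE = NI = N // 2
J_E, J_I = 4.0, -2.0   # illustrative mean excitatory / inhibitory weights

# Cell-type mean structure is rank one: every row carries the same type-dependent means
m = np.concatenate([np.full(NE, J_E / N), np.full(NI, J_I / N)])
W = np.tile(m, (N, 1)) + 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)

ev = np.linalg.eigvals(W)
outlier = ev[np.argmax(ev.real)].real
bulk_edge = np.sort(np.abs(ev))[-2]
# The rank-one mean part ones(N,1) @ m[None,:] predicts an outlier at
# sum(m) = (J_E + J_I) / 2, detached from the random bulk of radius 0.5.
print(outlier, bulk_edge)
```

The dominant eigenvalue tracks the rank-one prediction while the remaining eigenvalues stay in the disordered bulk; this separation is what makes the low-rank approximation of the global recurrent feedback work.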
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
24
Mosheiff N, Ermentrout B, Huang C. Chaotic dynamics in spatially distributed neuronal networks generate population-wide shared variability. PLoS Comput Biol 2023; 19:e1010843. [PMID: 36626362 PMCID: PMC9870129 DOI: 10.1371/journal.pcbi.1010843] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 01/23/2023] [Accepted: 12/26/2022] [Indexed: 01/11/2023] Open
Abstract
Neural activity in the cortex is highly variable in response to repeated stimuli. Population recordings across the cortex demonstrate that the variability of neuronal responses is shared among large groups of neurons and concentrates in a low dimensional space. However, the source of the population-wide shared variability is unknown. In this work, we analyzed the dynamical regimes of spatially distributed networks of excitatory and inhibitory neurons. We found chaotic spatiotemporal dynamics in networks with similar excitatory and inhibitory projection widths, an anatomical feature of the cortex. The chaotic solutions contain broadband frequency power in rate variability and have distance-dependent and low-dimensional correlations, in agreement with experimental findings. In addition, rate chaos can be induced by globally correlated noisy inputs. These results suggest that spatiotemporal chaos in cortical networks can explain the shared variability observed in neuronal population responses.
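Rate chaos of the kind analyzed here can be demonstrated in the classic random rate model (no spatial structure; parameters illustrative) by measuring the divergence rate of two nearby trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 500, 2.0, 0.05   # gain g > 1: chaotic regime of the classic random rate model
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

def step(x):
    """Euler step of dx/dt = -x + J tanh(x)."""
    return x + dt * (-x + J @ np.tanh(x))

x = rng.standard_normal(N)
for _ in range(2000):        # discard the transient
    x = step(x)

# Chaos test: a tiny perturbation grows exponentially
y = x + 1e-8 * rng.standard_normal(N)
d0 = np.linalg.norm(y - x)
n_steps = 200
for _ in range(n_steps):
    x, y = step(x), step(y)
lyap = np.log(np.linalg.norm(y - x) / d0) / (n_steps * dt)
print(lyap)  # positive: nearby trajectories diverge
```

The paper's networks add excitatory/inhibitory structure and spatially decaying projections on top of this basic mechanism, which shapes the correlations and dimensionality of the chaotic fluctuations.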
Affiliation(s)
- Noga Mosheiff
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Chengcheng Huang
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
25
Baron JW. Eigenvalue spectra and stability of directed complex networks. Phys Rev E 2022; 106:064302. [PMID: 36671075 DOI: 10.1103/physreve.106.064302] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 10/30/2022] [Indexed: 12/12/2022]
Abstract
Quantifying the eigenvalue spectra of large random matrices allows one to understand the factors that contribute to the stability of dynamical systems with many interacting components. This work explores the effect that the interaction network between components has on the eigenvalue spectrum. We build on previous results, which usually only take into account the mean degree of the network, by allowing for nontrivial network degree heterogeneity. We derive closed-form expressions for the eigenvalue spectrum of the adjacency matrix of a general weighted and directed network. Using these results, which are valid for any large well-connected complex network, we then derive compact formulas for the corrections (due to nonzero network heterogeneity) to well-known results in random matrix theory. Specifically, we derive modified versions of the Wigner semicircle law, the Girko circle law, and the elliptic law and any outlier eigenvalues. We also derive a surprisingly neat analytical expression for the eigenvalue density of a directed Barabási-Albert network. We are thus able to deduce that network heterogeneity is mostly a destabilizing influence in complex dynamical systems.
Affiliation(s)
- Joseph W Baron
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
26
Engelken R, Ingrosso A, Khajeh R, Goedeke S, Abbott LF. Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol 2022; 18:e1010590. [PMID: 36469504 PMCID: PMC9754616 DOI: 10.1371/journal.pcbi.1010590] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 12/15/2022] [Accepted: 09/20/2022] [Indexed: 12/12/2022] Open
Abstract
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
Affiliation(s)
- Rainer Engelken
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Alessandro Ingrosso
- The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy
- Ramin Khajeh
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- L. F. Abbott
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
27
Movement is governed by rotational neural dynamics in spinal motor networks. Nature 2022; 610:526-531. [PMID: 36224394 DOI: 10.1038/s41586-022-05293-w] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 08/30/2022] [Indexed: 11/08/2022]
Abstract
Although the generation of movements is a fundamental function of the nervous system, the underlying neural principles remain unclear. As flexor and extensor muscle activities alternate during rhythmic movements such as walking, it is often assumed that the responsible neural circuitry is similarly exhibiting alternating activity [1]. Here we present ensemble recordings of neurons in the lumbar spinal cord that indicate that, rather than alternating, the population is performing a low-dimensional 'rotation' in neural space, in which the neural activity is cycling through all phases continuously during the rhythmic behaviour. The radius of rotation correlates with the intended muscle force, and a perturbation of the low-dimensional trajectory can modify the motor behaviour. As existing models of spinal motor control do not offer an adequate explanation of rotation [1,2], we propose a theory of neural generation of movements from which this and other unresolved issues, such as speed regulation, force control and multifunctionalism, are readily explained.
28
Regimes and mechanisms of transient amplification in abstract and biological neural networks. PLoS Comput Biol 2022; 18:e1010365. [PMID: 35969604 PMCID: PMC9377633 DOI: 10.1371/journal.pcbi.1010365] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 07/06/2022] [Indexed: 11/24/2022] Open
Abstract
Neuronal networks encode information through patterns of activity that define the networks’ function. The neurons’ activity relies on specific connectivity structures, yet the link between structure and function is not fully understood. Here, we tackle this structure-function problem with a new conceptual approach. Instead of manipulating the connectivity directly, we focus on upper triangular matrices, which represent the network dynamics in a given orthonormal basis obtained by the Schur decomposition. This abstraction allows us to independently manipulate the eigenspectrum and feedforward structures of a connectivity matrix. Using this method, we describe a diverse repertoire of non-normal transient amplification, and to complement the analysis of the dynamical regimes, we quantify the geometry of output trajectories through the effective rank of both the eigenvector and the dynamics matrices. Counter-intuitively, we find that shrinking the eigenspectrum’s imaginary distribution leads to highly amplifying regimes in linear networks and to long-lasting dynamics in nonlinear networks. We also find a trade-off between amplification and dimensionality of neuronal dynamics, i.e., trajectories in neuronal state-space. Networks that can amplify a large number of orthogonal initial conditions produce neuronal trajectories that lie in the same subspace of the neuronal state-space. Finally, we examine networks of excitatory and inhibitory neurons. We find that the strength of global inhibition is directly linked with the amplitude of amplification, such that weakening inhibitory weights also decreases amplification, and that the eigenspectrum’s imaginary distribution grows with an increase in the ratio between excitatory-to-inhibitory and excitatory-to-excitatory connectivity strengths. Consequently, the strength of global inhibition reveals itself as a strong signature for amplification and a potential control mechanism to switch dynamical regimes.
Our results shed light on how biological networks, i.e., networks constrained by Dale’s law, may be optimised for specific dynamical regimes. The architecture of a neuronal network lies at the heart of its dynamic behaviour, or in other words, the function of the system. However, the relationship between changes in the architecture and their effect on the dynamics, a structure-function problem, is still poorly understood. Here, we approach this problem by studying a rotated connectivity matrix that is easier to manipulate and interpret. We focus our analysis on a dynamical regime that arises from the biological property that neurons are usually not connected symmetrically, which may result in a non-normal connectivity matrix. Our techniques unveil distinct expressions of the dynamical regime of non-normal amplification. Moreover, we devise a way to analyse the geometry of the dynamics: we assign a single number to a network that quantifies how dissimilar its repertoire of behaviours can be. Finally, using our approach, we can close the loop back to the original neuronal architecture and find that biologically plausible networks use the strength of inhibition and excitatory-to-inhibitory connectivity strength to navigate the different dynamical regimes of non-normal amplification.
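The role of strictly upper triangular (feedforward, Schur) structure in transient amplification can be seen in a two-dimensional example with a closed-form matrix exponential; the feedforward strength k is illustrative:

```python
import numpy as np

# Stable non-normal system dx/dt = A x with A = [[-1, k], [0, -1]]:
# both eigenvalues are -1, but the strictly upper triangular (feedforward)
# part feeds mode 2 into mode 1. The matrix exponential has the closed form
# e^{At} = e^{-t} [[1, k t], [0, 1]].
k = 10.0

def propagator(t):
    return np.exp(-t) * np.array([[1.0, k * t], [0.0, 1.0]])

ts = np.linspace(0.0, 3.0, 31)
norms = [np.linalg.norm(propagator(t), 2) for t in ts]
print(max(norms))  # > 1: transient growth despite both eigenvalues being stable
print(norms[0])    # 1.0 at t = 0, before any amplification
```

A normal matrix with the same eigenvalues would give monotone decay (norm e^{-t} at every t); the transient growth here comes entirely from the feedforward term, which is the effect the paper dissects via the Schur basis.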
Collapse
|
29
|
Wang B, Aljadeff J. Multiplicative Shot-Noise: A New Route to Stability of Plastic Networks. PHYSICAL REVIEW LETTERS 2022; 129:068101. [PMID: 36018633 DOI: 10.1103/physrevlett.129.068101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 06/30/2022] [Indexed: 06/15/2023]
Abstract
Fluctuations of synaptic weights, among many other physical, biological, and ecological quantities, are driven by coincident events of two "parent" processes. We propose a multiplicative shot-noise model that can capture the behaviors of a broad range of such natural phenomena, and analytically derive an approximation that accurately predicts its statistics. We apply our results to study the effects of a multiplicative synaptic plasticity rule that was recently extracted from measurements in physiological conditions. Using mean-field theory analysis and network simulations, we investigate how this rule shapes the connectivity and dynamics of recurrent spiking neural networks. The multiplicative plasticity rule is shown to support efficient learning of input stimuli, and it gives a stable, unimodal synaptic-weight distribution with a large fraction of strong synapses. The strong synapses remain stable over long times but do not "run away." Our results suggest that multiplicative shot-noise offers a new route to understanding the tradeoff between flexibility and stability in neural circuits and other dynamic networks.
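A minimal caricature of the multiplicative shot-noise idea, in which a synaptic weight is multiplied at coincident events of two parent Poisson processes (illustrative rates, decay, and update factor; not the paper's fitted plasticity rule):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "parent" Poisson spike trains (e.g., pre- and postsynaptic activity).
rate_pre, rate_post = 5.0, 5.0   # Hz (illustrative values)
T, dt = 200.0, 0.001             # seconds
n_steps = int(T / dt)
pre = rng.random(n_steps) < rate_pre * dt
post = rng.random(n_steps) < rate_post * dt

# Multiplicative shot-noise: the weight is multiplied by (1 + a) at coincident
# events of the two parents, and slowly decays otherwise (a toy caricature).
a, decay = 0.05, 1e-3
w = np.empty(n_steps)
w[0] = 1.0
for t in range(1, n_steps):
    w[t] = w[t - 1] * (1.0 - decay * dt)
    if pre[t] and post[t]:
        w[t] *= (1.0 + a)

print(f"final weight: {w[-1]:.3f}, coincidences: {int((pre & post).sum())}")
```

Because every update is multiplicative, the weight stays positive and its jump size scales with its current value, which is what produces the heavy-yet-stable weight statistics the abstract describes.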
Collapse
Affiliation(s)
- Bin Wang
- Department of Physics, University of California San Diego, La Jolla, California 92093, USA
| | - Johnatan Aljadeff
- Department of Neurobiology, University of California San Diego, La Jolla, California 92093, USA
| |
Collapse
|
30
|
Wardak A, Gong P. Extended Anderson Criticality in Heavy-Tailed Neural Networks. PHYSICAL REVIEW LETTERS 2022; 129:048103. [PMID: 35939004 DOI: 10.1103/physrevlett.129.048103] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/08/2022] [Accepted: 07/05/2022] [Indexed: 06/15/2023]
Abstract
We investigate the emergence of complex dynamics in networks with heavy-tailed connectivity by developing a non-Hermitian random matrix theory. We uncover the existence of an extended critical regime of spatially multifractal fluctuations between the quiescent and active phases. This multifractal critical phase combines features of localization and delocalization and differs from the edge of chaos in classical networks by the appearance of universal hallmarks of Anderson criticality over an extended region in phase space. We show that the rich nonlinear response properties of the extended critical regime can account for a variety of neural dynamics such as the diversity of timescales, providing a computational advantage for persistent classification in a reservoir setting.
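The contrast between Gaussian and heavy-tailed connectivity can be seen directly in the eigenvalue spectrum. A sketch (illustrative sizes; Cauchy entries stand in for the heavy-tailed ensemble):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500

# Gaussian connectivity: eigenvalues fill a disk of radius g (circular law).
g = 1.0
W_gauss = rng.normal(0.0, g / np.sqrt(N), (N, N))
ev_gauss = np.linalg.eigvals(W_gauss)

# Heavy-tailed (Cauchy) connectivity, scaled by 1/N: a few huge entries
# dominate and the spectrum develops far outliers, unlike the compact disk.
W_cauchy = rng.standard_cauchy((N, N)) / N
ev_cauchy = np.linalg.eigvals(W_cauchy)

print(f"Gaussian spectral radius: {np.abs(ev_gauss).max():.2f}")
print(f"Cauchy spectral radius:   {np.abs(ev_cauchy).max():.2f}")
```

The non-compact, outlier-dominated Cauchy spectrum is the random-matrix signature behind the extended critical regime discussed in the abstract.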
Collapse
Affiliation(s)
- Asem Wardak
- School of Physics, University of Sydney, New South Wales 2006, Australia
| | - Pulin Gong
- School of Physics, University of Sydney, New South Wales 2006, Australia
| |
Collapse
|
31
|
Mazzucato L. Neural mechanisms underlying the temporal organization of naturalistic animal behavior. eLife 2022; 11:e76577. [PMID: 35792884 PMCID: PMC9259028 DOI: 10.7554/elife.76577] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 06/07/2022] [Indexed: 12/17/2022] Open
Abstract
Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, with variability arising from at least three sources: hierarchical, contextual, and stochastic. What neural mechanisms and computational principles underlie such intricate temporal features? In this review, we provide a critical assessment of the existing behavioral and neurophysiological evidence for these sources of temporal variability in naturalistic behavior. Recent research converges on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising via coordinated interactions between mesoscopic neural circuits. We highlight the crucial role played by structural heterogeneities as well as noise from mesoscopic feedback loops in regulating flexible behavior. We assess the shortcomings and missing links in the current theoretical and experimental literature and propose new directions of investigation to fill these gaps.
Collapse
Affiliation(s)
- Luca Mazzucato
- Institute of Neuroscience, Departments of Biology, Mathematics and Physics, University of Oregon, Eugene, United States
| |
Collapse
|
32
|
Peng X, Lin W. Complex Dynamics of Noise-Perturbed Excitatory-Inhibitory Neural Networks With Intra-Correlative and Inter-Independent Connections. Front Physiol 2022; 13:915511. [PMID: 35812336 PMCID: PMC9263264 DOI: 10.3389/fphys.2022.915511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 05/09/2022] [Indexed: 11/24/2022] Open
Abstract
Real neural systems usually contain two types of neurons, i.e., excitatory neurons and inhibitory ones. Analytical and numerical characterization of the dynamics induced by different types of interactions among these neurons is beneficial to understanding the physiological functions of the brain. Here, we articulate a model of noise-perturbed random neural networks containing both excitatory and inhibitory (E&I) populations. In particular, we take into account neurons that are intra-correlatively and inter-independently connected, which differs from most existing E&I models that consider only independently connected neurons. By employing the typical mean-field theory, we obtain an equivalent two-dimensional system with an input of a stationary Gaussian process. Investigating the stationary autocorrelation functions of the obtained system, we analytically find the conditions on the parameters under which synchronized behaviors between the two populations emerge. Taking the maximal Lyapunov exponent as an index, we also find different critical values of the coupling strength coefficients for chaotic excitatory neurons and for chaotic inhibitory ones. Interestingly, we reveal that noise is able to suppress chaotic dynamics of random neural networks with neurons in two populations, while an appropriate correlation coefficient in the intra-coupling strengths can enhance the occurrence of chaos. Finally, we also detect a previously reported phenomenon in which a parameter region corresponds to neither linearly stable nor chaotic dynamics; the size of this region, however, crucially depends on the populations’ parameters.
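The maximal Lyapunov exponent used here as a chaos index can be estimated by tracking the divergence of two nearby trajectories. A sketch for the classic single-population random rate network (a simplification of the paper's two-population E&I model; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, g = 200, 2.0  # g > 1 puts the classic random rate network in the chaotic regime
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
dt, n_steps = 0.05, 4000

def step(x):
    # Euler step of dx/dt = -x + J tanh(x)
    return x + dt * (-x + J @ np.tanh(x))

# Benettin-style estimate: evolve a perturbed copy and renormalize its offset.
x = rng.normal(size=N)
d0 = 1e-8
pert = rng.normal(size=N)
pert *= d0 / np.linalg.norm(pert)
y = x + pert
log_growth = 0.0
for _ in range(n_steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / d0)
    y = x + (y - x) * (d0 / d)  # rescale the offset back to d0
lyap = log_growth / (n_steps * dt)
print(f"estimated maximal Lyapunov exponent: {lyap:.3f}")
```

A positive estimate indicates chaos; sweeping the coupling strength (and, in the two-population setting, the noise level and intra-correlation) is how the critical values in the abstract are located.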
Collapse
Affiliation(s)
- Xiaoxiao Peng
- Shanghai Center for Mathematical Sciences, School of Mathematical Sciences, and LMNS, Fudan University, Shanghai, China
- Research Institute of Intelligent Complex Systems and Center for Computational Systems Biology, Fudan University, Shanghai, China
- *Correspondence: Xiaoxiao Peng; Wei Lin
| | - Wei Lin
- Shanghai Center for Mathematical Sciences, School of Mathematical Sciences, and LMNS, Fudan University, Shanghai, China
- Research Institute of Intelligent Complex Systems and Center for Computational Systems Biology, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, and Institutes of Brain Science, Fudan University, Shanghai, China
- *Correspondence: Xiaoxiao Peng; Wei Lin
| |
Collapse
|
33
|
Knoll G, Lindner B. Information transmission in recurrent networks: Consequences of network noise for synchronous and asynchronous signal encoding. Phys Rev E 2022; 105:044411. [PMID: 35590546 DOI: 10.1103/physreve.105.044411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 03/04/2022] [Indexed: 06/15/2023]
Abstract
Information about natural time-dependent stimuli encoded by the sensory periphery or communication between cortical networks may span a large frequency range or be localized to a smaller frequency band. Biological systems have been shown to multiplex such disparate broadband and narrow-band signals and then discriminate them in later populations by employing either an integration (low-pass) or coincidence detection (bandpass) encoding strategy. Analytical expressions have been developed for both encoding methods in feedforward populations of uncoupled neurons and confirm that the integration of a population's output low-pass filters the information, whereas synchronous output encodes less information overall and retains signal information in a selected frequency band. The present study extends the theory to recurrent networks and shows that recurrence may sharpen the synchronous bandpass filter. The frequency of the pass band is significantly influenced by the synaptic strengths, especially for inhibition-dominated networks. Synchronous information transfer is also increased when network models take into account heterogeneity that arises from the stochastic distribution of the synaptic weights.
Collapse
Affiliation(s)
- Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
| |
Collapse
|
34
|
Baron JW, Jewell TJ, Ryder C, Galla T. Eigenvalues of Random Matrices with Generalized Correlations: A Path Integral Approach. PHYSICAL REVIEW LETTERS 2022; 128:120601. [PMID: 35394295 DOI: 10.1103/physrevlett.128.120601] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 01/10/2022] [Accepted: 02/23/2022] [Indexed: 06/14/2023]
Abstract
Random matrix theory allows one to deduce the eigenvalue spectrum of a large matrix given only statistical information about its elements. Such results provide insight into what factors contribute to the stability of complex dynamical systems. In this Letter, we study the eigenvalue spectrum of an ensemble of random matrices with correlations between any pair of elements. To this end, we introduce an analytical method that maps the resolvent of the random matrix onto the response functions of a linear dynamical system. The response functions are then evaluated using a path integral formalism, enabling us to make deductions about the eigenvalue spectrum. Our central result is a simple, closed-form expression for the leading eigenvalue of a large random matrix with generalized correlations. This formula demonstrates that correlations between matrix elements that are not diagonally opposite, which are often neglected, can have a significant impact on stability.
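The simplest special case of such generalized correlations is the classic elliptic ensemble, in which only diagonally opposite pairs (w_ij, w_ji) are correlated; even there, the correlation visibly shifts the leading eigenvalue. A sketch (illustrative construction, not the Letter's path-integral method):

```python
import numpy as np

rng = np.random.default_rng(4)
N, tau = 1000, 0.5  # tau = correlation between w_ij and w_ji

# Mix symmetric and antisymmetric Gaussian parts so that corr(w_ij, w_ji) = tau.
G1 = rng.normal(size=(N, N))
G2 = rng.normal(size=(N, N))
S = (G1 + G1.T) / np.sqrt(2)   # symmetric part
A = (G2 - G2.T) / np.sqrt(2)   # antisymmetric part
W = (np.sqrt((1 + tau) / 2) * S + np.sqrt((1 - tau) / 2) * A) / np.sqrt(N)

ev = np.linalg.eigvals(W)
# Elliptic law: the bulk fills an ellipse with real semi-axis 1 + tau and
# imaginary semi-axis 1 - tau, so correlations push the leading eigenvalue out.
print(f"max Re(lambda) ~ {ev.real.max():.2f} (elliptic law predicts {1 + tau:.2f})")
print(f"max |Im(lambda)| ~ {np.abs(ev.imag).max():.2f} (prediction {1 - tau:.2f})")
```

The Letter's closed-form result generalizes this picture to correlations between arbitrary pairs of elements, not just diagonally opposite ones.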
Collapse
Affiliation(s)
- Joseph W Baron
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
| | - Thomas Jun Jewell
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
| | - Christopher Ryder
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
| | - Tobias Galla
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
- Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, United Kingdom
| |
Collapse
|
35
|
Zhivkoplias EK, Vavulov O, Hillerton T, Sonnhammer ELL. Generation of Realistic Gene Regulatory Networks by Enriching for Feed-Forward Loops. Front Genet 2022; 13:815692. [PMID: 35222536 PMCID: PMC8872634 DOI: 10.3389/fgene.2022.815692] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Accepted: 01/13/2022] [Indexed: 11/13/2022] Open
Abstract
The regulatory relationships between genes and proteins in a cell form a gene regulatory network (GRN) that controls the cellular response to changes in the environment. A number of inference methods to reverse engineer the original GRN from large-scale expression data have recently been developed. However, the absence of ground-truth GRNs when evaluating the performance makes realistic simulations of GRNs necessary. One aspect of this is that local network motif analysis of real GRNs indicates that the feed-forward loop (FFL) is significantly enriched. To simulate this properly, we developed a novel motif-based preferential attachment algorithm, FFLatt, which outperformed the popular GeneNetWeaver network generation tool in reproducing the FFL motif occurrence observed in literature-based biological GRNs. It also preserves important topological properties such as scale-free topology, sparsity, and average in/out-degree per node. We conclude that FFLatt is well-suited as a network generation module for a benchmarking framework with the aim to provide fair and robust performance evaluation of GRN inference methods.
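Counting feed-forward loops (FFLs), the motif FFLatt enriches for, is straightforward on a directed adjacency structure. A sketch in plain Python (toy random graph, not the FFLatt algorithm itself; assumes no self-loops):

```python
import random

random.seed(5)

# Toy directed graph: adjacency as a dict mapping node -> set of out-neighbours.
n, p = 60, 0.05
adj = {i: {j for j in range(n) if j != i and random.random() < p} for i in range(n)}

def count_ffl(adj):
    """Count feed-forward loop sub-patterns x -> y, x -> z, y -> z.

    Counts sub-patterns rather than induced motifs, and assumes no self-loops.
    """
    count = 0
    for x in adj:
        for y in adj[x]:
            count += len(adj[x] & adj[y])  # each z in both sets completes an FFL
    return count

print("feed-forward loops:", count_ffl(adj))
```

Comparing this count against a degree-matched randomized ensemble is the standard way to decide whether FFLs are enriched, which is the property FFLatt is built to reproduce.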
Collapse
Affiliation(s)
- Erik K. Zhivkoplias
- Department of Biochemistry and Biophysics, Science for Life Laboratory, Stockholm University, Solna, Sweden
| | - Oleg Vavulov
- Bioinformatics Institute, St. Petersburg, Russia
| | - Thomas Hillerton
- Department of Biochemistry and Biophysics, Science for Life Laboratory, Stockholm University, Solna, Sweden
| | - Erik L. L. Sonnhammer
- Department of Biochemistry and Biophysics, Science for Life Laboratory, Stockholm University, Solna, Sweden
- *Correspondence: Erik L. L. Sonnhammer,
| |
Collapse
|
36
|
Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. [PMID: 35049496 PMCID: PMC8776256 DOI: 10.7554/elife.68422] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination also among neurons that are most likely not even connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: We find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
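For the linearized rate dynamics underlying such covariance analyses, the stationary covariance matrix follows from a Lyapunov equation, so the coordination produced by unstructured connectivity can be sketched in a few lines (illustrative linear model, not the paper's spiking-network theory):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
N, g = 100, 0.8  # g < 1 keeps the linear network stable
W = rng.normal(0.0, g / np.sqrt(N), (N, N))
A = W - np.eye(N)  # dynamics dx/dt = A x + noise

# Stationary covariance C solves the Lyapunov equation A C + C A^T + D = 0.
D = np.eye(N)  # independent unit-variance noise on every neuron
C = solve_continuous_lyapunov(A, -D)

# Even with unstructured (random) connectivity, off-diagonal covariances are
# broadly distributed: multi-synaptic paths generate coordination between
# neurons that are not directly connected.
off = C[~np.eye(N, dtype=bool)]
print(f"mean variance: {np.diag(C).mean():.2f}")
print(f"covariance spread (std of off-diagonal): {off.std():.3f}")
```

As the effective coupling g approaches one, the off-diagonal spread grows sharply, which is the mechanism the paper invokes for long-range cooperation from short-range connections.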
Collapse
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
| | - Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
37
|
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. PHYSICAL REVIEW. X 2022; 12:011011. [PMID: 36545030 PMCID: PMC9762509 DOI: 10.1103/physrevx.12.011011] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter-initialization choices for ML practitioners.
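A minimal caricature of a gated rate RNN in the spirit of this work: one gate multiplies the decay term (controlling timescales) and another multiplies the recurrent input (controlling dimensionality). All weights and parameters below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Random weights for the state and for the two gates.
g = 1.5
W = rng.normal(0.0, g / np.sqrt(N), (N, N))
Wz = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
Wr = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

def step(x, dt=0.1):
    z = sigmoid(Wz @ x)  # update gate: multiplies the decay -> controls timescales
    r = sigmoid(Wr @ x)  # output gate: multiplies the input -> controls dimensionality
    return x + dt * z * (-x + W @ (r * np.tanh(x)))

x = rng.normal(size=N)
for _ in range(500):
    x = step(x)
print(f"state norm after 500 steps: {np.linalg.norm(x):.2f}")
```

Pushing the update gate toward zero slows the effective dynamics toward the marginally stable integrator regime, while the output gate scales the effective recurrent gain, the two control knobs the abstract highlights.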
Collapse
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
| | - Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
| | - David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
| |
Collapse
|
38
|
The Mean Field Approach for Populations of Spiking Neurons. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1359:125-157. [DOI: 10.1007/978-3-030-89439-9_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Mean field theory is a device to analyze the collective behavior of a dynamical system comprising many interacting particles. The theory allows one to reduce the behavior of the system to the properties of a handful of parameters. In neural circuits, these parameters are typically the firing rates of distinct, homogeneous subgroups of neurons. Knowledge of the firing rates under conditions of interest can reveal essential information on both the dynamics of neural circuits and the way they can subserve brain function. The goal of this chapter is to provide an elementary introduction to the mean field approach for populations of spiking neurons. We introduce the general idea in networks of binary neurons, starting from the most basic results and then generalizing to more relevant situations. This allows us to derive the mean field equations in a simplified setting. We then derive the mean field equations for populations of integrate-and-fire neurons. An effort is made to derive the main equations of the theory using only elementary methods from calculus and probability theory. The chapter ends with a discussion of the assumptions of the theory and some of the consequences of violating those assumptions. This discussion includes an introduction to balanced and metastable networks and a brief catalogue of successful applications of the mean field approach to the study of neural circuits.
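The chapter's starting point, a self-consistent firing rate for a population of binary neurons, can be sketched in a few lines. Here an inhibition-dominated toy population with external drive is used so the fixed point is unique (all parameters are illustrative):

```python
import math

# Mean-field self-consistency for one inhibitory binary population with external
# drive: the input to a neuron is approximately Gaussian with mean
# mu_ext - K*J*m and variance K*J^2*m*(1 - m); the neuron is active when the
# input exceeds the threshold theta, giving the fixed-point equation m = F(m).
K, J, mu_ext, theta = 1000, 0.05, 30.0, 0.0

def gauss_tail(x):
    """P(Z > x) for a standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def F(m):
    mu = mu_ext - K * J * m
    sigma = math.sqrt(K * J**2 * m * (1 - m)) + 1e-12
    return gauss_tail((theta - mu) / sigma)

# F is decreasing (more activity -> more inhibition), so the fixed point is
# unique; find it by bisection on F(m) - m.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > mid else (lo, mid)
m_star = 0.5 * (lo + hi)
print(f"self-consistent activity: m* = {m_star:.3f}")
```

The same logic, with one rate equation per homogeneous subgroup and a transfer function appropriate to the neuron model, gives the multi-population mean field equations derived in the chapter.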
Collapse
|
39
|
Baron JW. Consensus, polarization, and coexistence in a continuous opinion dynamics model with quenched disorder. Phys Rev E 2021; 104:044309. [PMID: 34781547 DOI: 10.1103/physreve.104.044309] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Accepted: 10/06/2021] [Indexed: 12/19/2022]
Abstract
A model of opinion dynamics is introduced in which each individual's opinion is measured on a bounded continuous spectrum. Each opinion is influenced heterogeneously by every other opinion in the population. It is demonstrated that consensus, polarization and a spread of moderate opinions are all possible within this model. Using dynamic mean-field theory, we are able to identify the statistical features of the interactions between individuals that give rise to each of the aforementioned emergent phenomena. The nature of the transitions between each of the observed macroscopic states is also studied. It is demonstrated that heterogeneity of interactions between individuals can lead to polarization, that mostly antagonistic or contrarian interactions can promote consensus at a moderate opinion, and that mostly reinforcing interactions encourage the majority to take an extreme opinion.
Collapse
Affiliation(s)
- Joseph W Baron
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), 07122 Palma de Mallorca, Spain
| |
Collapse
|
40
|
van Meegen A, Kühn T, Helias M. Large-Deviation Approach to Random Recurrent Neuronal Networks: Parameter Inference and Fluctuation-Induced Transitions. PHYSICAL REVIEW LETTERS 2021; 127:158302. [PMID: 34678014 DOI: 10.1103/physrevlett.127.158302] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 07/05/2021] [Accepted: 08/19/2021] [Indexed: 06/13/2023]
Abstract
We here unify the field-theoretical approach to neuronal networks with large deviations theory. For a prototypical random recurrent network model with continuous-valued units, we show that the effective action is identical to the rate function and derive the latter using field theory. This rate function takes the form of a Kullback-Leibler divergence which enables data-driven inference of model parameters and calculation of fluctuations beyond mean-field theory. Lastly, we expose a regime with fluctuation-induced transitions between mean-field solutions.
Collapse
Affiliation(s)
- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Institute of Zoology, University of Cologne, 50674 Cologne, Germany
| | - Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52074 Aachen, Germany
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52074 Aachen, Germany
| |
Collapse
|
41
|
Gozel O, Gerstner W. A functional model of adult dentate gyrus neurogenesis. eLife 2021; 10:66463. [PMID: 34137370 PMCID: PMC8260225 DOI: 10.7554/elife.66463] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 06/16/2021] [Indexed: 12/27/2022] Open
Abstract
In adult dentate gyrus neurogenesis, the link between maturation of newborn neurons and their function, such as behavioral pattern separation, has remained puzzling. By analyzing a theoretical model, we show that the switch from excitation to inhibition of the GABAergic input onto maturing newborn cells is crucial for their proper functional integration. When the GABAergic input is excitatory, cooperativity drives the growth of synapses such that newborn cells become sensitive to stimuli similar to those that activate mature cells. When GABAergic input switches to inhibitory, competition pushes the configuration of synapses onto newborn cells toward stimuli that are different from previously stored ones. This enables the maturing newborn cells to code for concepts that are novel, yet similar to familiar ones. Our theory of newborn cell maturation explains both how adult-born dentate granule cells integrate into the preexisting network and why they promote separation of similar but not distinct patterns.
Collapse
Affiliation(s)
- Olivia Gozel
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, United States
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, United States
| | - Wulfram Gerstner
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
42
|
An emergent autonomous flow for mean-field spin glasses. Probab Theory Relat Fields 2021. [DOI: 10.1007/s00440-021-01040-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
43
|
Metz FL, Neri I. Localization and Universality of Eigenvectors in Directed Random Graphs. PHYSICAL REVIEW LETTERS 2021; 126:040604. [PMID: 33576654 DOI: 10.1103/physrevlett.126.040604] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 12/07/2020] [Accepted: 01/08/2021] [Indexed: 06/12/2023]
Abstract
Although the spectral properties of random graphs have been a long-standing focus of network theory, the properties of right eigenvectors of directed graphs have so far eluded an exact analytic treatment. We present a general theory for the statistics of the right eigenvector components in directed random graphs with a prescribed degree distribution and with randomly weighted links. We obtain exact analytic expressions for the inverse participation ratio and show that right eigenvectors of directed random graphs with a small average degree are localized. Remarkably, if the fourth moment of the degree distribution is finite, then the critical mean degree of the localization transition is independent of the degree fluctuations, which is different from localization in undirected graphs that is governed by degree fluctuations. We also show that in the high connectivity limit the distribution of the right eigenvector components is solely determined by the degree distribution. For delocalized eigenvectors, we recover in this limit the universal results from standard random matrix theory that are independent of the degree distribution, while for localized eigenvectors the eigenvector distribution depends on the degree distribution.
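The inverse participation ratio (IPR) used here distinguishes localized from delocalized eigenvectors. A sketch for a directed Erdős–Rényi graph with random link weights (illustrative size and mean degree):

```python
import numpy as np

rng = np.random.default_rng(8)
N, c = 500, 4.0  # N nodes, mean degree c (small degree favours localization)

# Directed Erdős–Rényi graph with random Gaussian link weights.
A = (rng.random((N, N)) < c / N) * rng.normal(size=(N, N))
eigvals, V = np.linalg.eig(A)

# Inverse participation ratio of each right eigenvector (columns of V):
# IPR ~ 1/N for delocalized vectors, of order 1 for localized ones.
Vn = np.abs(V) ** 2
Vn /= Vn.sum(axis=0)          # normalize each eigenvector's weight profile
ipr = (Vn ** 2).sum(axis=0)

print(f"mean IPR: {ipr.mean():.3f} (delocalized baseline 1/N = {1 / N:.3f})")
print(f"most localized eigenvector IPR: {ipr.max():.2f}")
```

Tracking how the IPR distribution changes with the mean degree c is a simple numerical probe of the localization transition the paper characterizes analytically.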
Collapse
Affiliation(s)
- Fernando Lucas Metz
- Physics Institute, Federal University of Rio Grande do Sul, 91501-970 Porto Alegre, Brazil and London Mathematical Laboratory, 18 Margravine Gardens, London W6 8RH, United Kingdom
| | - Izaak Neri
- Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
| |
Collapse
|
44
|
Kuśmierz Ł, Ogawa S, Toyoizumi T. Edge of Chaos and Avalanches in Neural Networks with Heavy-Tailed Synaptic Weight Distribution. PHYSICAL REVIEW LETTERS 2020; 125:028101. [PMID: 32701351 DOI: 10.1103/physrevlett.125.028101] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/03/2020] [Accepted: 05/26/2020] [Indexed: 06/11/2023]
Abstract
We propose an analytically tractable neural connectivity model with power-law distributed synaptic strengths. When threshold neurons with biologically plausible number of incoming connections are considered, our model features a continuous transition to chaos and can reproduce biologically relevant low activity levels and scale-free avalanches, i.e., bursts of activity with power-law distributions of sizes and lifetimes. In contrast, the Gaussian counterpart exhibits a discontinuous transition to chaos and thus cannot be poised near the edge of chaos. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results suggest that heavy-tailed synaptic distribution may form a weakly informative sparse-connectivity prior that can be useful in biological and artificial adaptive systems.
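A toy version of such a network: sparse binary threshold neurons with a biologically plausible number K of incoming connections and Cauchy-distributed synaptic strengths (illustrative parameters; with a zero threshold and symmetric weights the mean activity hovers near one half, and raising the threshold lowers it toward the sparse-activity regime the paper studies):

```python
import numpy as np

rng = np.random.default_rng(9)
N, K = 1000, 100  # N neurons, K incoming connections each

# Sparse connectivity with power-law (Cauchy) distributed synaptic strengths.
W = np.zeros((N, N))
for i in range(N):
    pre = rng.choice(N, size=K, replace=False)
    W[i, pre] = rng.standard_cauchy(K) / K

theta = 0.0
x = (rng.random(N) < 0.5).astype(float)  # random initial binary state
rates = []
for _ in range(200):
    x = (W @ x > theta).astype(float)     # synchronous threshold update
    rates.append(x.mean())

print(f"mean activity over last 100 steps: {np.mean(rates[100:]):.3f}")
```

Perturbing one neuron's state and tracking how the flip spreads across updates is the natural next step for measuring avalanche sizes and probing the distance to the edge of chaos.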
Affiliation(s)
- Łukasz Kuśmierz
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Shun Ogawa
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
45
Scale free topology as an effective feedback system. PLoS Comput Biol 2020; 16:e1007825. [PMID: 32392249 PMCID: PMC7241857 DOI: 10.1371/journal.pcbi.1007825] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 05/21/2020] [Accepted: 03/26/2020] [Indexed: 12/13/2022] Open
Abstract
Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity for dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights into dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study the dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale-free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing versus incoming scale-free degree distributions. Combining feedback analysis with mean-field theory predicts a transition between convergent and divergent dynamics, which is corroborated by numerical simulations. Furthermore, these results highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.

Nature abounds with complex networks of interacting elements: from the proteins in our cells, through neural networks in our brains, to species interacting in ecosystems. In all of these fields, the relation between network structure and dynamics is an important research question.
A recurring feature of natural networks is their heterogeneous structure: individual elements exhibit a huge diversity of connectivity patterns, which complicates the understanding of network dynamics. To address this problem, we devised a simplified approximation for complex structured networks which captures their dynamical properties. Separating out the largest “hubs”—a small number of nodes with disproportionately high connectivity—we represent them by a single node linked to the rest of the network. This enables us to borrow concepts from control theory, where a system’s output is linked back to itself forming a feedback loop. In this analogy, hubs in heterogeneous networks implement a feedback circuit with the rest of the network. The analogy reveals how these hubs can coordinate the network and drive it more easily towards stable states. Our approach enables analyzing dynamical properties of heterogeneous networks, which is difficult to achieve with existing techniques. It is potentially applicable to many fields where heterogeneous networks are important.
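The hub-lumping construction described above can be sketched concretely. The following is a hedged illustration, not the paper's exact recipe: the nodes with the largest out-degree are merged into a single effective hub whose couplings are the sums of the originals; the number of hubs, the Pareto-distributed out-degrees, and the weighting scheme are all demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, h = 300, 5                         # network size, number of hubs to lump

# Scale-free-like out-degrees: node j sends out_deg[j] weighted links at random.
out_deg = np.minimum((rng.pareto(1.5, N) + 1).astype(int) * 2, N - 1)
W = np.zeros((N, N))
for j in range(N):
    targets = rng.choice(N, size=out_deg[j], replace=False)
    W[targets, j] = rng.standard_normal(out_deg[j])

hubs = np.argsort(out_deg)[-h:]               # indices of the h largest hubs
rest = np.setdiff1d(np.arange(N), hubs)

# Reduced (N-h+1)-node system: one effective hub plus the remaining nodes.
W_rr = W[np.ix_(rest, rest)]                  # rest -> rest couplings, unchanged
hub_out = W[np.ix_(rest, hubs)].sum(axis=1)   # effective hub -> rest
hub_in = W[np.ix_(hubs, rest)].sum(axis=0)    # rest -> effective hub
W_red = np.zeros((len(rest) + 1, len(rest) + 1))
W_red[:-1, :-1] = W_rr
W_red[:-1, -1] = hub_out
W_red[-1, :-1] = hub_in
print(W_red.shape)
```

In the feedback-circuit picture, the last row and column of `W_red` play the role of the loop closing the rest of the network through the effective hub.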
46
Sederberg A, Nemenman I. Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons. PLoS Comput Biol 2020; 16:e1007875. [PMID: 32379751 PMCID: PMC7237045 DOI: 10.1371/journal.pcbi.1007875] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 05/19/2020] [Accepted: 04/14/2020] [Indexed: 01/12/2023] Open
Abstract
Modern recording methods enable sampling of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to theoretical models. In the context of decision making, functional connectivity between choice-selective cortical neurons was recently reported. The straightforward interpretation of these data suggests the existence of selective pools of inhibitory and excitatory neurons. Computationally investigating an alternative mechanism for these experimental observations, we find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and the same ability of excitatory and inhibitory populations to predict choice, as in experimental observations. Further, we predict that, for this task, there are no anatomically defined subpopulations of neurons representing choice, and that choice preference of a particular neuron changes with the details of the task. We suggest that distributed stimulus selectivity and functional organization in population codes could be emergent properties of randomly connected networks.
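The core mechanism summarized above, selectivity emerging from fixed random connectivity, can be illustrated with a hedged toy sketch (not the authors' spiking model): a linear rate network with random excitatory-inhibitory connectivity, driven by two different stimuli, yields heterogeneous per-neuron preferences without any tuned subpopulations. The size, gain, and linear steady-state dynamics are demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, g = 200, 0.5

# Dale's-law connectivity: first-half columns excitatory, second-half inhibitory.
W = g * np.abs(rng.standard_normal((N, N))) / np.sqrt(N)
W[:, N // 2:] *= -1.0

def steady_state(h):
    """Linear rate network fixed point: r = (I - W)^{-1} h for stimulus h."""
    return np.linalg.solve(np.eye(N) - W, h)

h1, h2 = rng.standard_normal(N), rng.standard_normal(N)
selectivity = steady_state(h1) - steady_state(h2)   # per-neuron stimulus preference
# Recurrence spreads the stimulus difference across all cells with a wide
# range of signs and magnitudes, i.e., emergent mixed selectivity.
print(float(np.std(selectivity)))
```

The gain `g = 0.5` keeps the spectral radius below one so the steady state exists; individual neurons end up preferring either stimulus purely by chance.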
Affiliation(s)
- Audrey Sederberg
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
47
Ipsen JR, Peterson ADH. Consequences of Dale's law on the stability-complexity relationship of random neural networks. Phys Rev E 2020; 101:052412. [PMID: 32575310 DOI: 10.1103/physreve.101.052412] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 04/08/2020] [Indexed: 06/11/2023]
Abstract
In the study of randomly connected neural network dynamics, there is a phase transition from a simple state with few equilibria to a complex state in which the number of equilibria grows exponentially with the neuron population. Such phase transitions are often used to describe pathological brain state transitions observed in neurological diseases such as epilepsy. In this paper we investigate how more realistic heterogeneous network structures affect these phase transitions using techniques from random matrix theory. Specifically, we parametrize the network structure according to Dale's law and use the Kac-Rice formalism to compute the change in the number of equilibria when a phase transition occurs. We also examine the condition where the network is not balanced between excitation and inhibition, causing outliers to appear in the eigenspectrum. This enables us to compute the effects of different heterogeneous network connectivities on brain state transitions, which can provide insights into pathological brain dynamics.
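The eigenspectrum outlier mentioned above is easy to see numerically. A hedged illustration (sizes and gains are arbitrary, and this is not the paper's Kac-Rice computation): a random matrix obeying Dale's law, with each column purely excitatory or purely inhibitory, develops a large real eigenvalue detached from the bulk when excitation and inhibition are unbalanced.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300

def dale_matrix(n_exc):
    """Columns 0..n_exc-1 excitatory (positive entries), the rest inhibitory."""
    J = np.abs(rng.standard_normal((N, N))) / np.sqrt(N)
    J[:, n_exc:] *= -1.0
    return J

max_real_balanced = np.linalg.eigvals(dale_matrix(N // 2)).real.max()
max_real_unbalanced = np.linalg.eigvals(dale_matrix(int(0.8 * N))).real.max()
# The unbalanced network (80% excitatory) has a large positive real outlier;
# the balanced one keeps all eigenvalues near the bulk disk.
print(max_real_balanced, max_real_unbalanced)
```
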
Affiliation(s)
- J R Ipsen
- ARC Centre of Excellence for Mathematical and Statistical Frontiers, School of Mathematics and Statistics, University of Melbourne, 3010 Parkville, Victoria, Australia
- A D H Peterson
- Graeme Clarke Institute, University of Melbourne, 3053 Carlton, Victoria, Australia and Department of Medicine, St. Vincent's Hospital, University of Melbourne, 3065 Fitzroy, Victoria, Australia
48
Pandey A, Kumar A, Puri S. Finite-range Coulomb gas models. I. Some analytical results. Phys Rev E 2020; 101:022217. [PMID: 32168648 DOI: 10.1103/physreve.101.022217] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 01/31/2020] [Indexed: 11/07/2022]
Abstract
Dyson has shown an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. In this paper, we introduce finite-range Coulomb gas models as a generalization of the Dyson models with a finite range of eigenvalue interactions. As the range of interaction increases, there is a transition from Poisson statistics to classical random matrix statistics. These models yield distinct universality classes of random matrix ensembles. They also provide a theoretical framework to study banded random matrices, and dynamical systems whose matrix representations are banded.
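The Poisson-to-random-matrix crossover mentioned above can be checked on banded random matrices with a standard diagnostic, the mean consecutive-spacing ratio (roughly 0.39 for Poisson statistics, roughly 0.53 for the Gaussian orthogonal ensemble). This is an illustrative sketch, not the paper's Coulomb gas model; the band width and matrix size are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500

def symmetric_banded(b):
    """Real symmetric random matrix with entries only within the band |i-j| <= b."""
    A = rng.standard_normal((N, N))
    A = (A + A.T) / 2.0
    i, j = np.indices((N, N))
    A[np.abs(i - j) > b] = 0.0
    return A

def mean_r(A):
    """Mean ratio of consecutive level spacings, a standard RMT diagnostic."""
    E = np.sort(np.linalg.eigvalsh(A))
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

r_band = mean_r(symmetric_banded(2))      # narrow band: near the Poisson value
r_full = mean_r(symmetric_banded(N))      # full matrix: near the GOE value
print(r_band, r_full)
```

The spacing-ratio statistic is used here because, unlike raw spacings, it needs no unfolding of the spectral density.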
Affiliation(s)
- Akhilesh Pandey
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067, India
- Avanish Kumar
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067, India
- Sanjay Puri
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067, India
49
Cook N. The Circular Law for random regular digraphs. ANNALES DE L'INSTITUT HENRI POINCARÉ, PROBABILITÉS ET STATISTIQUES 2019. [DOI: 10.1214/18-aihp943] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
50
Hoseini MS, Rakela B, Flores-Ramirez Q, Hasenstaub AR, Alvarez-Buylla A, Stryker MP. Transplanted Cells Are Essential for the Induction But Not the Expression of Cortical Plasticity. J Neurosci 2019; 39:7529-7538. [PMID: 31391263 PMCID: PMC6750933 DOI: 10.1523/jneurosci.1430-19.2019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Revised: 07/25/2019] [Accepted: 08/01/2019] [Indexed: 01/31/2023] Open
Abstract
Transplantation of even a small number of embryonic inhibitory neurons from the medial ganglionic eminence (MGE) into postnatal visual cortex makes the host cortex lose responsiveness to an eye deprived of vision when the transplanted neurons reach the age of the normal critical period of activity-dependent ocular dominance (OD) plasticity. The transplant might induce OD plasticity in the host circuitry or might instead construct a parallel circuit of its own to suppress cortical responses to the deprived eye. We transplanted MGE neurons expressing either archaerhodopsin or channelrhodopsin into the visual cortex of both male and female mice, closed one eyelid for 4-5 d, and, as expected, observed transplant-induced OD plasticity. This plasticity was evident even when the activity of the transplanted cells was suppressed or enhanced optogenetically, demonstrating that the plasticity was produced by changes in the host visual cortex.

SIGNIFICANCE STATEMENT Interneuron transplantation into mouse V1 creates a window of heightened plasticity that is quantitatively and qualitatively similar to the normal critical period; that is, short-term occlusion of either eye markedly changes ocular dominance (OD). The underlying mechanism of this process is not known. Transplanted interneurons might either form a separate circuit to maintain the OD shift or might instead trigger changes in the host circuitry. We designed experiments to distinguish the two hypotheses. Our findings suggest that while inhibition produced by the transplanted cells triggers this form of plasticity, the host circuitry is entirely responsible for maintaining the OD shift.
Affiliation(s)
- Mahmood S Hoseini
- Center for Integrative Neuroscience, University of California, San Francisco, California 94158
- Department of Physiology, University of California, San Francisco, California 94143
- Benjamin Rakela
- Center for Integrative Neuroscience, University of California, San Francisco, California 94158
- Department of Physiology, University of California, San Francisco, California 94143
- Quetzal Flores-Ramirez
- Department of Neurological Surgery, University of California, San Francisco, California 94143
- Andrea R Hasenstaub
- Center for Integrative Neuroscience, University of California, San Francisco, California 94158
- Coleman Memorial Laboratory, University of California, San Francisco, California 94158
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California 94143
- Arturo Alvarez-Buylla
- Department of Neurological Surgery, University of California, San Francisco, California 94143
- The Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research, University of California, San Francisco, California 94143
- Michael P Stryker
- Center for Integrative Neuroscience, University of California, San Francisco, California 94158
- Department of Physiology, University of California, San Francisco, California 94143