51
Alt J, Erdős L, Krüger T, Nemish Y. Location of the spectrum of Kronecker random matrices. ANNALES DE L'INSTITUT HENRI POINCARÉ, PROBABILITÉS ET STATISTIQUES 2019. [DOI: 10.1214/18-aihp894] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
52
Beiran M, Ostojic S. Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks. PLoS Comput Biol 2019; 15:e1006893. [PMID: 30897092 PMCID: PMC6445477 DOI: 10.1371/journal.pcbi.1006893] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Revised: 04/02/2019] [Accepted: 02/19/2019] [Indexed: 11/19/2022] Open
Abstract
Neural activity in awake behaving animals exhibits a vast range of timescales that can be several fold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has however not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. 
Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
Affiliation(s)
- Manuel Beiran
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
53
Vesicular GABA Transporter Is Necessary for Transplant-Induced Critical Period Plasticity in Mouse Visual Cortex. J Neurosci 2019; 39:2635-2648. [PMID: 30705101 DOI: 10.1523/jneurosci.1253-18.2019] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Revised: 01/07/2019] [Accepted: 01/08/2019] [Indexed: 12/14/2022] Open
Abstract
The maturation of GABAergic inhibitory circuits is necessary for the onset of the critical period for ocular dominance plasticity (ODP) in the postnatal visual cortex (Hensch, 2005; Espinosa and Stryker, 2012). When it is deficient, the critical period does not start. When inhibitory maturation or signaling is precocious, it induces a precocious critical period. Heterochronic transplantation of GABAergic interneuron precursors derived from the medial ganglionic eminence (MGE) can induce a second period of functional plasticity in the visual cortex (Southwell et al., 2010). Although the timing of MGE transplantation-induced plasticity is dictated by the maturation of the transplanted cells, its mechanisms remain largely unknown. Here, we sought to test the effect of blocking vesicular GABA loading and subsequent release by transplanted interneurons on the ability to migrate, integrate, and induce plasticity in the host circuitry. We show that MGE cells taken from male and female donors that lack vesicular GABA transporter (Vgat) expression disperse and differentiate into somatostatin- and parvalbumin-expressing interneurons upon heterochronic transplantation in the postnatal mouse cortex. Although transplanted Vgat mutant interneurons come to express mature interneuron markers and display electrophysiological properties similar to those of control cells, their morphology is significantly more complex. Significantly, Vgat mutant MGE transplants fail to induce ODP, demonstrating the pivotal role of vesicular GABAergic transmission for MGE transplantation-induced plasticity in the postnatal mouse visual cortex. SIGNIFICANCE STATEMENT: Embryonic inhibitory neurons thrive when transplanted into postnatal brains, migrating and differentiating in the host as they would have done if left in the donor. Once integrated into the host, these new neurons can have profound effects. 
For example, in the visual cortex, such neurons induce a second critical period of activity-dependent plasticity when they reach the appropriate stage of development. The cellular mechanism by which these transplanted GABAergic interneurons induce plasticity is unknown. Here, we show that transplanted interneurons that are unable to fill synaptic vesicles with GABA migrate and integrate into the host circuit, but they do not induce a second period of plasticity. These data suggest a role for the vesicular GABA transporter in transplantation-mediated plasticity.
54
van Meegen A, Lindner B. Self-Consistent Correlations of Randomly Coupled Rotators in the Asynchronous State. PHYSICAL REVIEW LETTERS 2018; 121:258302. [PMID: 30608814 DOI: 10.1103/physrevlett.121.258302] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2017] [Revised: 10/09/2018] [Indexed: 06/09/2023]
Abstract
We study a network of unidirectionally coupled rotators with independent identically distributed (i.i.d.) frequencies and i.i.d. coupling coefficients. Similar to biological networks, this system can attain an asynchronous state with pronounced temporal autocorrelations of the rotators. We derive differential equations for the self-consistent autocorrelation function that can be solved analytically in limit cases. For more involved scenarios, its numerical solution is confirmed by simulations of networks with Gaussian or sparsely distributed coupling coefficients. The theory is finally generalized for pulse-coupled units and tested on a standard model of computational neuroscience, a recurrent network of sparsely coupled exponential integrate-and-fire neurons.
Affiliation(s)
- Alexander van Meegen
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
55
Martí D, Brunel N, Ostojic S. Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks. Phys Rev E 2018; 97:062314. [PMID: 30011528 DOI: 10.1103/physreve.97.062314] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Indexed: 01/11/2023]
Abstract
Networks of randomly connected neurons are among the most popular models in theoretical neuroscience. The connectivity between neurons in the cortex is however not fully random, the simplest and most prominent deviation from randomness found in experimental data being the overrepresentation of bidirectional connections among pyramidal cells. Using numerical and analytical methods, we investigate the effects of partially symmetric connectivity on the dynamics in networks of rate units. We consider the two dynamical regimes exhibited by random neural networks: the weak-coupling regime, where the firing activity decays to a single fixed point unless the network is stimulated, and the strong-coupling or chaotic regime, characterized by internally generated fluctuating firing rates. In the weak-coupling regime, we compute analytically, for an arbitrary degree of symmetry, the autocorrelation of network activity in the presence of external noise. In the chaotic regime, we perform simulations to determine the timescale of the intrinsic fluctuations. In both cases, symmetry increases the characteristic asymptotic decay time of the autocorrelation function and therefore slows down the dynamics in the network.
Affiliation(s)
- Daniel Martí
- Laboratoire de Neurosciences Cognitives, Inserm UMR No. 960, Ecole Normale Supérieure, PSL Research University, 75230 Paris, France
- Nicolas Brunel
- Department of Statistics and Department of Neurobiology, University of Chicago, Chicago, Illinois 60637, USA; Department of Neurobiology and Department of Physics, Duke University, Durham, North Carolina 27710, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, Inserm UMR No. 960, Ecole Normale Supérieure, PSL Research University, 75230 Paris, France
56
Mastrogiuseppe F, Ostojic S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 2018; 99:609-623.e29. [PMID: 30057201 DOI: 10.1016/j.neuron.2018.07.003] [Citation(s) in RCA: 169] [Impact Index Per Article: 24.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 04/27/2018] [Accepted: 07/02/2018] [Indexed: 11/18/2022]
Abstract
Large-scale neural recordings have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons exhibit complex selectivity. Understanding how low-dimensional computations on mixed, distributed representations emerge from the structure of the recurrent connectivity and inputs to cortical networks is a major challenge. Here, we study a class of recurrent network models in which the connectivity is a sum of a random part and a minimal, low-dimensional structure. We show that, in such networks, the dynamics are low dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine minimal connectivity required to implement specific computations and find that the dynamical range and computational capacity quickly increase with the dimensionality of the connectivity structure. This framework produces testable experimental predictions for the relationship between connectivity, low-dimensional dynamics, and computational features of recorded neurons.
Affiliation(s)
- Francesca Mastrogiuseppe
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005 Paris, France; Laboratoire de Physique Statistique, CNRS UMR 8550, École Normale Supérieure - PSL Research University, 75005 Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005 Paris, France
57
Stone L. The feasibility and stability of large complex biological networks: a random matrix approach. Sci Rep 2018; 8:8246. [PMID: 29844420 PMCID: PMC5974107 DOI: 10.1038/s41598-018-26486-2] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2017] [Accepted: 05/09/2018] [Indexed: 11/24/2022] Open
Abstract
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law," since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the circular law does not apply for ecological systems in which density dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depended on the minimum equilibrium population.
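The stability comparison between A and S = DA is easy to probe numerically. The sketch below is a hypothetical illustration, not the paper's code; n, sigma, d, and the range of equilibrium densities are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
sigma, d = 0.02, 1.0  # interaction strength and self-regulation

A = sigma * rng.standard_normal((n, n))  # random interactions (May's setup)
np.fill_diagonal(A, -d)                  # self-limitation on the diagonal

# By the circular law, the off-diagonal part has eigenvalues in a disk of
# radius sigma*sqrt(n) around 0, so A is stable when sigma*sqrt(n) < d.
print(sigma * np.sqrt(n) < d)

# Feasible system: strictly positive equilibrium densities on the diagonal of D.
D = np.diag(rng.uniform(0.5, 1.5, size=n))
S = D @ A  # community matrix with density dependence

max_re_A = np.linalg.eigvals(A).real.max()
max_re_S = np.linalg.eigvals(S).real.max()
print(max_re_A < 0, max_re_S < 0)  # expected: both negative for these parameters
```

This reproduces the qualitative claim for one random draw: with A stable and D > 0, the community matrix S stays stable.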
Affiliation(s)
- Lewi Stone
- Biomathematics Unit, Faculty of Life Sciences, Tel Aviv University, Ramat Aviv, Israel
- Mathematical Sciences, Faculty of Science, RMIT University, Melbourne, Australia
58
Pena RFO, Vellmer S, Bernardi D, Roque AC, Lindner B. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks. Front Comput Neurosci 2018; 12:9. [PMID: 29551968 PMCID: PMC5840464 DOI: 10.3389/fncom.2018.00009] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Accepted: 02/07/2018] [Indexed: 11/13/2022] Open
Abstract
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
Affiliation(s)
- Rodrigo F O Pena
- Laboratório de Sistemas Neurais, Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Sebastian Vellmer
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Davide Bernardi
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Antonio C Roque
- Laboratório de Sistemas Neurais, Department of Physics, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Benjamin Lindner
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
59
Livi L, Bianchi FM, Alippi C. Determination of the Edge of Criticality in Echo State Networks Through Fisher Information Maximization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:706-717. [PMID: 28092580 DOI: 10.1109/tnnls.2016.2644268] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
It is a widely accepted fact that the computational capability of recurrent neural networks (RNNs) is maximized on the so-called "edge of criticality." Once the network operates in this configuration, it performs efficiently on a specific application both in terms of: 1) low prediction error and 2) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by the particular input signal driving the dynamics, a universal, application-independent method for determining the edge of criticality is still missing. In this paper, we aim at addressing this issue by proposing a theoretically motivated, unsupervised method based on Fisher information for determining the edge of criticality in RNNs. It is proved that Fisher information is maximized for (finite-size) systems operating in such critical regions. However, Fisher information is notoriously difficult to compute and requires the analytic form of the probability density function ruling the system behavior. This paper takes advantage of a recently developed nonparametric estimator of the Fisher information matrix and provides a method to determine the critical region of echo state networks (ESNs), a particular class of recurrent networks. The considered control parameters, which indirectly affect the ESN performance, are explored to identify those configurations lying on the edge of criticality and, as such, maximizing Fisher information and computational performance. Experimental results on benchmarks and real-world data demonstrate the effectiveness of the proposed method.
60
61
Pandey A, Kumar A, Puri S. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors. Phys Rev E 2018; 96:052211. [PMID: 29347738 DOI: 10.1103/physreve.96.052211] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2017] [Indexed: 11/07/2022]
Abstract
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.
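The two limiting statistics named in this abstract (Poisson versus classical random-matrix behavior) can be contrasted with a minimal sketch; the matrix size and spacing cutoff below are arbitrary illustration choices, and the GOE stands in for the classical (large effective range) limit.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Poisson case: independent uniform levels, no correlations between eigenvalues.
poisson_levels = np.sort(rng.uniform(0.0, float(n), size=n))

# Classical random-matrix case: eigenvalues of a GOE member show level repulsion.
H = rng.standard_normal((n, n))
H = (H + H.T) / 2.0
goe_levels = np.sort(np.linalg.eigvalsh(H))

def small_spacing_fraction(levels, cut=0.1):
    """Fraction of bulk nearest-neighbor spacings below `cut` mean spacings."""
    bulk = levels[n // 4 : 3 * n // 4]  # keep the bulk, away from spectral edges
    s = np.diff(bulk)
    return float((s / s.mean() < cut).mean())

f_poisson = small_spacing_fraction(poisson_levels)
f_goe = small_spacing_fraction(goe_levels)
print(f_poisson, f_goe)  # near-degenerate spacings are strongly suppressed for GOE
```

The Poisson spacing density e^(-s) puts roughly 10% of spacings below 0.1, while the Wigner-Dyson density vanishes linearly at s = 0, which is the transition the FRCG parameter d interpolates across.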
Affiliation(s)
- Akhilesh Pandey
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi-110067, India
- Avanish Kumar
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi-110067, India
- Sanjay Puri
- School of Physical Sciences, Jawaharlal Nehru University, New Delhi-110067, India
62
Cook N, Hachem W, Najim J, Renfrew D. Non-Hermitian random matrices with a variance profile (I): deterministic equivalents and limiting ESDs. ELECTRON J PROBAB 2018. [DOI: 10.1214/18-ejp230] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
63
Blackwell JM, Geffen MN. Progress and challenges for understanding the function of cortical microcircuits in auditory processing. Nat Commun 2017; 8:2165. [PMID: 29255268 PMCID: PMC5735136 DOI: 10.1038/s41467-017-01755-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 10/12/2017] [Indexed: 12/21/2022] Open
Abstract
An important outstanding question in auditory neuroscience is to identify the mechanisms by which specific motifs within inter-connected neural circuits affect auditory processing and, ultimately, behavior. In the auditory cortex, the combination of large-scale electrophysiological recordings and concurrent optogenetic manipulations is improving our understanding of the role of inhibitory–excitatory interactions. At the same time, computational approaches have grown to incorporate diverse neuronal types and connectivity patterns. However, we are still far from understanding how cortical microcircuits encode and transmit information about complex acoustic scenes. In this review, we focus on recent results identifying the special function of different cortical neurons in the auditory cortex and discuss a computational framework for future work that incorporates ideas from network science and network dynamics toward the coding of complex auditory scenes. Advances in multi-neuron recordings and optogenetic manipulation have resulted in an interrogation of the function of specific cortical cell types in auditory cortex during sound processing. Here, the authors review this literature and discuss the merits of integrating computational approaches from dynamic network science.
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Maria N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA
64
Barreiro AK, Kutz JN, Shlizerman E. Symmetries Constrain Dynamics in a Family of Balanced Neural Networks. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2017; 7:10. [PMID: 29019105 PMCID: PMC5635020 DOI: 10.1186/s13408-017-0052-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 09/19/2017] [Indexed: 06/07/2023]
Abstract
We examine a family of random firing-rate neural networks in which we enforce the neurobiological constraint of Dale's Law-each neuron makes either excitatory or inhibitory connections onto its post-synaptic targets. We find that this constrained system may be described as a perturbation from a system with nontrivial symmetries. We analyze the symmetric system using the tools of equivariant bifurcation theory and demonstrate that the symmetry-implied structures remain evident in the perturbed system. In comparison, spectral characteristics of the network coupling matrix are relatively uninformative about the behavior of the constrained system.
Affiliation(s)
- Andrea K. Barreiro
- Department of Mathematics, Southern Methodist University, POB 750156, Dallas, TX 75275 USA
- J. Nathan Kutz
- Department of Applied Mathematics, University of Washington, Box 353925, Seattle, WA 98195-3925 USA
- Eli Shlizerman
- Department of Applied Mathematics, University of Washington, Box 353925, Seattle, WA 98195-3925 USA
65
Huang C, Doiron B. Once upon a (slow) time in the land of recurrent neuronal networks…. Curr Opin Neurobiol 2017; 46:31-38. [PMID: 28756341 PMCID: PMC12038865 DOI: 10.1016/j.conb.2017.07.003] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2017] [Revised: 06/21/2017] [Accepted: 07/06/2017] [Indexed: 12/22/2022]
Abstract
The brain must both react quickly to new inputs as well as store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences.
Affiliation(s)
- Chengcheng Huang
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
66
Barak O. Recurrent neural networks as versatile tools of neuroscience research. Curr Opin Neurobiol 2017; 46:1-6. [PMID: 28668365 DOI: 10.1016/j.conb.2017.06.003] [Citation(s) in RCA: 97] [Impact Index Per Article: 12.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Revised: 05/07/2017] [Accepted: 06/15/2017] [Indexed: 02/07/2023]
Abstract
Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input-output examples. Recently, there has been large progress in utilizing trained RNNs both for computational tasks, and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Despite the recent progress and potential benefits, there are many fundamental gaps towards a theory of these networks. I will discuss these challenges and possible methods to attack them.
Affiliation(s)
- Omri Barak
- Faculty of Medicine and Network Biology Research Laboratories, Technion - Israel Institute of Technology, Israel
67
Ocker GK, Josić K, Shea-Brown E, Buice MA. Linking structure and activity in nonlinear spiking networks. PLoS Comput Biol 2017.
Abstract
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. Neuronal networks, like many biological systems, exhibit variable activity. This activity is shaped by both the underlying biology of the component neurons and the structure of their interactions. How can we combine knowledge of these two things—that is, models of individual neurons and of their interactions—to predict the statistics of single- and multi-neuron activity? Current approaches rely on linearizing neural activity around a stationary state. In the face of neural nonlinearities, however, these linear methods can fail to predict spiking statistics and even fail to correctly predict whether activity is stable or pathological. 
Here, we show how to calculate any spike train cumulant in a broad class of models, while systematically accounting for nonlinear effects. We then study a fundamental effect of nonlinear input-rate transfer, the coupling between different orders of spiking statistics, and how this depends on single-neuron and network properties.
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Krešimir Josić
- Department of Mathematics and Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
- Department of BioSciences, Rice University, Houston, Texas, United States of America
- Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, and UW Institute of Neuroengineering, University of Washington, Seattle, Washington, United States of America
- Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
68
Mastrogiuseppe F, Ostojic S. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks. PLoS Comput Biol 2017; 13:e1005498. [PMID: 28437436 PMCID: PMC5421821 DOI: 10.1371/journal.pcbi.1005498] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 05/08/2017] [Accepted: 04/04/2017] [Indexed: 12/05/2022] Open
Abstract
Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How this type of intrinsically generated fluctuation appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which the DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.
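A minimal numerical sketch of the class of network described, under illustrative assumptions (the size N, coupling strength g, excitatory fraction f, and the bounded transfer function are all our choices, not the paper's): a randomly connected excitatory-inhibitory rate network with positive, bounded rates, integrated with forward Euler. Whether a given run lands in the fluctuating regime depends on these parameter choices.

```python
import numpy as np

# Random E-I rate network with positive, bounded rates (illustrative sketch,
# not the paper's DMF solution). Inhibitory columns are scaled so that the
# mean excitatory and inhibitory input per unit balance on average.
rng = np.random.default_rng(0)
N, dt, T = 200, 0.1, 2000
f = 0.8                                  # excitatory fraction (assumed)
g = 5.0                                  # overall coupling strength (assumed)
J = rng.uniform(0, g / np.sqrt(N), size=(N, N))
J[:, int(f * N):] *= -f / (1 - f)        # inhibitory columns balance excitation
phi = lambda x: np.clip(x, 0.0, 1.0)     # positive, bounded transfer function
x = rng.standard_normal(N)
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ phi(x))
    rates[t] = phi(x)
print(rates[T // 2:].std(axis=0).mean())  # nonzero => ongoing fluctuations
```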
Affiliation(s)
- Francesca Mastrogiuseppe
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France
- Laboratoire de Physique Statistique, CNRS UMR 8550, École Normale Supérieure - PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France
69
Plasticity to the Rescue. Neuron 2016; 92:935-936. [PMID: 27930907 DOI: 10.1016/j.neuron.2016.11.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
The balance between excitatory and inhibitory inputs is critical for the proper functioning of neural circuits. Landau and colleagues show that, in the presence of cell-type-specific connectivity, this balance is difficult to achieve without either synaptic plasticity or spike-frequency adaptation to fine-tune the connection strengths.
70
Neri I, Metz FL. Eigenvalue Outliers of Non-Hermitian Random Matrices with a Local Tree Structure. PHYSICAL REVIEW LETTERS 2016; 117:224101. [PMID: 27925747 DOI: 10.1103/physrevlett.117.224101] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2016] [Indexed: 06/06/2023]
Abstract
Spectra of sparse non-Hermitian random matrices determine the dynamics of complex processes on graphs. Eigenvalue outliers in the spectrum are of particular interest, since they determine the stationary state and the stability of dynamical processes. We present a general and exact theory for the eigenvalue outliers of random matrices with a local tree structure. For adjacency and Laplacian matrices of oriented random graphs, we derive analytical expressions for the eigenvalue outliers, the first moments of the distribution of eigenvector elements associated with an outlier, the support of the spectral density, and the spectral gap. We show that these spectral observables obey universal expressions, which hold for a broad class of oriented random matrices.
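A quick numerical illustration consistent with this picture, under simple assumptions of ours (a directed Erdős-Rényi graph is used as the oriented random graph, with illustrative N and mean degree c): the adjacency matrix shows a single real eigenvalue outlier near c, while the bulk stays inside a disk of radius about sqrt(c).

```python
import numpy as np

# Directed Erdos-Renyi adjacency matrix: the outlier sits near the mean
# degree c and the bulk in a disk of radius ~sqrt(c). Parameters illustrative.
rng = np.random.default_rng(1)
N, c = 1000, 8.0
A = (rng.random((N, N)) < c / N).astype(float)
np.fill_diagonal(A, 0.0)
eigs = np.linalg.eigvals(A)
outlier = eigs[np.argmax(np.abs(eigs))]   # largest-modulus eigenvalue
bulk = np.sort(np.abs(eigs))[:-1]         # all moduli except the outlier
print(round(outlier.real, 2), round(bulk.max(), 2))
```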
Affiliation(s)
- Izaak Neri
- Max Planck Institute for the Physics of Complex Systems, Nöthnitzerstraße 38, 01187 Dresden, Germany
- Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstraße 108, 01307 Dresden, Germany
- Fernando Lucas Metz
- Departamento de Física, Universidade Federal de Santa Maria, 97105-900 Santa Maria, Brazil
71
Kuczala A, Sharpee TO. Eigenvalue spectra of large correlated random matrices. Phys Rev E 2016; 94:050101. [PMID: 27967175 PMCID: PMC5161118 DOI: 10.1103/physreve.94.050101] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2016] [Indexed: 11/07/2022]
Abstract
Using the diagrammatic method, we derive a set of self-consistent equations that describe eigenvalue distributions of large correlated asymmetric random matrices. The matrix elements can have different variances and be correlated with each other. The analytical results are confirmed by numerical simulations. The results have implications for the dynamics of neural and other biological networks where plasticity induces correlations in the connection strengths within the network. We find that the presence of correlations can have a major impact on network stability.
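The classic special case this framework generalizes is the elliptic law, which can be checked numerically in a few lines (tau and N below are illustrative choices of ours): correlating J_ij with J_ji at level tau deforms the circular eigenvalue support into an ellipse with semi-axes roughly (1 + tau) and (1 - tau).

```python
import numpy as np

# Elliptic-law sketch: mix symmetric and antisymmetric Gaussian matrices so
# that Var(J_ij) = 1/N and E[J_ij J_ji] = tau/N. Parameters illustrative.
rng = np.random.default_rng(2)
N, tau = 1000, 0.5
X = rng.standard_normal((N, N))
Y = rng.standard_normal((N, N))
S = (X + X.T) / np.sqrt(2)                  # symmetric part
A = (Y - Y.T) / np.sqrt(2)                  # antisymmetric part
J = (np.sqrt((1 + tau) / 2) * S + np.sqrt((1 - tau) / 2) * A) / np.sqrt(N)
eigs = np.linalg.eigvals(J)
print(eigs.real.max(), np.abs(eigs.imag).max())  # ~1+tau and ~1-tau
```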
Affiliation(s)
- Alexander Kuczala
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
72
The Google matrix controls the stability of structured ecological and biological networks. Nat Commun 2016; 7:12857. [PMID: 27687986 PMCID: PMC5056432 DOI: 10.1038/ncomms12857] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2016] [Accepted: 08/09/2016] [Indexed: 11/08/2022] Open
Abstract
May's celebrated theoretical work of the 1970s contradicted the established paradigm by demonstrating that complexity leads to instability in biological systems. Here May's random-matrix modelling approach is generalized to realistic large-scale webs of species interactions, be they structured by networks of competition, mutualism or both. Simple relationships are found to govern these otherwise intractable models, and to control the parameter ranges for which biological systems are stable and feasible. Our analysis of model and real empirical networks is made tractable by introducing a simplifying Google-matrix reduction scheme, which, in the process, yields a practical ecological eigenvalue stability index. These results provide insight into how network topology, especially connectance, influences stable species coexistence. Constraints controlling feasibility (positive equilibrium populations) in these systems are found to be more restrictive than those controlling stability, helping to explain the enigma of why many classes of feasible ecological models are nearly always stable.
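The May baseline being generalized here is easy to reproduce numerically (all parameters below are illustrative assumptions): a random community matrix -I + B, where B has connectance C and interaction standard deviation sigma, is typically locally stable when the complexity measure sigma * sqrt(N * C) is below 1 and unstable above it.

```python
import numpy as np

# May's stability criterion for a random community matrix -I + B:
# stable (all eigenvalue real parts < 0) when sigma*sqrt(N*C) < 1.
rng = np.random.default_rng(3)

def max_real_part(N, C, sigma):
    B = rng.standard_normal((N, N)) * (rng.random((N, N)) < C) * sigma
    np.fill_diagonal(B, 0.0)
    return np.linalg.eigvals(-np.eye(N) + B).real.max()

stable = max_real_part(500, 0.2, 0.05)    # sigma*sqrt(N*C) = 0.5 < 1
unstable = max_real_part(500, 0.2, 0.20)  # sigma*sqrt(N*C) = 2.0 > 1
print(stable, unstable)
```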
73
Abstract
Networks composed of distinct, densely connected subsystems are called modular. In ecology, it has been posited that a modular organization of species interactions would benefit the dynamical stability of communities, even though the evidence supporting this hypothesis is mixed. Here we study the effect of modularity on the local stability of ecological dynamical systems by presenting new results in random matrix theory, obtained using a quaternionic parameterization of the cavity method. The results show that modularity can have moderate stabilizing effects for particular parameter choices, while anti-modularity can greatly destabilize ecological networks. Modularity in food webs can be caused by spatial and temporal mismatches in interactions. Here, Jacopo Grilli, Tim Rogers and Stefano Allesina show that modularity, contrary to expectations, does not generally help stabilize ecological communities.
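A hedged numerical check in the same spirit, for a symmetric two-module special case of our own choosing (not the paper's general quaternionic cavity calculation): redistributing a fixed total interaction variance into strong within-module and weak between-module blocks leaves the spectral radius, and hence the local stability margin, essentially unchanged.

```python
import numpy as np

# Spectral radius of a two-block random matrix with within/between-module
# variances chosen so the mean variance is fixed. Parameters illustrative.
rng = np.random.default_rng(7)
N = 1000

def radius(var_within, var_between):
    half = N // 2
    sd = np.full((N, N), np.sqrt(var_between))
    sd[:half, :half] = sd[half:, half:] = np.sqrt(var_within)
    J = rng.standard_normal((N, N)) * sd / np.sqrt(N)
    return np.abs(np.linalg.eigvals(J)).max()

r_uniform = radius(1.0, 1.0)   # no modularity
r_modular = radius(1.8, 0.2)   # same mean variance, strongly modular
print(r_uniform, r_modular)    # both close to 1: modularity does not help here
```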
74
Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Phys Rev E 2016; 93:022302. [PMID: 26986347 DOI: 10.1103/physreve.93.022302] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Indexed: 01/12/2023]
Abstract
Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and a variance that depends on the pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience, where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology, where we study the stability of the cascade model for food web structure.
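The simplest instance of this structured-variance setting is the block case of the cited Aljadeff-Stern-Sharpee result, which can be verified numerically (block fractions and variances below are illustrative assumptions): for a random matrix whose block (a,b) has element variance V[a,b]/N, the spectral radius is sqrt(lambda_max(M)) with M[a,b] = f[b] * V[a,b], where f are the block fractions.

```python
import numpy as np

# Block-structured random matrix: compare the empirical spectral radius with
# the mean-field prediction sqrt(lambda_max(f_b * V_ab)). Parameters illustrative.
rng = np.random.default_rng(4)
N = 1000
f = np.array([0.5, 0.5])            # block fractions (assumed)
V = np.array([[1.0, 4.0],
              [2.0, 1.0]])          # block variances (assumed)
sizes = (f * N).astype(int)
idx = np.repeat([0, 1], sizes)      # block label of each unit
J = rng.standard_normal((N, N)) * np.sqrt(V[np.ix_(idx, idx)] / N)
radius_empirical = np.abs(np.linalg.eigvals(J)).max()
radius_theory = np.sqrt(np.linalg.eigvals(V * f).real.max())
print(radius_empirical, radius_theory)
```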
Affiliation(s)
- Johnatan Aljadeff
- Department of Neurobiology, University of Chicago, Chicago, Illinois, USA
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
- David Renfrew
- Department of Mathematics, University of California Los Angeles, Los Angeles, California, USA
- Marina Vegué
- Centre de Recerca Matemàtica, Campus de Bellaterra, Barcelona, Spain
- Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
75
Wieland S, Bernardi D, Schwalger T, Lindner B. Slow fluctuations in recurrent networks of spiking neurons. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 92:040901. [PMID: 26565154 DOI: 10.1103/physreve.92.040901] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Indexed: 06/05/2023]
Abstract
Networks of fast nonlinear elements may display slow fluctuations if interactions are strong. We find a transition in the long-term variability of a sparse recurrent network of perfect integrate-and-fire neurons at which the Fano factor switches from zero to infinity and the correlation time is minimized. This corresponds to a bifurcation in a linear map arising from the self-consistency of temporal input and output statistics. More realistic neural dynamics with a leak current and refractory period lead to smoothed transitions and modified critical couplings that can be theoretically predicted.
Affiliation(s)
- Stefan Wieland
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
- Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
- Tilo Schwalger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Station 15, 1015 Lausanne EPFL, Switzerland
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
76
Toyoizumi T, Huang H. Structure of attractors in randomly connected networks. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 91:032802. [PMID: 25871152 DOI: 10.1103/physreve.91.032802] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2014] [Indexed: 06/04/2023]
Abstract
The deterministic dynamics of randomly connected neural networks are studied, where a state of binary neurons evolves according to a discrete-time synchronous update rule. We provide theoretical support for the claim that, in large networks, the overlap between the system's current state and a previously visited state develops in time according to a Markovian stochastic process. This Markovian process predicts how often a network revisits one of its previously visited states, depending on the system size. The state concentration probability, i.e., the probability that two distinct states coevolve to the same state, is used to analytically derive various characteristics that quantify the structure of attractors. The analytical predictions for the total number of attractors, the typical cycle length, and the number of states belonging to all attractive cycles match numerical simulations well for relatively large system sizes.
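The model class is simple to simulate directly (network size and couplings below are illustrative assumptions): N binary neurons with random Gaussian couplings, updated synchronously and deterministically as s -> sign(J s). Because the state space is finite, every trajectory eventually falls onto an attractive cycle, whose transient and period can be detected by recording visited states.

```python
import numpy as np

# Synchronous deterministic binary network: iterate until a state repeats,
# then read off the transient length and the cycle period. Parameters illustrative.
rng = np.random.default_rng(5)
N = 12
J = rng.standard_normal((N, N)) / np.sqrt(N)
step = lambda s: np.where(J @ s >= 0, 1, -1)   # synchronous update rule
s = rng.choice([-1, 1], size=N)
seen = {}
t = 0
while tuple(s) not in seen:                    # finite state space => terminates
    seen[tuple(s)] = t
    s = step(s)
    t += 1
transient = seen[tuple(s)]                     # first visit to the revisited state
period = t - transient                         # attractive cycle length
print(transient, period)
```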
Affiliation(s)
- Taro Toyoizumi
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan and Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan
- Haiping Huang
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan