1
Olsen VK, Whitlock JR, Roudi Y. The quality and complexity of pairwise maximum entropy models for large cortical populations. PLoS Comput Biol 2024; 20:e1012074. PMID: 38696532. DOI: 10.1371/journal.pcbi.1012074.
Abstract
We investigate the ability of the pairwise maximum entropy (PME) model to describe the spiking activity of large populations of neurons recorded from the visual, auditory, motor, and somatosensory cortices. To quantify this performance, we use (1) Kullback-Leibler (KL) divergences, (2) the extent to which the pairwise model predicts third-order correlations, and (3) its ability to predict the probability that multiple neurons are simultaneously active. We compare these with the performance of a model with independent neurons and study the relationship between the different performance measures, while varying the population size, the mean firing rate of the chosen population, and the bin size used for binarizing the data. We confirm the previously reported excellent performance of the PME model for small population sizes N < 20, but we also find that larger mean firing rates and bin sizes generally decrease performance, and that performance for larger populations is generally not as good. For large populations, pairwise models may predict third-order correlations and the probability of multiple neurons being active well, yet still perform significantly worse than for small populations in terms of their improvement over the independent model in KL divergence. We show that these results are independent of the cortical area and of whether approximate methods or Boltzmann learning are used for inferring the pairwise couplings. We compare the scaling of the inferred couplings with N and find it to be well explained by the Sherrington-Kirkpatrick (SK) model, whose strong-coupling regime shows a complex phase with many metastable states. We find that, up to the maximum population size studied here, the fitted PME model remains outside its complex phase. However, the standard deviation of the couplings grows relative to their mean, and the model moves closer to the boundary of the complex phase as the population size increases.
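The core comparison in this abstract — a pairwise versus an independent maximum entropy fit, scored by KL divergence to the data distribution — can be sketched for a population small enough that all 2^N binary words are enumerable. This is a minimal illustration, not the paper's inference pipeline: the toy empirical distribution and the plain gradient-ascent (moment-matching) fit are assumptions of the sketch.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 4  # small population: all 2^N binary words can be enumerated
states = np.array(list(product([0.0, 1.0], repeat=N)))

def maxent_pairwise(h, J):
    """P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)."""
    logw = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
    w = np.exp(logw - logw.max())
    return w / w.sum()

# toy empirical distribution over population words (stand-in for binned data);
# mixed with the uniform distribution so no word has vanishing probability
p_data = 0.5 * rng.dirichlet(np.ones(2 ** N)) + 0.5 / 2 ** N
m_data = p_data @ states                        # firing rates
C_data = states.T @ (states * p_data[:, None])  # pairwise moments

# fit by gradient ascent on the log-likelihood (i.e., moment matching)
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(100000):
    p = maxent_pairwise(h, J)
    h += 0.5 * (m_data - p @ states)
    J += 0.5 * np.triu(C_data - states.T @ (states * p[:, None]), 1)

# independent-neuron model matching only the firing rates
p_ind = np.prod(np.where(states == 1, m_data, 1 - m_data), axis=1)

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

kl_pair = kl(p_data, maxent_pairwise(h, J))
kl_ind = kl(p_data, p_ind)  # the pairwise fit can only improve on this
```

For large N the partition function is intractable, and one must resort to Boltzmann learning or the approximate inference methods the paper compares.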
Affiliation(s)
- Valdemar Kargård Olsen
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Jonathan R Whitlock
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Yasser Roudi
- Kavli Institute for Systems Neuroscience, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Mathematics, King's College London, London, United Kingdom
2
Lombardi F, Pepić S, Shriki O, Tkačik G, De Martino D. Statistical modeling of adaptive neural networks explains co-existence of avalanches and oscillations in resting human brain. Nat Comput Sci 2023; 3:254-263. PMID: 38177880. PMCID: PMC10766559. DOI: 10.1038/s43588-023-00410-9.
Abstract
Neurons in the brain are wired into adaptive networks that exhibit collective dynamics as diverse as scale-specific oscillations and scale-free neuronal avalanches. Existing models account for oscillations or avalanches separately, but they typically do not explain both phenomena, are too complex for analytical treatment, or cannot be rigorously inferred from data. Here we propose a feedback-driven, Ising-like class of neural networks that captures avalanches and oscillations simultaneously and quantitatively. In the simplest yet fully microscopic version of the model, we can analytically compute the phase diagram and make direct contact with human brain resting-state activity recordings via tractable inference of the model's two essential parameters. The inferred model quantitatively captures the dynamics over a broad range of scales, from single-sensor oscillations to the collective behavior of extreme events and neuronal avalanches. Importantly, the inferred parameters indicate that the co-existence of scale-specific (oscillations) and scale-free (avalanches) dynamics occurs close to a non-equilibrium critical point at the onset of self-sustained oscillations.
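As a companion sketch to the avalanche side of this abstract: the standard operational definition of an avalanche — a maximal run of consecutive supra-threshold time bins, with size given by the summed activity over the run — takes only a few lines. The synthetic oscillating signal and the two-standard-deviation threshold below are assumptions for illustration; they are not the paper's data or its inference procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic stand-in for a band-limited resting-state signal:
# a 10 Hz oscillation plus noise, sampled at 100 Hz for 60 s
t = np.arange(0.0, 60.0, 0.01)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# avalanche = maximal run of consecutive supra-threshold bins;
# size = summed |activity| over the run
thr = 2.0 * x.std()
active = (np.abs(x) > thr).astype(int)
d = np.diff(np.r_[0, active, 0])  # +1 marks a run start, -1 one past its end
starts = np.flatnonzero(d == 1)
ends = np.flatnonzero(d == -1)
sizes = np.array([np.abs(x[a:b]).sum() for a, b in zip(starts, ends)])
```

In the criticality literature one would then examine whether the distribution of `sizes` follows a power law; here the signal is synthetic, so no such claim is made.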
Affiliation(s)
- Fabrizio Lombardi
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Selver Pepić
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Oren Shriki
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Daniele De Martino
- Biofisika Institute (CSIC, UPV-EHU) and Ikerbasque Foundation, Bilbao, Spain
3
Modelling time-varying interactions in complex systems: the Score Driven Kinetic Ising Model. Sci Rep 2022; 12:19339. PMID: 36369262. PMCID: PMC9652375. DOI: 10.1038/s41598-022-23770-0.
Abstract
A common issue when analyzing real-world complex systems is that the interactions between their elements often change over time. Here we propose a new modeling approach for time-varying interactions that generalises the well-known Kinetic Ising Model, a minimal model with constant pairwise interactions that has found applications in several scientific disciplines. Keeping arbitrary choices of dynamics to a minimum and seeking information-theoretic optimality, the Score-Driven methodology allows one to extract from data, and to interpret, temporal patterns that describe time-varying interactions. We identify a parameter whose value at a given time can be directly associated with the local predictability of the dynamics, and we introduce a method to learn its value dynamically from the data, without specifying the system's dynamics parametrically. We extend our framework to disentangle different sources (e.g., endogenous vs. exogenous) of predictability in real time, and show how our methodology applies to a variety of complex systems such as financial markets, temporal (social) networks, and neuronal populations.
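To make "time-varying interactions" concrete, a kinetic Ising model with a slowly varying coupling scale can be simulated with parallel Glauber dynamics. The sinusoidal modulation below is purely an assumption of the sketch; in the paper's framework the variation would instead be learned from data via the score-driven update, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 20, 2000
J0 = rng.standard_normal((N, N)) / np.sqrt(N)  # static base couplings
np.fill_diagonal(J0, 0.0)
s = rng.choice([-1.0, 1.0], size=N)
traj = np.empty((T, N))
for step in range(T):
    # prescribed slow modulation of the interaction scale (the score-driven
    # model would update this from the likelihood score at each step instead)
    beta = 1.0 + 0.5 * np.sin(2 * np.pi * step / 500)
    field = (beta * J0) @ s
    p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # parallel Glauber transition
    s = np.where(rng.random(N) < p_up, 1.0, -1.0)
    traj[step] = s
```

Because the transition probability at each step is an explicit logistic function of the field, the couplings (and their time variation) can in principle be recovered from `traj` by maximum likelihood, which is the starting point of the paper's inference.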
4
Cessac B. Retinal Processing: Insights from Mathematical Modelling. J Imaging 2022; 8:14. PMID: 35049855. PMCID: PMC8780400. DOI: 10.3390/jimaging8010014.
Abstract
The retina is the entrance to the visual system. Although based on common biophysical principles, the dynamics of retinal neurons are quite different from those of their cortical counterparts, raising interesting problems for modellers. In this paper, I address some mathematically stated questions in this spirit, discussing in particular: (1) How could lateral amacrine cell connectivity shape the spatio-temporal spike response of retinal ganglion cells? (2) How could spatio-temporal stimulus correlations and retinal network dynamics shape the spike-train correlations at the output of the retina? These questions are addressed by first introducing a mathematically tractable model of the layered retina, integrating amacrine cells' lateral connectivity and piecewise-linear rectification, which allows one to compute the receptive fields of retinal ganglion cells together with the voltage and spike correlations that result from the amacrine cell network. I then review some recent results showing how the concepts of spatio-temporal Gibbs distributions and linear response theory can be used to characterize the collective spike response of a set of retinal ganglion cells, coupled via effective interactions corresponding to the amacrine cell network, to a spatio-temporal stimulus. On this basis, I briefly discuss several potential consequences of these results at the cortical level.
Affiliation(s)
- Bruno Cessac
- INRIA Biovision Team and Neuromod Institute, Université Côte d'Azur, 2004 Route des Lucioles, BP 93, 06902 Valbonne, France
5
Lee S, Periwal V, Jo J. Inference of stochastic time series with missing data. Phys Rev E 2021; 104:024119. PMID: 34525568. DOI: 10.1103/PhysRevE.104.024119.
Abstract
Inferring dynamics from time series is an important objective in data analysis. In particular, it is challenging to infer stochastic dynamics from incomplete data. We propose an expectation-maximization (EM) algorithm that alternates between two steps: the E-step restores missing data points, while the M-step infers an underlying network model from the restored data. Using synthetic data from a kinetic Ising model, we confirm that the algorithm both restores missing data points and infers the underlying model. At the initial iteration of the EM algorithm, the model inference shows better model-data consistency for observed data points than for missing data points; as the iterations proceed, however, the missing data points come to show better model-data consistency. We find that demanding equal consistency of observed and missing data points provides an effective stopping criterion, preventing the iteration from going beyond the most accurate model inference. Using the EM algorithm together with this stopping criterion, we infer missing data points in a time series of real neuronal activity. Our method reproduces collective properties of neuronal activity, such as correlations and firing statistics, even when 70% of the data points are masked as missing.
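The E-step/M-step alternation can be caricatured with a model simple enough that both steps are closed-form: independent ±1 spins with a single field parameter h. The paper's M-step is a full kinetic Ising network inference; the 70% missing fraction below mirrors the abstract, and everything else is an assumption of the toy.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy model: independent ±1 spins with one field parameter h, so that both
# EM steps are closed-form (the paper's M-step is a kinetic Ising inference)
h_true = 0.8
x = np.where(rng.random(5000) < 0.5 * (1.0 + np.tanh(h_true)), 1.0, -1.0)
missing = rng.random(x.size) < 0.7  # hide 70% of points, as in the paper's test
x_obs = np.where(missing, np.nan, x)

h = 0.0
for _ in range(50):
    # E-step: replace missing points by their expectation under the current model
    x_fill = np.where(missing, np.tanh(h), x_obs)
    # M-step: maximum-likelihood refit to the restored data (tanh(h) = mean)
    h = np.arctanh(np.clip(x_fill.mean(), -0.999, 0.999))

m_obs = x[~missing].mean()  # mean of the observed points only
```

At the fixed point the restored mean equals the observed mean, so the estimate agrees with what the observed data alone support — the kind of observed/missing consistency the paper's stopping criterion monitors.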
Affiliation(s)
- Sangwon Lee
- Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
- Vipul Periwal
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892, USA
- Junghyo Jo
- Department of Physics Education and Center for Theoretical Physics and Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea
- School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
6
Cofré R, Maldonado C, Cessac B. Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. Entropy (Basel) 2020; 22:E1330. PMID: 33266513. PMCID: PMC7712217. DOI: 10.3390/e22111330.
Abstract
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in the neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
Affiliation(s)
- Rodrigo Cofré
- CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Cesar Maldonado
- IPICYT/División de Matemáticas Aplicadas, San Luis Potosí 78216, Mexico
- Bruno Cessac
- Inria Biovision team and Neuromod Institute, Université Côte d'Azur, 06901 CEDEX Inria, France
7
Ponce-Alvarez A, Mochol G, Hermoso-Mendizabal A, de la Rocha J, Deco G. Cortical state transitions and stimulus response evolve along stiff and sloppy parameter dimensions, respectively. eLife 2020; 9:e53268. PMID: 32181740. PMCID: PMC7108864. DOI: 10.7554/eLife.53268.
Abstract
Previous research showed that spontaneous neuronal activity exhibits sloppiness: the collective behavior is strongly determined by a small number of parameter combinations, defined as ‘stiff’ dimensions, while it is insensitive to many others (‘sloppy’ dimensions). Here, we analyzed neural population activity from the auditory cortex of anesthetized rats while the brain spontaneously transitioned through different synchronized and desynchronized states and intermittently received sensory inputs. We showed that cortical state transitions were determined by changes in stiff parameters associated with the activity of a core of neurons with low responses to stimuli and high centrality within the observed network. In contrast, stimulus-evoked responses evolved along sloppy dimensions associated with the activity of neurons with low centrality that displayed large ongoing and stimulus-evoked fluctuations without affecting the integrity of the network. Our results shed light on the interplay among stability, flexibility, and responsiveness of neuronal collective dynamics during intrinsic and induced activity.
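The stiff/sloppy terminology refers to the observation that the Fisher information matrix of many multiparameter models has eigenvalues spread over several decades. A classic toy example — a sum of exponentials, not the neural model analyzed in the paper — shows the effect in a few lines:

```python
import numpy as np

# classic sloppiness illustration: y(t) = sum_i exp(-k_i t); the Fisher
# information J^T J with respect to the rates k_i has eigenvalues spanning
# decades, separating stiff from sloppy parameter directions
t = np.linspace(0.0, 5.0, 100)
k = np.array([1.0, 2.0, 3.0])
Jac = -t[:, None] * np.exp(-np.outer(t, k))  # sensitivity dy(t)/dk_i
fim = Jac.T @ Jac
eig = np.sort(np.linalg.eigvalsh(fim))[::-1]
spread = eig[0] / eig[-1]  # stiffest vs. sloppiest direction
```

Perturbing parameters along the top eigenvector changes the model output strongly (stiff); along the bottom eigenvector the output barely moves (sloppy), even for parameter changes of the same magnitude.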
Affiliation(s)
- Adrian Ponce-Alvarez
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Gabriela Mochol
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Jaime de la Rocha
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- School of Psychological Sciences, Monash University, Melbourne, Australia
8
A general method to generate artificial spike train populations matching recorded neurons. J Comput Neurosci 2020; 48:47-63. PMID: 31974719. DOI: 10.1007/s10827-020-00741-w.
Abstract
We developed a general method to generate populations of artificial spike trains (ASTs) that match the statistics of recorded neurons. The method computes a Gaussian local rate function of the recorded spike trains, yielding rate templates from which ASTs are drawn as gamma-distributed processes with a refractory period. Multiple instances of spike trains can be sampled from the same rate templates. Importantly, rate covariances between spike trains can be manipulated by simple algorithmic transformations of the rate templates, such as filtering or amplifying specific frequency bands, or adding behavior-related rate modulations. The method was examined for accuracy and limitations using surrogate data such as sine-wave rate templates, and was then verified on spike trains recorded from cerebellum and cerebral cortex. We found that ASTs generated with this method closely match the firing rate, the local and global spike-time variance, and the power spectrum. The method is primarily intended to generate well-controlled spike train populations as inputs for dynamic-clamp studies or biophysically realistic multicompartmental models. Such inputs are essential for studying detailed properties of synaptic integration with well-controlled input patterns that mimic the in vivo situation while allowing manipulation of input rate covariances at different time scales.
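The sampling step — drawing spikes as a gamma-distributed process from a rate template — can be sketched with the standard time-rescaling construction: integrate the rate to obtain the cumulative intensity, draw unit-mean gamma intervals in rescaled time, and map the event times back. The constant template, the gamma order of 2, and the omission of the refractory dead time are simplifying assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.001
t = np.arange(0.0, 100.0, dt)
rate = np.full(t.size, 20.0)  # constant 20 Hz template (could be any local rate)

# time-rescaling: in rescaled time Lam(t) the process is a unit-rate renewal
# process; gamma-distributed unit-mean intervals give a gamma spike process
Lam = np.cumsum(rate) * dt
order = 2.0  # gamma shape; order 1 recovers a Poisson process
spike_times = []
s = rng.gamma(order, 1.0 / order)
while s < Lam[-1]:
    spike_times.append(np.interp(s, Lam, t))  # map back to real time
    s += rng.gamma(order, 1.0 / order)
spikes = np.array(spike_times)
```

With a time-varying `rate` array, the same code produces an inhomogeneous gamma process that follows the template; correlated populations are obtained by drawing several trains from templates that share rate fluctuations.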
9
Abstract
Despite their differences, biological systems at different spatial scales tend to exhibit common organizational patterns. Unfortunately, these commonalities are often hard to grasp due to the highly specialized nature of modern science and the parcelled terminology employed by various scientific sub-disciplines. To explore these common organizational features, this paper provides a comparative study of diverse applications of the maximum entropy principle, which has found many uses at different biological spatial scales ranging from amino acids up to societies. By presenting these studies under a common approach and language, this paper aims to establish a unified view over these seemingly highly heterogeneous scenarios.
10
Cofré R, Videla L, Rosas F. An Introduction to the Non-Equilibrium Steady States of Maximum Entropy Spike Trains. Entropy 2019. PMCID: PMC7515414. DOI: 10.3390/e21090884.
Abstract
Although most biological processes are characterized by a strong temporal asymmetry, several popular mathematical models neglect this issue. Maximum entropy methods provide a principled way of addressing time irreversibility, leveraging powerful results and ideas from the literature on non-equilibrium statistical mechanics. This tutorial provides a comprehensive overview of these issues, with a focus on spike train statistics. We give a detailed account of the mathematical foundations and work out examples to illustrate the key concepts and results from non-equilibrium statistical mechanics.
Affiliation(s)
- Rodrigo Cofré
- Centro de Investigación y Modelamiento de Fenómenos Aleatorios CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Leonardo Videla
- Centro de Investigación y Modelamiento de Fenómenos Aleatorios CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Fernando Rosas
- Centre for Psychedelic Research, Department of Medicine, Imperial College London, London SW7 2DD, UK
- Centre for Complexity Science and Department of Mathematics, Imperial College London, London SW7 2AZ, UK
- Data Science Institute, Imperial College London, London SW7 2AZ, UK
11
Makkeh A, Chicharro D, Theis DO, Vicente R. MAXENT3D_PID: An Estimator for the Maximum-Entropy Trivariate Partial Information Decomposition. Entropy 2019; 21:862. PMCID: PMC7515392. DOI: 10.3390/e21090862.
Abstract
Partial information decomposition (PID) separates the contributions of sources about a target into unique, redundant, and synergistic components of information. In essence, PID answers the question of “who knows what” in a system of random variables and hence has applications to a wide spectrum of fields, ranging from the social to the biological sciences. This paper presents MAXENT3D_PID, an algorithm that computes the PID of three sources, based on a recently proposed maximum entropy measure, using convex optimization (cone programming). We describe the algorithm and its associated software, and report the results of various experiments assessing its accuracy. Moreover, we show that a hierarchy of bivariate and trivariate PIDs makes it possible to obtain the finer quantities of the trivariate partial information measure.
Affiliation(s)
- Abdullah Makkeh
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
- Daniel Chicharro
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy
- Dirk Oliver Theis
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
- Raul Vicente
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
12
Abstract
We investigate the complexity of logistic regression models, which is defined by counting the number of indistinguishable distributions that the model can represent (Balasubramanian, 1997). We find that the complexity of logistic models with binary inputs depends not only on the number of parameters but also on the distribution of inputs in a nontrivial way that standard treatments of complexity do not address. In particular, we observe that correlations among inputs induce effective dependencies among parameters, thus constraining the model and, consequently, reducing its complexity. We derive simple relations for the upper and lower bounds of the complexity. Furthermore, we show analytically that defining the model parameters on a finite support rather than the entire axis decreases the complexity in a manner that critically depends on the size of the domain. Based on our findings, we propose a novel model selection criterion that takes into account the entropy of the input distribution. We test our proposal on the problem of selecting the input variables of a logistic regression model in a Bayesian model selection framework. In our numerical tests, we find that while the reconstruction errors of standard model selection approaches (AIC, BIC, ℓ1 regularization) strongly depend on the sparsity of the ground truth, the reconstruction error of our method is always close to the minimum in all conditions of sparsity, data size, and strength of input correlations. Finally, we observe that when considering categorical instead of binary inputs, in a simple and mathematically tractable case, the contribution of the alphabet size to the complexity is very small compared to that of parameter space dimension. We further explore the issue by analyzing the data set of the "13 keys to the White House," a method for forecasting the outcomes of US presidential elections.
Affiliation(s)
- Nicola Bulso
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Matteo Marsili
- The Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy
- Yasser Roudi
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
13
Zanoci C, Dehghani N, Tegmark M. Ensemble inhibition and excitation in the human cortex: An Ising-model analysis with uncertainties. Phys Rev E 2019; 99:032408. PMID: 30999501. DOI: 10.1103/PhysRevE.99.032408.
Abstract
The pairwise maximum entropy model, also known as the Ising model, has been widely used to analyze the collective activity of neurons. However, controversy persists in the literature about seemingly inconsistent findings, whose significance is unclear due to the lack of reliable error estimates. We therefore develop a method for accurately estimating parameter uncertainty based on random walks in parameter space using adaptive Markov chain Monte Carlo after the convergence of the main optimization algorithm. We apply our method to the activity patterns of excitatory and inhibitory neurons recorded with multielectrode arrays in the human temporal cortex during the wake-sleep cycle. Our analysis shows that the Ising model captures neuronal collective behavior much better than the independent model during wakefulness, light sleep, and deep sleep when both excitatory (E) and inhibitory (I) neurons are modeled; ignoring the inhibitory effects of I neurons dramatically overestimates synchrony among E neurons. Furthermore, information-theoretic measures reveal that the Ising model explains about 80-95% of the correlations, depending on sleep state and neuron type. Thermodynamic measures show signatures of criticality, although we take this with a grain of salt, as it may merely reflect long-range neural correlations.
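The uncertainty-estimation idea — after fitting, run a Metropolis random walk in parameter space and read error bars off the samples — can be sketched for a population small enough that the Ising likelihood is exact by enumeration. Starting the walk at the generating parameters stands in for "after the convergence of the main optimization algorithm"; the tiny N, the fixed step size (the paper uses an adaptive scheme), and the synthetic data are assumptions of the sketch.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 3  # tiny population: the 2^N state space can be enumerated exactly
states = np.array(list(product([-1.0, 1.0], repeat=N)))

def logp_states(h, J):
    """log P(s) for P(s) proportional to exp(h.s + 0.5 s.J.s), J symmetric, zero diagonal."""
    logw = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    return logw - np.log(np.exp(logw).sum())

# synthetic spike-pattern counts from known generating parameters
h0 = np.array([0.2, -0.1, 0.0])
J0 = np.array([[0.0, 0.3, -0.2], [0.3, 0.0, 0.1], [-0.2, 0.1, 0.0]])
counts = rng.multinomial(2000, np.exp(logp_states(h0, J0)))

idx = np.triu_indices(N, 1)

def unpack(theta):
    h = theta[:N]
    J = np.zeros((N, N))
    J[idx] = theta[N:]
    return h, J + J.T

def loglik(theta):
    h, J = unpack(theta)
    return counts @ logp_states(h, J)

# Metropolis random walk around the (here, known) optimum; in the paper this
# runs after the main optimizer has converged, with an adaptive proposal
theta = np.concatenate([h0, J0[idx]])
ll = loglik(theta)
samples = []
for _ in range(3000):
    prop = theta + 0.05 * rng.standard_normal(theta.size)  # fixed-step proposal
    llp = loglik(prop)
    if np.log(rng.random()) < llp - ll:
        theta, ll = prop, llp
    samples.append(theta.copy())
samples = np.array(samples)
sd = samples[1000:].std(axis=0)  # parameter error bars after burn-in
```

The spread of the post-burn-in samples gives per-parameter uncertainties, which is what allows the paper to judge whether differences between fitted models (e.g., across sleep states) are significant.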
Affiliation(s)
- Cristian Zanoci
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Nima Dehghani
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Max Tegmark
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
14
Ernst OK, Bartol TM, Sejnowski TJ, Mjolsness E. Learning moment closure in reaction-diffusion systems with spatial dynamic Boltzmann distributions. Phys Rev E 2019; 99:063315. PMID: 31330605. PMCID: PMC6852890. DOI: 10.1103/PhysRevE.99.063315.
Abstract
Many physical systems are described by probability distributions that evolve in both time and space. Modeling these systems is often challenging due to their large state space and analytically intractable or computationally expensive dynamics. To address these problems, we study a machine-learning approach to model reduction based on the Boltzmann machine. Given the form of the reduced model Boltzmann distribution, we introduce an autonomous differential equation system for the interactions appearing in the energy function. The reduced model can treat systems in continuous space (described by continuous random variables), for which we formulate a variational learning problem using the adjoint method to determine the right-hand sides of the differential equations. This approach can be used to enforce a reduced physical model by a suitable parametrization of the differential equations. The parametrization we employ uses the basis functions from finite-element methods, which can be used to model any physical system. One application domain for such physics-informed learning algorithms is the modeling of reaction-diffusion systems. We study a lattice version of the Rössler chaotic oscillator, which illustrates the accuracy of the moment closure approximation made by the method and its dimensionality-reduction power.
Affiliation(s)
- Oliver K Ernst
- Department of Physics, University of California at San Diego, La Jolla, California 92093, USA
- Terrence J Sejnowski
- Salk Institute for Biological Studies, La Jolla, California 92037, USA and Division of Biological Sciences, University of California at San Diego, La Jolla, California 92093, USA
- Eric Mjolsness
- Departments of Computer Science and Mathematics, and Institute for Genomics and Bioinformatics, University of California at Irvine, Irvine, California 92697, USA
15
Xu ZQJ, Crodelle J, Zhou D, Cai D. Maximum entropy principle analysis in network systems with short-time recordings. Phys Rev E 2019; 99:022409. PMID: 30934291. DOI: 10.1103/PhysRevE.99.022409.
Abstract
In many realistic systems, maximum entropy principle (MEP) analysis provides an effective characterization of the probability distribution of network states. However, implementing the MEP analysis in general requires a sufficiently long data recording, e.g., hours of spiking recordings of neurons in neuronal networks. Whether the MEP analysis can be successfully applied to network systems with data from short recordings has yet to be fully addressed. In this work, we investigate the relationships underlying the probability distributions, moments, and effective interactions in the MEP analysis, and then show that, with short recordings of network dynamics, the MEP analysis can reconstruct probability distributions of network states that are much more accurate than those measured directly from the short recording. Using spike trains obtained from both Hodgkin-Huxley neuronal networks and electrophysiological experiments, we verify our results and demonstrate that MEP analysis provides a tool to investigate neuronal population coding properties for short recordings.
Affiliation(s)
- Zhi-Qin John Xu
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Jennifer Crodelle
- Courant Institute of Mathematical Sciences, New York University, New York, New York, USA
- Douglas Zhou
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, P.R. China
- David Cai
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Courant Institute of Mathematical Sciences, New York University, New York, New York, USA
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, P.R. China
- Center for Neural Science, New York University, New York, New York, USA
16
Xu ZQJ, Zhou D, Cai D. Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis. Entropy 2019; 21:76. PMID: 33266793. PMCID: PMC7514185. DOI: 10.3390/e21010076.
Abstract
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we establish a relation between the dynamical structure, i.e., the effective interactions in the MEP analysis, and the anatomical coupling structure of pulse-coupled networks. This relation helps explain how a sparse coupling structure can lead to sparse coding by effective interactions, and quantitatively displays how closely the dynamical structure follows the anatomical coupling structure.
Affiliation(s)
- Zhi-Qin John Xu: NYUAD Institute, New York University Abu Dhabi, Abu Dhabi 129188, UAE; Correspondence: (Z.-Q.J.X.); (D.Z.)
- Douglas Zhou: School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; Correspondence: (Z.-Q.J.X.); (D.Z.)
- David Cai: NYUAD Institute, New York University Abu Dhabi, Abu Dhabi 129188, UAE; School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA
17
Gardella C, Marre O, Mora T. Modeling the Correlated Activity of Neural Populations: A Review. Neural Comput 2018; 31:233-269. [PMID: 30576613 DOI: 10.1162/neco_a_01154] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The principles of neural encoding and computation are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons, as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and including or not latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
Affiliation(s)
- Christophe Gardella: Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France, and Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Olivier Marre: Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Thierry Mora: Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France
18
Nghiem TA, Telenczuk B, Marre O, Destexhe A, Ferrari U. Maximum-entropy models reveal the excitatory and inhibitory correlation structures in cortical neuronal activity. Phys Rev E 2018; 98:012402. [PMID: 30110850 DOI: 10.1103/physreve.98.012402] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Indexed: 01/20/2023]
Abstract
Maximum entropy models can be inferred from large datasets to uncover how collective dynamics emerge from local interactions. Here, such models are employed to investigate neurons recorded by multi-electrode arrays in the human and monkey cortex. Taking advantage of the separation of excitatory and inhibitory neuron types, we construct a model including this distinction. This approach allows us to shed light on differences between excitatory and inhibitory activity across different brain states such as wakefulness and deep sleep, in agreement with previous findings. Additionally, maximum entropy models can also unveil novel features of neuronal interactions, which are found to be dominated by pairwise interactions during wakefulness, but are population-wide during deep sleep. Overall, we demonstrate that maximum entropy models can be useful to analyze datasets with classified neuron types and to reveal the respective roles of excitatory and inhibitory neurons in organizing coherent dynamics in the cerebral cortex.
Affiliation(s)
- Trang-Anh Nghiem: Laboratory of Computational Neuroscience, Unité de Neurosciences, Information et Complexité, CNRS, Gif-sur-Yvette, France
- Bartosz Telenczuk: Laboratory of Computational Neuroscience, Unité de Neurosciences, Information et Complexité, CNRS, Gif-sur-Yvette, France
- Olivier Marre: Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France
- Alain Destexhe: Laboratory of Computational Neuroscience, Unité de Neurosciences, Information et Complexité, CNRS, Gif-sur-Yvette, France
- Ulisse Ferrari: Sorbonne Université, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France
19
Large Deviations Properties of Maximum Entropy Markov Chains from Spike Trains. ENTROPY 2018; 20:e20080573. [PMID: 33265662 PMCID: PMC7513098 DOI: 10.3390/e20080573] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 07/04/2018] [Accepted: 07/11/2018] [Indexed: 11/23/2022]
Abstract
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and to large deviations theory for the computational neuroscience community, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics for describing properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuations of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
20
A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data. ENTROPY 2018; 20:e20070489. [PMID: 33265579 PMCID: PMC7513015 DOI: 10.3390/e20070489] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/01/2018] [Revised: 06/15/2018] [Accepted: 06/19/2018] [Indexed: 11/22/2022]
Abstract
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting “Reliable Moment” model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
21
De Martino A, De Martino D. An introduction to the maximum entropy approach and its application to inference problems in biology. Heliyon 2018; 4:e00596. [PMID: 29862358 PMCID: PMC5968179 DOI: 10.1016/j.heliyon.2018.e00596] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2018] [Revised: 03/31/2018] [Accepted: 04/03/2018] [Indexed: 11/15/2022] Open
Abstract
A cornerstone of statistical inference, the maximum entropy framework is being increasingly applied to construct descriptive and predictive models of biological systems, especially complex biological networks, from large experimental data sets. Both its broad applicability and the success it obtained in different contexts hinge upon its conceptual simplicity and mathematical soundness. Here we try to concisely review the basic elements of the maximum entropy principle, starting from the notion of 'entropy', and describe its usefulness for the analysis of biological systems. As examples, we focus specifically on the problem of reconstructing gene interaction networks from expression data and on recent work attempting to expand our system-level understanding of bacterial metabolism. Finally, we highlight some extensions and potential limitations of the maximum entropy approach, and point to more recent developments that are likely to play a key role in the upcoming challenges of extracting structures and information from increasingly rich, high-throughput biological data.
Affiliation(s)
- Andrea De Martino: Soft & Living Matter Lab, Institute of Nanotechnology (NANOTEC), Consiglio Nazionale delle Ricerche, Rome, Italy; Italian Institute for Genomic Medicine (IIGM), Turin, Italy
22
Chicharro D, Pica G, Panzeri S. The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy. ENTROPY (BASEL, SWITZERLAND) 2018; 20:e20030169. [PMID: 33265260 PMCID: PMC7512685 DOI: 10.3390/e20030169] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Revised: 02/26/2018] [Accepted: 02/28/2018] [Indexed: 06/12/2023]
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
Affiliation(s)
- Daniel Chicharro: Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA; Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
- Giuseppe Pica: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
- Stefano Panzeri: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
23
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains. ENTROPY 2018; 20:e20010034. [PMID: 33265123 PMCID: PMC7512206 DOI: 10.3390/e20010034] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Revised: 01/03/2018] [Accepted: 01/05/2018] [Indexed: 11/16/2022]
Abstract
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.
24
Tavoni G, Ferrari U, Battaglia FP, Cocco S, Monasson R. Functional coupling networks inferred from prefrontal cortex activity show experience-related effective plasticity. Netw Neurosci 2017; 1:275-301. [PMID: 29855621 PMCID: PMC5874136 DOI: 10.1162/netn_a_00014] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2016] [Accepted: 04/24/2017] [Indexed: 01/28/2023] Open
Abstract
Functional coupling networks are widely used to characterize collective patterns of activity in neural populations. Here, we ask whether functional couplings reflect the subtle changes, such as in physiological interactions, believed to take place during learning. We infer functional network models reproducing the spiking activity of simultaneously recorded neurons in prefrontal cortex (PFC) of rats, during the performance of a cross-modal rule shift task (task epoch), and during preceding and following sleep epochs. A large-scale study of the 96 recorded sessions allows us to detect, in about 20% of sessions, effective plasticity between the sleep epochs. These coupling modifications are correlated with the coupling values in the task epoch, and are supported by a small subset of the recorded neurons, which we identify by means of an automatized procedure. These potentiated groups increase their coactivation frequency in the spiking data between the two sleep epochs and, hence, participate in putative experience-related cell assemblies. Study of the reactivation dynamics of the potentiated groups suggests a possible connection with behavioral learning. Reactivation is largely driven by hippocampal ripple events when the rule is not yet learned, and may be much more autonomous, and presumably sustained by the potentiated PFC network, when learning is consolidated. Cell assemblies coding for memories are widely believed to emerge through synaptic modification resulting from learning, yet their identification from activity is very arduous. We propose a functional-connectivity-based approach to identify experience-related cell assemblies from multielectrode recordings in vivo, and apply it to the prefrontal cortex activity of rats recorded during a task epoch and the preceding and following sleep epochs. We infer functional couplings between the recorded cells in each epoch. Comparisons of the functional coupling networks across the epochs allow us to identify effective potentiation between the two sleep epochs. The neurons supporting these potentiated interactions strongly coactivate during the task and subsequent sleep epochs, but not in the preceding sleep, and, hence, presumably belong to an experience-related cell assembly. Study of the reactivation of this assembly in response to hippocampal ripple inputs suggests possible relations between the stage of behavioral learning and memory consolidation mechanisms.
Affiliation(s)
- Gaia Tavoni: Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8550, Paris Sorbonne UPMC, Paris, France; Laboratoire de Physique Théorique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8549, Paris Sorbonne UPMC, Paris, France
- Ulisse Ferrari: Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8550, Paris Sorbonne UPMC, Paris, France; Laboratoire de Physique Théorique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8549, Paris Sorbonne UPMC, Paris, France
- Francesco P Battaglia: Donders Institute for Brain, Cognition and Behaviour, Radboud Universiteit, Nijmegen, The Netherlands
- Simona Cocco: Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8550, Paris Sorbonne UPMC, Paris, France
- Rémi Monasson: Laboratoire de Physique Théorique, Ecole Normale Supérieure, PSL Research and CNRS - UMR 8549, Paris Sorbonne UPMC, Paris, France
25
Savin C, Tkačik G. Maximum entropy models as a tool for building precise neural controls. Curr Opin Neurobiol 2017; 46:120-126. [DOI: 10.1016/j.conb.2017.08.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Revised: 07/31/2017] [Accepted: 08/03/2017] [Indexed: 12/27/2022]
26
Cessac B, Kornprobst P, Kraria S, Nasser H, Pamplona D, Portelli G, Viéville T. PRANAS: A New Platform for Retinal Analysis and Simulation. Front Neuroinform 2017; 11:49. [PMID: 28919854 PMCID: PMC5585572 DOI: 10.3389/fninf.2017.00049] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2016] [Accepted: 07/17/2017] [Indexed: 01/28/2023] Open
Abstract
The retina encodes visual scenes by trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new, freely accessible user-end software that helps to better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator that allows large-scale simulations while keeping strong biological plausibility, and a toolbox for the analysis of spike train population statistics. The statistical method (entropy maximization under constraints) takes into account both spatial and temporal correlations as constraints, making it possible to analyze the effects of memory on statistics. PRANAS also integrates a tool for computing and representing receptive fields in 3D (time-space). All these tools are accessible through a friendly graphical user interface. The most CPU-costly of them have been implemented to run in parallel.
Affiliation(s)
- Bruno Cessac: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
- Pierre Kornprobst: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
- Selim Kraria: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
- Hassan Nasser: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
- Daniela Pamplona: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
- Geoffrey Portelli: Biovision Team, Inria, Université Côte d'Azur, Sophia Antipolis, France
27
Network Inference and Maximum Entropy Estimation on Information Diagrams. Sci Rep 2017; 7:7062. [PMID: 28765522 PMCID: PMC5539257 DOI: 10.1038/s41598-017-06208-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2016] [Accepted: 06/08/2017] [Indexed: 11/30/2022] Open
Abstract
Maximum entropy estimation is of broad interest for inferring properties of systems across many disciplines. Using a recently introduced technique for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies, we show how this can be used to estimate the direct network connectivity between interacting units from observed activity. As a generic example, we consider phase oscillators and show that our approach is typically superior to simply using the mutual information. In addition, we propose a nonparametric formulation of connected informations, used to test the explanatory power of a network description in general. We give an illustrative example showing how this agrees with the existing parametric formulation, and demonstrate its applicability and advantages for resting-state human brain networks, for which we also discuss its direct effective connectivity. Finally, we generalize to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish significant advantages of this approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases.
28
Cocco S, Monasson R, Posani L, Tavoni G. Functional networks from inverse modeling of neural population activity. Curr Opin Syst Biol 2017. [DOI: 10.1016/j.coisb.2017.04.017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
29
Hahn G, Ponce-Alvarez A, Monier C, Benvenuti G, Kumar A, Chavane F, Deco G, Frégnac Y. Spontaneous cortical activity is transiently poised close to criticality. PLoS Comput Biol 2017; 13:e1005543. [PMID: 28542191 PMCID: PMC5464673 DOI: 10.1371/journal.pcbi.1005543] [Citation(s) in RCA: 57] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2016] [Revised: 06/08/2017] [Accepted: 04/26/2017] [Indexed: 11/19/2022] Open
Abstract
Brain activity displays a large repertoire of dynamics across the sleep-wake cycle and even during anesthesia. It was suggested that criticality could serve as a unifying principle underlying this diversity of dynamics. This view has been supported by the observation of spontaneous bursts of cortical activity with scale-invariant sizes and durations, known as neuronal avalanches, in recordings of mesoscopic cortical signals. However, the existence of neuronal avalanches in spiking activity has been equivocal, with studies reporting both its presence and absence. Here, we show that signs of criticality in spiking activity can change between synchronized and desynchronized cortical states. We analyzed the spontaneous activity in the primary visual cortex of the anesthetized cat and the awake monkey, and found that neuronal avalanches and thermodynamic indicators of criticality strongly depend on collective synchrony among neurons, LFP fluctuations, and behavioral state. We found that synchronized states are associated with criticality, a large dynamical repertoire, and prolonged epochs of eye closure, while desynchronized states are associated with sub-criticality, a reduced dynamical repertoire, and eyes-open conditions. Our results show that criticality in cortical dynamics is not stationary, but fluctuates during anesthesia and between different vigilance states.
Affiliation(s)
- Gerald Hahn: Unité de Neuroscience, Information et Complexité (UNIC), CNRS, Gif-sur-Yvette, France; Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Adrian Ponce-Alvarez: Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Cyril Monier: Unité de Neuroscience, Information et Complexité (UNIC), CNRS, Gif-sur-Yvette, France
- Arvind Kumar: Bernstein Center for Computational Neuroscience, Freiburg, Germany; Dept. of Computational Science and Technology, School of Computer Science and Communication, KTH, Royal Institute of Technology, Stockholm, Sweden
- Frédéric Chavane: Institut des Neurosciences de la Timone, CNRS, Marseille, France
- Gustavo Deco: Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats, Universitat Pompeu Fabra, Barcelona, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; School of Psychological Sciences, Monash University, Melbourne, Clayton, Victoria, Australia
- Yves Frégnac: Unité de Neuroscience, Information et Complexité (UNIC), CNRS, Gif-sur-Yvette, France
30
O'Donnell C, Gonçalves JT, Whiteley N, Portera-Cailliau C, Sejnowski TJ. The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data. Neural Comput 2016; 29:50-93. [PMID: 27870612 DOI: 10.1162/neco_a_00910] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (2^N). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, so that drastically smaller data sets and minimal computation time suffice. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools.
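The two statistics this abstract says the model matches, the distribution of the population rate and each neuron's conditional firing probability given that rate, are easy to compute empirically. The sketch below is an assumption-level rendering of that description, not the authors' code; the function name is invented for illustration.

```python
import numpy as np

def population_tracking_stats(spikes):
    """Empirical statistics of the kind the population tracking model matches.

    spikes: (T, N) binary array of binned spike trains. Returns:
      p_k:   distribution of the population rate k (number of co-active cells)
      p_i_k: probability that neuron i fires given population rate k,
             shape (N+1, N); rows for unobserved k are left at zero.
    """
    T, N = spikes.shape
    k = spikes.sum(axis=1).astype(int)              # population rate per bin
    p_k = np.bincount(k, minlength=N + 1) / T       # P(k)
    p_i_k = np.zeros((N + 1, N))
    for kk in range(N + 1):
        rows = spikes[k == kk]
        if len(rows):
            p_i_k[kk] = rows.mean(axis=0)           # P(s_i = 1 | k = kk)
    return p_k, p_i_k
```

Note the parameter count: (N + 1) rate probabilities plus (N + 1) x N conditional probabilities, on the order of N^2, matching the scaling claimed in the abstract.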
Affiliation(s)
- Cian O'Donnell: Department of Computer Science, University of Bristol, Bristol BS8 1UB, U.K., and Howard Hughes Medical Institute, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
- J Tiago Gonçalves: Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Departments of Neurology and Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, U.S.A.
- Nick Whiteley: School of Mathematics, University of Bristol, Bristol BS8 1UB, U.K.
- Carlos Portera-Cailliau: Departments of Neurology and Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, U.S.A.
- Terrence J Sejnowski: Howard Hughes Medical Institute, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Division of Biological Sciences, University of California at San Diego, La Jolla, CA 92093, U.S.A.
31
Error-Robust Modes of the Retinal Population Code. PLoS Comput Biol 2016; 12:e1005148. [PMID: 27855154 PMCID: PMC5113862 DOI: 10.1371/journal.pcbi.1005148] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2015] [Accepted: 09/15/2016] [Indexed: 01/23/2023] Open
Abstract
Across the nervous system, certain population spiking patterns are observed far more frequently than others. A hypothesis about this structure is that these collective activity patterns function as population codewords–collective modes–carrying information distinct from that of any single cell. We investigate this phenomenon in recordings of ∼150 retinal ganglion cells, the retina’s output. We develop a novel statistical model that decomposes the population response into modes; it predicts the distribution of spiking activity in the ganglion cell population with high accuracy. We found that the modes represent localized features of the visual stimulus that are distinct from the features represented by single neurons. Modes form clusters of activity states that are readily discriminated from one another. When we repeated the same visual stimulus, we found that the same mode was robustly elicited. These results suggest that retinal ganglion cells’ collective signaling is endowed with a form of error-correcting code–a principle that may hold in brain areas beyond retina. Neurons in most parts of the nervous system represent and process information in a collective fashion, yet the nature of this collective code is poorly understood. An important constraint placed on any such collective processing comes from the fact that individual neurons’ signaling is prone to corruption by noise. The information theory and engineering literatures have studied error-correcting codes that allow individual noise-prone coding units to “check” each other, forming an overall representation that is robust to errors. In this paper, we have analyzed the population code of one of the best-studied neural systems, the retina, and found that it is structured in a manner analogous to error-correcting schemes. 
Indeed, we found that the complex activity patterns over ~150 retinal ganglion cells, the output neurons of the retina, could be mapped onto collective code words, and that these code words represented precise visual information while suppressing noise. In order to analyze this coding scheme, we introduced a novel quantitative model of the retinal output that predicted neural activity patterns more accurately than existing state-of-the-art approaches.
32
Jacquin H, Rançon A. Resummed mean-field inference for strongly coupled data. Phys Rev E 2016; 94:042118. [PMID: 27841631] [DOI: 10.1103/physreve.94.042118]
Abstract
We present a resummed mean-field approximation for inferring the parameters of an Ising or a Potts model from empirical, noisy, one- and two-point correlation functions. Based on a resummation of a class of diagrams of the small correlation expansion of the log-likelihood, the method outperforms standard mean-field inference methods, even when they are regularized. The inference is stable with respect to sampling noise, contrary to previous approaches based on the small correlation expansion, on the Bethe free energy, or on the mean-field and Gaussian models. Because the method is mostly analytic, its computational cost remains very low: it requires only an iterative algorithm for N auxiliary variables that resorts to nothing beyond matrix inversions and multiplications. We test our algorithm on the Sherrington-Kirkpatrick model subjected to a random external field and large random couplings, and demonstrate that even without regularization the inference is stable across the whole phase diagram. In addition, the calculation leads to a consistent estimate of the entropy of the data and allows us to sample from the inferred distribution to obtain artificial data that are consistent with the empirical distribution.
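For orientation, the standard (naive) mean-field baseline that such resummed methods improve on can be sketched in a few lines: couplings are read off from the inverse of the connected correlation matrix. The data below are synthetic and purely illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ±1 spin data: 5 independent units, 20000 samples.
X = rng.choice([-1.0, 1.0], size=(20000, 5))

m = X.mean(axis=0)           # magnetizations <s_i>
C = np.cov(X, rowvar=False)  # connected correlations C_ij

# Naive mean-field (nMF) inversion: couplings are minus the inverse
# correlation matrix, with the diagonal discarded.
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)

# Fields from the mean-field self-consistency m_i = tanh(h_i + sum_j J_ij m_j).
h = np.arctanh(m) - J @ m
```

For independent data the inferred couplings come out near zero; regularized or resummed variants mainly change how C is inverted, not this overall structure.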
Affiliation(s)
- Hugo Jacquin
- Laboratoire de Physique Statistique, École Normale Supérieure, UMR CNRS 8550, 24 rue Lhomond, 75005 Paris, France
- A Rançon
- Université de Lyon, ENS de Lyon, Université Claude Bernard, CNRS, Laboratoire de Physique, F-69342 Lyon, France
33
Schneidman E. Towards the design principles of neural population codes. Curr Opin Neurobiol 2016; 37:133-140. [PMID: 27016639] [DOI: 10.1016/j.conb.2016.03.001]
Abstract
The ability to record the joint activity of large groups of neurons would allow for direct study of information representation and computation at the level of whole circuits in the brain. The combinatorial space of potential population activity patterns and neural noise imply that it would be impossible to directly map the relations between stimuli and population responses. Understanding of large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low order relations among cells. We discuss the implications for large populations, and how such models allow for mapping the semantic organization of the neural codebook and stimulus space, and decoding.
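The minimal low-order models this review discusses can be fitted exactly for small populations by enumerating all 2^N binary activity patterns and matching first and second moments by gradient ascent. A hedged sketch (function name and synthetic data are illustrative, not from the review):

```python
import itertools
import numpy as np

def fit_pairwise_maxent(mu, Sigma, n_iter=3000, lr=0.5):
    """Fit h, J of P(x) ∝ exp(h·x + x·J·x/2) over x in {0,1}^N so that the
    model matches target means mu and second moments Sigma.
    Exhaustive enumeration, so only practical for small N."""
    N = len(mu)
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()
        mu_m = p @ states                          # model means
        Sig_m = states.T @ (p[:, None] * states)   # model second moments
        h += lr * (mu - mu_m)                      # moment-matching ascent
        dJ = lr * (Sigma - Sig_m)
        np.fill_diagonal(dJ, 0.0)                  # diagonal absorbed into h
        J += dJ
    return h, J, p, states

# Demo: match the moments of synthetic binarized spike data (3 cells).
rng = np.random.default_rng(1)
X = (rng.random((5000, 3)) < np.array([0.2, 0.3, 0.4])).astype(float)
h, J, p, states = fit_pairwise_maxent(X.mean(axis=0), X.T @ X / len(X))
```

Because the log-likelihood is concave in (h, J), this fixed-point iteration converges to the unique maximum-entropy solution; the scaling issue the review raises is that the enumeration grows as 2^N.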
Affiliation(s)
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel.
34

35
Abstract
Various plasticity mechanisms, including experience-dependent, spontaneous, as well as homeostatic ones, continuously remodel neural circuits. Yet, despite fluctuations in the properties of single neurons and synapses, the behavior and function of neuronal assemblies are generally found to be very stable over time. This raises the important question of how plasticity is coordinated across the network. To address this, we investigated the stability of network activity in cultured rat hippocampal neurons recorded with high-density multielectrode arrays over several days. We used parametric models to characterize multineuron activity patterns and analyzed their sensitivity to changes. We found that the models exhibited sloppiness, a property where the model behavior is insensitive to changes in many parameter combinations, but very sensitive to a few. The activity of neurons with sloppy parameters showed faster and larger fluctuations than the activity of a small subset of neurons associated with sensitive parameters. Furthermore, parameter sensitivity was highly correlated with firing rates. Finally, we tested our observations from cell cultures on an in vivo recording from monkey visual cortex and we confirm that spontaneous cortical activity also shows hallmarks of sloppy behavior and firing rate dependence. Our findings suggest that a small subnetwork of highly active and stable neurons supports group stability, and that this endows neuronal networks with the flexibility to continuously remodel without compromising stability and function.
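A common way to quantify the sloppiness described here is through the eigenvalue spectrum of a model's Fisher information, which for exponential-family (maximum-entropy-style) models equals the covariance of the sufficient statistics: eigenvalues spanning many decades separate a few stiff parameter combinations from many sloppy ones. A rough, illustrative sketch on synthetic data (not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 5000, 8

# Synthetic binarized activity with a shared drive that correlates the cells.
common = rng.random(T) < 0.2
X = ((rng.random((T, N)) < 0.05) | common[:, None]).astype(float)

# For a model whose sufficient statistics are the single-cell activities
# {x_i}, the Fisher information at the fitted point is their covariance.
F = np.cov(X, rowvar=False)
eig = np.sort(np.linalg.eigvalsh(F))[::-1]
spread = eig[0] / eig[-1]  # large spread = stiff vs sloppy directions
```

Here the shared drive creates one stiff direction (roughly the population mean, carried by the highly active cells) and many nearly degenerate sloppy ones, mirroring the firing-rate dependence reported in the abstract.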
36
Cavagna A, Del Castello L, Dey S, Giardina I, Melillo S, Parisi L, Viale M. Short-range interactions versus long-range correlations in bird flocks. Phys Rev E 2015; 92:012705. [PMID: 26274201] [DOI: 10.1103/physreve.92.012705]
Abstract
Bird flocks are a paradigmatic example of collective motion. One of the prominent traits of flocking is the presence of long-range velocity correlations between individuals, which allow them to influence each other over large scales and maintain a high level of group coordination. A crucial question is what mutual interaction between birds generates such nontrivial correlations. Here we use the maximum entropy (ME) approach to infer the effective interactions between individuals from experimental data on natural flocks. Compared to previous studies, we make a significant step forward by retrieving the full functional dependence of the interaction on distance, and we find that it decays exponentially over a range of a few individuals. The fact that ME gives a short-range interaction even though its experimental input is the long-range correlation function shows that the method is able to discriminate the relevant information encoded in such correlations and single out a minimal number of effective parameters. Finally, we show how the method can be used to capture the degree of anisotropy of mutual interactions.
Affiliation(s)
- Andrea Cavagna
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Initiative for the Theoretical Sciences, The Graduate Center, 365 Fifth Avenue, New York, New York 10016, USA
- Lorenzo Del Castello
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Supravat Dey
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Irene Giardina
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Initiative for the Theoretical Sciences, The Graduate Center, 365 Fifth Avenue, New York, New York 10016, USA
- Stefania Melillo
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Leonardo Parisi
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
- Dipartimento di Informatica, Università Sapienza, 00198 Rome, Italy
- Massimiliano Viale
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy
- Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy
37

38
Large-scale spatiotemporal spike patterning consistent with wave propagation in motor cortex. Nat Commun 2015; 6:7169. [PMID: 25994554] [PMCID: PMC4443713] [DOI: 10.1038/ncomms8169]
Abstract
Aggregate signals in cortex are known to be spatiotemporally organized as propagating waves across the cortical surface, but it remains unclear whether the same is true for spiking activity in individual neurons. Furthermore, the functional interactions between cortical neurons are well documented but their spatial arrangement on the cortical surface has been largely ignored. Here we use a functional network analysis to demonstrate that a subset of motor cortical neurons in non-human primates spatially coordinate their spiking activity in a manner that closely matches wave propagation measured in the beta oscillatory band of the local field potential. We also demonstrate that sequential spiking of pairs of neurons contains task-relevant information that peaks when the neurons are spatially oriented along the wave axis. We hypothesize that the spatial anisotropy of spike patterning may reflect the underlying organization of motor cortex and may be a general property shared by other cortical areas. Aggregate signals in cortex are spatiotemporally organized as propagating waves across the cortical surface. Here the authors demonstrate that neurons in primary motor cortex of monkeys spatially coordinate their spiking activity in a manner that closely matches wave propagation.
39
Cayco-Gajic NA, Zylberberg J, Shea-Brown E. Triplet correlations among similarly tuned cells impact population coding. Front Comput Neurosci 2015; 9:57. [PMID: 26042024] [PMCID: PMC4435073] [DOI: 10.3389/fncom.2015.00057]
Abstract
Which statistical features of spiking activity matter for how stimuli are encoded in neural populations? A vast body of work has explored how firing rates in individual cells and correlations in the spikes of cell pairs impact coding. Recent experiments have shown evidence for the existence of higher-order spiking correlations, which describe simultaneous firing in triplets and larger ensembles of cells; however, little is known about their impact on encoded stimulus information. Here, we take a first step toward closing this gap. We vary triplet correlations in small (approximately 10 cell) neural populations while keeping single cell and pairwise statistics fixed at typically reported values. This connection with empirically observed lower-order statistics is important, as it places strong constraints on the level of triplet correlations that can occur. For each value of triplet correlations, we estimate the performance of the neural population on a two-stimulus discrimination task. We find that the allowed changes in the level of triplet correlations can significantly enhance coding, in particular if triplet correlations differ for the two stimuli. In this scenario, triplet correlations must be included in order to accurately quantify the functionality of neural populations. When both stimuli elicit similar triplet correlations, however, pairwise models provide relatively accurate descriptions of coding accuracy. We explain our findings geometrically via the skew that triplet correlations induce in population-wide distributions of neural responses. Finally, we calculate how many samples are necessary to accurately measure spiking correlations of this type, providing an estimate of the necessary recording times in future experiments.
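The triplet correlations studied here are, at the level of data, third-order central moments of the binarized spike trains. A minimal sketch of how one might estimate them (variable names and the common-input toy data are illustrative, not the authors' simulations):

```python
import numpy as np

def triplet_correlation(X, i, j, k):
    """Empirical third-order central moment E[(x_i-mu_i)(x_j-mu_j)(x_k-mu_k)]
    for binarized spike trains X of shape (time bins, cells)."""
    d = X - X.mean(axis=0)
    return float(np.mean(d[:, i] * d[:, j] * d[:, k]))

# Demo: a shared drive induces positive triplet correlations, while a
# rate-matched independent population has none.
rng = np.random.default_rng(4)
T = 50000
shared = rng.random(T) < 0.1
X = ((rng.random((T, 3)) < 0.05) | shared[:, None]).astype(float)
X_ind = (rng.random((T, 3)) < 0.145).astype(float)  # same mean rate, independent
```

As the abstract's final point suggests, long recordings matter: the variance of this estimator shrinks only as 1/T, so weak triplet correlations need many time bins to resolve.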
Affiliation(s)
- Joel Zylberberg
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
40
Horvát S, Czabarka É, Toroczkai Z. Reducing degeneracy in maximum entropy models of networks. Phys Rev Lett 2015; 114:158701. [PMID: 25933345] [DOI: 10.1103/physrevlett.114.158701]
Abstract
Based on Jaynes's maximum entropy principle, exponential random graphs provide a family of principled models that allow the prediction of network properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. Exploiting these nonlinear relationships, we here propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on examples.
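The log-concavity condition on the density of states is straightforward to check numerically for a discrete spectrum; a small sketch (illustrative only, and not the paper's transformation machinery):

```python
import numpy as np

def is_log_concave(g):
    """Discrete log-concavity test: g_k^2 >= g_{k-1} * g_{k+1} for all
    interior k. Degeneracy in exponential random graph models is tied to
    violations of this condition in the density of states."""
    g = np.asarray(g, dtype=float)
    return bool(np.all(g[1:-1] ** 2 >= g[:-2] * g[2:]))
```

Unimodal, binomial-like densities pass the test; bimodal densities of the kind associated with spontaneous symmetry breaking fail it.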
Affiliation(s)
- Szabolcs Horvát
- Department of Physics and Interdisciplinary Center for Network Science & Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Éva Czabarka
- Department of Mathematics, University of South Carolina, Columbia, South Carolina 29208, USA
- Zoltán Toroczkai
- Department of Physics and Interdisciplinary Center for Network Science & Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556, USA
41
Inferring synaptic structure in presence of neural interaction time scales. PLoS One 2015; 10:e0118412. [PMID: 25807389] [PMCID: PMC4373808] [DOI: 10.1371/journal.pone.0118412]
Abstract
Biological networks display a variety of activity patterns reflecting a web of interactions that is complex both in space and time. Yet inference methods have mainly focused on reconstructing, from the network’s activity, the spatial structure, by assuming equilibrium conditions or, more recently, a probabilistic dynamics with a single arbitrary time-step. Here we show that, under this latter assumption, the inference procedure fails to reconstruct the synaptic matrix of a network of integrate-and-fire neurons when the chosen time scale of interaction does not closely match the synaptic delay or when no single time scale for the interaction can be identified; such failure, moreover, exposes a distinctive bias of the inference method that can lead to infer as inhibitory the excitatory synapses with interaction time scales longer than the model’s time-step. We therefore introduce a new two-step method, that first infers through cross-correlation profiles the delay-structure of the network and then reconstructs the synaptic matrix, and successfully test it on networks with different topologies and in different activity regimes. Although step one is able to accurately recover the delay-structure of the network, thus getting rid of any a priori guess about the time scales of the interaction, the inference method introduces nonetheless an arbitrary time scale, the time-bin dt used to binarize the spike trains. We therefore analytically and numerically study how the choice of dt affects the inference in our network model, finding that the relationship between the inferred couplings and the real synaptic efficacies, albeit being quadratic in both cases, depends critically on dt for the excitatory synapses only, whilst being basically independent of it for the inhibitory ones.
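The two ingredients of the procedure described here, binarizing spike trains with a time-bin dt and reading off interaction delays from cross-correlation profiles, can be sketched as follows (synthetic spike times and function names are illustrative, not the paper's implementation):

```python
import numpy as np

def binarize(spike_times, t_max, dt):
    """0/1 spike train on bins of width dt; multiple spikes per bin collapse to 1."""
    bins = np.zeros(int(np.ceil(t_max / dt)), dtype=int)
    idx = (np.asarray(spike_times) / dt).astype(int)
    bins[idx[idx < len(bins)]] = 1
    return bins

def delay_by_crosscorr(x, y, max_lag):
    """Lag (in bins) at which y best follows x, from the raw cross-correlogram."""
    lags = np.arange(1, max_lag + 1)
    cc = [np.dot(x[:-lag], y[lag:]) for lag in lags]
    return int(lags[int(np.argmax(cc))])

# Demo: cell B echoes cell A 15 ms later; with 5 ms bins the delay is 3 bins.
rng = np.random.default_rng(2)
t_a = np.sort(rng.uniform(0, 100, 2000))  # presynaptic spike times (s)
t_b = t_a + 0.015                         # delayed postsynaptic echoes
a = binarize(t_a, 100.0, 0.005)
b = binarize(t_b, 100.1, 0.005)[:len(a)]
```

Step one of the paper's two-step method recovers such per-pair delays first, so that the coupling inference no longer depends on a single a priori interaction time scale; the residual dependence on the bin width dt is what the abstract's final analysis addresses.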
42
Mora T, Deny S, Marre O. Dynamical criticality in the collective activity of a population of retinal neurons. Phys Rev Lett 2015; 114:078105. [PMID: 25763977] [DOI: 10.1103/physrevlett.114.078105]
Abstract
Recent experimental results based on multielectrode and imaging techniques have reinvigorated the idea that large neural networks operate near a critical point, between order and disorder. However, evidence for criticality has relied on the definition of arbitrary order parameters, or on models that do not address the dynamical nature of network activity. Here we introduce a novel approach to assess criticality that overcomes these limitations, while encompassing and generalizing previous criteria. We find a simple model to describe the global activity of large populations of ganglion cells in the rat retina, and show that their statistics are poised near a critical point. Taking into account the temporal dynamics of the activity greatly enhances the evidence for criticality, revealing it where previous methods would not. The approach is general and could be used in other biological networks.
Affiliation(s)
- Thierry Mora
- Laboratoire de physique statistique, École normale supérieure, CNRS and UPMC, 24 rue Lhomond, 75005 Paris, France
- Stéphane Deny
- Institut de la Vision, INSERM and UPMC, 17 rue Moreau, 75012 Paris, France
- Olivier Marre
- Institut de la Vision, INSERM and UPMC, 17 rue Moreau, 75012 Paris, France
43
Eldawlatly S, Oweiss KG. Temporal precision in population-but not individual neuron-dynamics reveals rapid experience-dependent plasticity in the rat barrel cortex. Front Comput Neurosci 2014; 8:155. [PMID: 25505407] [PMCID: PMC4243556] [DOI: 10.3389/fncom.2014.00155]
Abstract
Cortical reorganization following sensory deprivation is characterized by alterations in the connectivity between neurons encoding spared and deprived cortical inputs. The extent to which this alteration depends on Spike Timing Dependent Plasticity (STDP), however, is largely unknown. We quantified changes in the functional connectivity between layer V neurons in the vibrissal primary somatosensory cortex (vSI) (barrel cortex) of rats following sensory deprivation. One week after chronic implantation of a microelectrode array in vSI, sensory-evoked activity resulting from mechanical deflections of individual whiskers was recorded (control data) after which two whiskers on the contralateral side were paired by sparing them while trimming all other whiskers on the rat's mystacial pad. The rats' environment was then enriched by placing novel objects in the cages to encourage exploratory behavior with the spared whiskers. Sensory-evoked activity in response to individual stimulation of spared whiskers and adjacent re-grown whiskers was then recorded under anesthesia 1–2 days and 6–7 days post-trimming (plasticity data). We analyzed spike trains within 100 ms of stimulus onset and confirmed previously published reports documenting changes in receptive field sizes in the spared whisker barrels. We analyzed the same data using Dynamic Bayesian Networks (DBNs) to infer the functional connectivity between the recorded neurons. We found that DBNs inferred from population responses to stimulation of each of the spared whiskers exhibited graded increase in similarity that was proportional to the pairing duration. A significant early increase in network similarity in the spared-whisker barrels was detected 1–2 days post pairing, but not when single neuron responses were examined during the same period. These results suggest that rapid reorganization of cortical neurons following sensory deprivation may be mediated by an STDP mechanism.
Affiliation(s)
- Seif Eldawlatly
- Department of Computer and Systems Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt
- Karim G Oweiss
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA; Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA; Department of Neuroscience, University of Florida, Gainesville, FL, USA; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA
44
Cofré R, Cessac B. Exact computation of the maximum-entropy potential of spiking neural-network models. Phys Rev E 2014; 89:052117. [PMID: 25353749] [DOI: 10.1103/physreve.89.052117]
Abstract
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
Affiliation(s)
- R Cofré
- NeuroMathComp Team, INRIA, and Laboratoire de Mathématiques J.A. Dieudonné, Université de Nice, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
- B Cessac
- NeuroMathComp Team, INRIA, and Laboratoire de Mathématiques J.A. Dieudonné, Université de Nice, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
45
Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains. Entropy 2014. [DOI: 10.3390/e16042244]
46
Cavagna A, Giardina I, Ginelli F, Mora T, Piovani D, Tavarone R, Walczak AM. Dynamical maximum entropy approach to flocking. Phys Rev E 2014; 89:042707. [PMID: 24827278] [DOI: 10.1103/physreve.89.042707]
Abstract
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Affiliation(s)
- Andrea Cavagna
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, Rome, Italy and Dipartimento di Fisica, Università Sapienza, Rome, Italy and Initiative for the Theoretical Sciences, The Graduate Center, The City University of New York, New York, USA
- Irene Giardina
- Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, Rome, Italy and Dipartimento di Fisica, Università Sapienza, Rome, Italy and Initiative for the Theoretical Sciences, The Graduate Center, The City University of New York, New York, USA
- Francesco Ginelli
- SUPA, Institute for Complex Systems and Mathematical Biology, King's College, University of Aberdeen, Aberdeen, UK
- Thierry Mora
- Laboratoire de physique statistique, CNRS, UPMC and École normale supérieure, Paris, France
- Duccio Piovani
- Dipartimento di Fisica, Università Sapienza, Rome, Italy
- Aleksandra M Walczak
- Laboratoire de physique théorique, CNRS, UPMC and École normale supérieure, Paris, France
47
Hamilton LS, Sohl-Dickstein J, Huth AG, Carels VM, Deisseroth K, Bao S. Optogenetic activation of an inhibitory network enhances feedforward functional connectivity in auditory cortex. Neuron 2014; 80:1066-76. [PMID: 24267655] [DOI: 10.1016/j.neuron.2013.08.017]
Abstract
The mammalian neocortex is a highly interconnected network of different types of neurons organized into both layers and columns. Overlaid on this structural organization is a pattern of functional connectivity that can be rapidly and flexibly altered during behavior. Parvalbumin-positive (PV+) inhibitory neurons, which are implicated in cortical oscillations and can change neuronal selectivity, may play a pivotal role in these dynamic changes. We found that optogenetic activation of PV+ neurons in the auditory cortex enhanced feedforward functional connectivity in the putative thalamorecipient circuit and in cortical columnar circuits. In contrast, stimulation of PV+ neurons induced no change in connectivity between sites in the same layers. The activity of PV+ neurons may thus serve as a gating mechanism to enhance feedforward, but not lateral or feedback, information flow in cortical circuits. Functionally, it may preferentially enhance the contribution of bottom-up sensory inputs to perception.
Affiliation(s)
- Liberty S Hamilton
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
48
Tkačik G, Marre O, Amodei D, Schneidman E, Bialek W, Berry MJ. Searching for collective behavior in a large network of sensory neurons. PLoS Comput Biol 2014; 10:e1003408. [PMID: 24391485] [PMCID: PMC3879139] [DOI: 10.1371/journal.pcbi.1003408]
Abstract
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models--being systematic extensions of the previously used pairwise Ising models--provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
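The global interaction in the "K-pairwise" model constrains the distribution of synchrony, i.e., the probability that exactly K cells spike in the same time bin. Estimating that distribution from binarized data is simple; a hedged sketch on synthetic data (not the paper's retinal recordings):

```python
import numpy as np

def synchrony_distribution(X):
    """P(K): fraction of time bins in which exactly K of the N cells spike,
    for binarized data X of shape (time bins, cells)."""
    N = X.shape[1]
    K = X.sum(axis=1).astype(int)
    return np.bincount(K, minlength=N + 1) / len(X)

# Independent cells give a near-binomial P(K); correlated populations show
# heavier tails, which is what the extra K-dependent term is fitted to capture.
rng = np.random.default_rng(5)
X_ind = (rng.random((20000, 40)) < 0.1).astype(int)
P = synchrony_distribution(X_ind)
```

Comparing this empirical P(K) against a pairwise model's prediction is one way to see why, at 40+ neurons, the pairwise constraints alone no longer suffice.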
Affiliation(s)
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Olivier Marre
- Institut de la Vision, INSERM U968, UPMC, CNRS U7210, CHNO Quinze-Vingts, Paris, France
- Department of Molecular Biology, Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Dario Amodei
- Department of Molecular Biology, Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, New Jersey, United States of America
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- William Bialek
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, New Jersey, United States of America
- Lewis–Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America
- Michael J. Berry
- Department of Molecular Biology, Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
49
Shemesh Y, Sztainberg Y, Forkosh O, Shlapobersky T, Chen A, Schneidman E. High-order social interactions in groups of mice. eLife 2013; 2:e00759. [PMID: 24015357] [PMCID: PMC3762363] [DOI: 10.7554/elife.00759]
Abstract
Social behavior in mammals is often studied in pairs under artificial conditions, yet groups may rely on more complicated social structures. Here, we use a novel system for tracking multiple animals in a rich environment to characterize the nature of group behavior and interactions, and show strongly correlated group behavior in mice. We have found that the minimal models that rely only on individual traits and pairwise correlations between animals are not enough to capture group behavior, but that models that include third-order interactions give a very accurate description of the group. These models allow us to infer social interaction maps for individual groups. Using this approach, we show that environmental complexity during adolescence affects the collective group behavior of adult mice, in particular altering the role of high-order structure. Our results provide new experimental and mathematical frameworks for studying group behavior and social interactions. DOI: http://dx.doi.org/10.7554/eLife.00759.001

All animals need to interact with others of the same species, even if it is only to mate. To date, social behavior has been studied mainly at two extremes: detailed observation of pairs, and studies of the collective behavior of large groups, such as flocks of birds. However, gaining an understanding of social behavior in mammals will require an approach that falls between these two extremes. It will be necessary to study animals in larger groups, rather than in pairs, but also to track individuals rather than looking at the activity of the group as a whole. Now, Shemesh et al. have developed a system that can track the behavior of each of four mice with high spatial and temporal resolution as they move around freely in an arena containing ramps, nest boxes, and barriers. Because mice are largely nocturnal, Shemesh et al. dyed the animals’ fur with compounds that produced different coloured fluorescence under ultraviolet light, and then employed an automated system to accurately track each mouse during 12 hr of darkness, over a number of days. Using these data it was possible to estimate the extent to which the behavior of the group is determined by the characteristics of individual mice and how much is determined by interactions between animals. Models based solely on the behavior of individuals could not accurately describe the behavior of the group. Surprisingly, neither could models that focused on interactions between pairs of mice. Only models that included interactions between three mice gave a good approximation of the observed behavior. This shows that, even in a small group, social behavior is determined by relatively complex interactions. Shemesh et al. also found that the behavior of the mice depended on the environment in which they had been raised. Animals that had lived in larger groups and in more interesting enclosures were influenced more by pairwise interactions, and less by three-way interactions, than mice that had been raised in a standard laboratory environment. This suggests that being raised in a complex environment strengthens mouse ‘individuality’. The approach developed by Shemesh et al. could be extended to study larger groups of animals and could also be used to examine the interplay between genes, environment and other factors in shaping social interactions. DOI: http://dx.doi.org/10.7554/eLife.00759.002
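The independent-versus-pairwise comparison described in this digest can be illustrated with a small, self-contained sketch. All data below are synthetic and all parameters illustrative (they are not the paper's data or methods): for a handful of binary variables, the pairwise maximum-entropy model can be fit exactly by enumerating all 2^N joint states and running gradient ascent on the log-likelihood, then compared to the independent model by Kullback-Leibler divergence.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Synthetic binarized data: rows = time bins, columns = 4 animals,
# 1 = animal present in some zone of the arena (illustrative only).
# A shared latent "group event" plants correlations beyond chance.
n, T = 4, 5000
latent = rng.random(T) < 0.2
data = ((rng.random((T, n)) < 0.25) | latent[:, None]).astype(int)

states = np.array(list(product([0, 1], repeat=n)))  # all 2^n patterns

def empirical_dist(data, states):
    """Empirical probability of each joint pattern."""
    idx = data @ (1 << np.arange(n)[::-1])
    counts = np.bincount(idx, minlength=len(states)).astype(float)
    return counts / counts.sum()

def fit_pairwise(p_emp, states, steps=5000, lr=0.1):
    """Gradient ascent on the pairwise max-ent log-likelihood.
    N is tiny, so we enumerate states exactly (no Boltzmann sampling)."""
    h = np.zeros(n)           # fields
    J = np.zeros((n, n))      # couplings (upper triangle used)
    pairs = states[:, :, None] * states[:, None, :]
    for _ in range(steps):
        E = states @ h + np.einsum('kij,ij->k', pairs, np.triu(J, 1))
        p = np.exp(E)
        p /= p.sum()
        # Moment matching: push model means/correlations toward the data's.
        h += lr * (p_emp @ states - p @ states)
        J += lr * np.triu(np.einsum('k,kij->ij', p_emp - p, pairs), 1)
    return p

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p_emp = empirical_dist(data, states)
# Independent model: product of the per-animal marginals.
p_ind = np.prod(np.where(states, data.mean(0), 1 - data.mean(0)), axis=1)
p_pair = fit_pairwise(p_emp, states)

print(f"KL(emp || independent) = {kl(p_emp, p_ind):.4f}")
print(f"KL(emp || pairwise)    = {kl(p_emp, p_pair):.4f}")
```

On data like this, the pairwise fit reduces the KL divergence substantially relative to the independent model; the paper's point is that for real mouse groups even this improvement is not enough, and third-order terms are needed.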
Affiliation(s)
- Yair Shemesh
- Department of Neurobiology , Weizmann Institute of Science , Rehovot , Israel
50
Trousdale J, Hu Y, Shea-Brown E, Josić K. A generative spike train model with time-structured higher order correlations. Front Comput Neurosci 2013; 7:84. [PMID: 23908626 PMCID: PMC3727174 DOI: 10.3389/fncom.2013.00084] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2013] [Accepted: 06/12/2013] [Indexed: 11/16/2022] Open
Abstract
Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
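The thinning-and-shift construction the abstract summarizes can be sketched in a few lines: a "mother" Poisson process is marked, each mother spike is copied to one subset of the child trains, and each copy is time-shifted. The marking probabilities, rate, and shifts below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gtas(rate, T, markings, shifts, rng):
    """Minimal thinning-and-shift sketch (after the GTaS idea).

    rate     : mother Poisson process rate (Hz)
    T        : duration (s)
    markings : list of (probability, tuple_of_train_indices); probabilities sum to 1
    shifts   : dict train_index -> fixed time shift (s); a simple illustrative
               choice (the full model allows random, subset-specific shifts)
    """
    n_spikes = rng.poisson(rate * T)
    mother = np.sort(rng.uniform(0, T, n_spikes))
    probs = [p for p, _ in markings]
    choice = rng.choice(len(markings), size=n_spikes, p=probs)
    n_trains = 1 + max(i for _, subset in markings for i in subset)
    trains = [[] for _ in range(n_trains)]
    for t, c in zip(mother, choice):
        for i in markings[c][1]:
            # Shifted copies may land slightly past T; fine for a sketch.
            trains[i].append(t + shifts.get(i, 0.0))
    return [np.sort(np.array(tr)) for tr in trains]

# Hypothetical parameters: three trains; a mother spike is copied to all
# three together with prob 0.2 (a triplet event), otherwise to one train.
markings = [(0.2, (0, 1, 2)), (0.3, (0,)), (0.3, (1,)), (0.2, (2,))]
shifts = {1: 0.002, 2: 0.005}   # train-specific delays of 2 and 5 ms
trains = gtas(rate=50.0, T=100.0, markings=markings, shifts=shifts, rng=rng)
for i, tr in enumerate(trains):
    print(f"train {i}: {len(tr)} spikes, rate ~ {len(tr) / 100.0:.1f} Hz")
```

Because independent thinning of a Poisson process is again Poisson, each child train is marginally Poisson, while the shared triplet marking plus the per-train shifts produce the time-structured higher-order correlations the model is built for.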
Affiliation(s)
- James Trousdale
- Department of Mathematics, University of Houston Houston, TX, USA