1
Nejatbakhsh A, Fumarola F, Esteki S, Toyoizumi T, Kiani R, Mazzucato L. Predicting the effect of micro-stimulation on macaque prefrontal activity based on spontaneous circuit dynamics. Phys Rev Res 2023;5:043211. PMID: 39669288; PMCID: PMC11636805; DOI: 10.1103/physrevresearch.5.043211. Indexed: 12/14/2024.
Abstract
A crucial challenge in the targeted manipulation of neural activity is to identify perturbation sites whose stimulation exerts significant downstream effects with high efficacy, a search currently carried out by labor-intensive and potentially harmful trial and error. Can one predict the effects of electrical stimulation on neural activity from the circuit dynamics observed during spontaneous periods? Here we show that the effects of single-site micro-stimulation on ensemble activity in an alert monkey's prefrontal cortex can be predicted solely from the ensemble's spontaneous activity. We first estimated the ensemble's causal flow from the directed functional interactions inferred during spontaneous periods using convergent cross-mapping, showing that it uncovers a causal hierarchy among the recording electrodes. Causal flow inferred at rest successfully predicts the spatiotemporal effects of micro-stimulation. We validated the computational features underlying causal flow using ground-truth data from recurrent neural network models, showing that the method is robust to noise and common inputs. A detailed comparison between convergent cross-mapping and alternative methods based on information theory reveals the advantages of the former in predicting perturbation effects. Our results elucidate the causal interactions within neural ensembles and will facilitate the design of intervention protocols and targeted circuit manipulations suitable for brain-machine interfaces.
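The convergent cross-mapping logic behind this abstract can be sketched in plain Python on a toy pair of coupled logistic maps (a standard test system, not the paper's data or implementation): when x unidirectionally drives y, the delay-embedded history of y carries enough information to recover x, while the reverse mapping fares worse. The coupling strength, embedding dimension, and series length below are illustrative choices.

```python
import math

def coupled_logistic(n, beta=0.2):
    # toy system: x evolves autonomously and unidirectionally drives y
    x, y = 0.4, 0.2
    xs, ys = [], []
    for _ in range(n):
        x, y = x * (3.8 - 3.8 * x), y * (3.5 - 3.5 * y - beta * x)
        xs.append(x)
        ys.append(y)
    return xs, ys

def cross_map_skill(source, target, E=2, tau=1):
    # CCM: delay-embed `source`, then estimate `target` from nearest
    # neighbours on the reconstructed manifold; high skill suggests that
    # `target` causally influences `source`
    start = (E - 1) * tau
    pts = [[source[t - k * tau] for k in range(E)]
           for t in range(start, len(source))]
    preds, actual = [], []
    for i, p in enumerate(pts):
        nbrs = sorted((math.dist(p, q), j)
                      for j, q in enumerate(pts) if j != i)[:E + 1]
        d0 = nbrs[0][0] or 1e-12
        w = [math.exp(-d / d0) for d, _ in nbrs]
        preds.append(sum(wi * target[start + j]
                         for wi, (_, j) in zip(w, nbrs)) / sum(w))
        actual.append(target[start + i])
    # Pearson correlation between cross-mapped estimates and the truth
    m = len(preds)
    mp, ma = sum(preds) / m, sum(actual) / m
    cov = sum((a - ma) * (b - mp) for a, b in zip(actual, preds))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((b - mp) ** 2 for b in preds))
    return cov / (sa * sp)

xs, ys = coupled_logistic(800)
skill_x_from_My = cross_map_skill(ys, xs)  # x drives y: should be high
skill_y_from_Mx = cross_map_skill(xs, ys)  # reverse direction: lower
```

The asymmetry between the two skills is the directional signature the paper exploits to rank perturbation sites.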
Affiliation(s)
- Amin Nejatbakhsh
- Center for Theoretical Neuroscience, Columbia University, New York, New York 10027, USA
- Francesco Fumarola
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Saleh Esteki
- Center for Neural Science, New York University, New York, New York 10003, USA
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, New York 10003, USA
- Department of Psychology, New York University, New York, New York 10003, USA
- Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403, USA
2
Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky EJ. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. Nat Commun 2024;15:7964. PMID: 39261491; PMCID: PMC11390888; DOI: 10.1038/s41467-024-52304-7. Received: 10/18/2023; accepted: 08/29/2024; indexed: 09/13/2024.
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
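The likelihood-plus-prior logic of the reconstruction method can be caricatured in a few lines. In the sketch below, a hypothetical fixed linear "retina" stands in for the fitted RGC likelihood and a simple Gaussian (ridge) prior stands in for the paper's learned natural-image denoiser prior; the MAP estimate then reduces to penalized least squares, solved here by gradient descent. All matrices and numbers are illustrative.

```python
def map_reconstruct(W, r, lam=0.01, lr=0.1, steps=800):
    # MAP estimate: minimize ||r - W s||^2 + lam * ||s||^2 over stimulus s
    m, n = len(W), len(W[0])
    s = [0.0] * n
    for _ in range(steps):
        resid = [sum(W[i][j] * s[j] for j in range(n)) - r[i]
                 for i in range(m)]
        grad = [2 * sum(W[i][j] * resid[i] for i in range(m)) + 2 * lam * s[j]
                for j in range(n)]
        s = [sj - lr * g for sj, g in zip(s, grad)]
    return s

# toy "retina": 6 cells with fixed receptive fields over a 4-pixel stimulus
W = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0.5, 0.5, 0.5, 0.5],
     [0.5, -0.5, 0.5, -0.5]]
stimulus = [0.2, -0.5, 0.8, 0.1]
responses = [sum(wi * si for wi, si in zip(row, stimulus)) for row in W]
s_hat = map_reconstruct(W, responses)
```

With a small ridge penalty and noiseless responses, the reconstruction lands close to the true stimulus; the paper's contribution is precisely to replace the trivial prior with one embedded in a denoising network.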
Affiliation(s)
- Eric G Wu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA
- Alex R Gogliettino
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA
- Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P Simoncelli
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E J Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA
3
Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky E. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. bioRxiv [Preprint] 2024:2023.08.12.552902. PMID: 37645934; PMCID: PMC10462030; DOI: 10.1101/2023.08.12.552902. Indexed: 08/31/2023.
4
Wei 魏赣超 G, Tajik Mansouri زینب تاجیک منصوری Z, Wang 王晓婧 X, Stevenson IH. Calibrating Bayesian decoders of neural spiking activity. J Neurosci 2024;44:e2158232024. PMID: 38538143; PMCID: PMC11063820; DOI: 10.1523/jneurosci.2158-23.2024. Received: 11/17/2023; revised: 01/29/2024; accepted: 03/11/2024; indexed: 05/03/2024.
Abstract
Accurately decoding external variables from observations of neural activity is a major challenge in systems neuroscience. Bayesian decoders, which provide probabilistic estimates, are some of the most widely used. Here we show how, in many common settings, the probabilistic predictions made by traditional Bayesian decoders are overconfident. That is, the estimates for the decoded stimulus or movement variables are more certain than they should be. We then show how Bayesian decoding with latent variables, taking account of low-dimensional shared variability in the observations, can improve calibration, although additional correction for overconfidence is still needed. Using data from males, we examine (1) decoding the direction of grating stimuli from spike recordings in the primary visual cortex in monkeys, (2) decoding movement direction from recordings in the primary motor cortex in monkeys, (3) decoding natural images from multiregion recordings in mice, and (4) decoding position from hippocampal recordings in rats. For each setting, we characterize the overconfidence, and we describe a possible method to correct miscalibration post hoc. Properly calibrated Bayesian decoders may alter theoretical results on probabilistic population coding and lead to brain-machine interfaces that more accurately reflect confidence levels when identifying external variables.
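The notion of (mis)calibration at the heart of this abstract can be illustrated with a toy reliability check: a decoder that systematically sharpens its reported posteriors is overconfident, and binning reported confidence against empirical frequency exposes it. The specific overconfident transform below is purely illustrative, not the paper's decoding model.

```python
import random

def ece(preds, outcomes, n_bins=10):
    # expected calibration error: bin trials by reported confidence and
    # compare the mean reported probability with the empirical frequency
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(preds, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))
    total = len(preds)
    err = 0.0
    for b in bins:
        if b:
            mp = sum(p for p, _ in b) / len(b)
            mo = sum(o for _, o in b) / len(b)
            err += len(b) / total * abs(mp - mo)
    return err

random.seed(0)
n = 20000
true_p = [random.random() for _ in range(n)]
outcome = [1 if random.random() < p else 0 for p in true_p]
# an "overconfident" decoder: pushes probabilities toward 0 or 1
over_p = [p * p / (p * p + (1 - p) ** 2) for p in true_p]
ece_true = ece(true_p, outcome)
ece_over = ece(over_p, outcome)
```

A post-hoc correction of the kind the paper describes amounts to finding a transform that maps the overconfident probabilities back toward the diagonal of this reliability diagram.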
Affiliation(s)
- Ganchao Wei 魏赣超
- Department of Statistical Science, Duke University, Durham, North Carolina 27708
- Ian H Stevenson
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut 06269
- Connecticut Institute for Brain and Cognitive Science, University of Connecticut, Storrs, Connecticut 06269
5
Liang T, Brinkman BAW. Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances. Phys Rev E 2024;109:044404. PMID: 38755896; DOI: 10.1103/physreve.109.044404. Received: 11/08/2023; accepted: 02/29/2024; indexed: 05/18/2024.
Abstract
Statistically inferred neuronal connections from observed spike train data are often skewed from ground truth by factors such as model mismatch, unobserved neurons, and limited data. Spike train covariances, sometimes referred to as "functional connections," are often used as a proxy for the connections between pairs of neurons, but they reflect statistical relationships between neurons, not anatomical connections. Moreover, covariances are not causal: spiking activity is correlated in both the past and the future, whereas neurons respond only to synaptic inputs in the past. Connections inferred by maximum likelihood, by contrast, can be constrained to be causal. In this work, however, we show that connections inferred in spontaneously active networks modeled by stochastic leaky integrate-and-fire networks strongly correlate with the covariances between neurons, and may reflect noncausal relationships, when many neurons are unobserved or when neurons are weakly coupled. This phenomenon occurs across different network structures, including random networks and balanced excitatory-inhibitory networks. We use a combination of simulations and a mean-field analysis with fluctuation corrections to elucidate the relationships between spike train covariances, inferred synaptic filters, and ground-truth connections in partially observed networks.
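The contrast between symmetric covariances and causally constrained estimates can be sketched with a two-unit linear autoregressive toy model (a stand-in for the paper's stochastic leaky integrate-and-fire networks): unit 1 drives unit 2 with no reverse connection, yet the equal-time covariance between the two is nonzero, while a lagged least-squares fit recovers the directed weights. All coefficients are illustrative.

```python
import math
import random

random.seed(0)
n = 6000
a, b = 0.0, 0.0
xs1, xs2 = [], []
for _ in range(n):
    # unit 1 evolves autonomously; unit 2 receives input from unit 1 only
    a_new = 0.5 * a + random.gauss(0, 1)
    b_new = 0.5 * b + 0.5 * a + random.gauss(0, 1)
    a, b = a_new, b_new
    xs1.append(a)
    xs2.append(b)

def lagged_fit(target):
    # least-squares fit of target(t+1) on [x1(t), x2(t)]: a causal model
    X = list(zip(xs1[:-1], xs2[:-1]))
    y = target[1:]
    s11 = sum(u * u for u, _ in X)
    s22 = sum(v * v for _, v in X)
    s12 = sum(u * v for u, v in X)
    b1 = sum(u * yy for (u, _), yy in zip(X, y))
    b2 = sum(v * yy for (_, v), yy in zip(X, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

w_into_1 = lagged_fit(xs1)   # true weights (0.5, 0.0): no 2 -> 1 connection
w_into_2 = lagged_fit(xs2)   # true weights (0.5, 0.5): the 1 -> 2 connection

# equal-time covariance is symmetric and nonzero despite one-way coupling
m1, m2 = sum(xs1) / n, sum(xs2) / n
cov12 = sum((u - m1) * (v - m2) for u, v in zip(xs1, xs2)) / n
```

In this fully observed, well-specified toy the causal fit cleanly separates directions; the paper's point is that subsampling and weak coupling erode exactly this separation.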
Affiliation(s)
- Tong Liang
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
6
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024;18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053. Received: 08/05/2023; accepted: 01/09/2024; indexed: 02/15/2024.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding), as well as for linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling the nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and to decoding transient neuronal sensitivity, as well as to linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
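The core issue the review addresses, that a GLM with fixed weights cannot track nonstationary sensitivity, can be sketched with a toy Poisson GLM whose true stimulus weight changes halfway through a session; refitting per epoch (the crudest possible time-varying extension) recovers the change. The rates, weights, and fitting schedule below are illustrative, not drawn from any of the reviewed models.

```python
import math
import random

random.seed(1)

def sample_poisson(lam):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def simulate_epoch(n, w_true, b_true=-0.5):
    # Poisson GLM data: spike count ~ Poisson(exp(b + w * x))
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [sample_poisson(math.exp(b_true + w_true * xi)) for xi in x]
    return x, y

def fit_glm(x, y, lr=0.05, steps=700):
    # gradient ascent on the mean Poisson log-likelihood
    b, w = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        mu = [math.exp(b + w * xi) for xi in x]
        b += lr * sum(yi - mi for yi, mi in zip(y, mu)) / n
        w += lr * sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x)) / n
    return b, w

# the neuron's stimulus sensitivity changes halfway through the session
x_early, y_early = simulate_epoch(1500, 0.2)
x_late, y_late = simulate_epoch(1500, 1.0)
_, w_early = fit_glm(x_early, y_early)
_, w_late = fit_glm(x_late, y_late)
```

A single GLM fit to the pooled data would average these two regimes away; the time-varying extensions surveyed in the review replace the hard split with smoothly evolving weights.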
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
7
Shomali SR, Rasuli SN, Ahmadabadi MN, Shimazaki H. Uncovering hidden network architecture from spiking activities using an exact statistical input-output relation of neurons. Commun Biol 2023;6:169. PMID: 36792689; PMCID: PMC9932086; DOI: 10.1038/s42003-023-04511-z. Received: 02/03/2022; accepted: 01/20/2023; indexed: 02/17/2023.
Abstract
Identifying network architecture from observed neural activities is crucial in neuroscience studies. A key requirement is knowledge of the statistical input-output relation of single neurons in vivo. By utilizing an exact analytical solution of the spike timing for leaky integrate-and-fire neurons under noisy inputs balanced near the threshold, we construct a framework that links synaptic type, strength, and spiking nonlinearity with the statistics of neuronal population activity. The framework explains structured pairwise and higher-order interactions of neurons receiving common inputs under different architectures. We compared the theoretical predictions with the activity of monkey and mouse V1 neurons and found that excitatory inputs given to pairs explained the observed sparse activity characterized by strong negative triple-wise interactions, thereby ruling out the alternative explanation by shared inhibition. Moreover, we showed that the strong interactions are a signature of excitatory rather than inhibitory inputs whenever the spontaneous rate is low. We present a guide map of neural interactions that helps researchers specify the hidden neuronal motifs underlying the interactions observed in empirical data.
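The triple-wise interaction this abstract refers to can be made concrete with an exact log-linear calculation on three binary neurons: for independent neurons the interaction θ₁₂₃ vanishes identically, while a shared input (modeled here, purely for illustration, as a mixture over a common low/high state) produces a nonzero θ₁₂₃. In this particular toy the value comes out negative; the paper's contribution is the systematic map from motifs to such interaction signatures.

```python
import math
from itertools import product

def theta123(p):
    # triple-wise interaction of the log-linear (maximum entropy) expansion:
    # sum over the 8 binary states of (-1)^(number of zeros) * log p(state)
    t = 0.0
    for state in product((0, 1), repeat=3):
        sign = (-1) ** (3 - sum(state))
        t += sign * math.log(p[state])
    return t

def independent(rates):
    # exact joint distribution of three independent binary neurons
    return {s: math.prod(r if b else 1 - r for r, b in zip(rates, s))
            for s in product((0, 1), repeat=3)}

# independent neurons: theta_123 is exactly zero
p_ind = independent((0.2, 0.4, 0.6))

# shared input: an equal mixture of a "low" and a "high" common state
p_low = independent((0.2, 0.2, 0.2))
p_high = independent((0.6, 0.6, 0.6))
p_shared = {s: 0.5 * p_low[s] + 0.5 * p_high[s] for s in p_low}
```

Because the probabilities are computed exactly rather than sampled, the vanishing of θ₁₂₃ in the independent case holds to floating-point precision.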
Affiliation(s)
- Safura Rashid Shomali
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746, Iran
- Seyyed Nader Rasuli
- School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5531, Iran
- Department of Physics, University of Guilan, Rasht 41335-1914, Iran
- Majid Nili Ahmadabadi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14395-515, Iran
- Hideaki Shimazaki
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Hokkaido 060-0812, Japan
8
Ingrosso A, Goldt S. Data-driven emergence of convolutional structure in neural networks. Proc Natl Acad Sci U S A 2022;119:e2201854119. PMID: 36161906; PMCID: PMC9546588; DOI: 10.1073/pnas.2201854119. Received: 02/03/2022; accepted: 08/12/2022; indexed: 11/18/2022.
Abstract
Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus crucial in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry, and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully connected network has so far proven elusive. Here we show how initially fully connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task. By carefully designing data models for the visual scene, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs, which has long been recognized as the hallmark of natural images. We provide an analytical and numerical characterization of the pattern formation mechanism responsible for this phenomenon in a simple model and find an unexpected link between receptive field formation and tensor decomposition of higher-order input correlations. These results provide a perspective on the development of low-level feature detectors in various sensory modalities and pave the way for studying the impact of higher-order statistics on learning in neural networks.
Affiliation(s)
- Alessandro Ingrosso
- Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics, 34151 Trieste, Italy
- Sebastian Goldt
- Department of Physics, International School of Advanced Studies, 34136 Trieste, Italy
9
Triplett MA, Goodhill GJ. Inference of multiplicative factors underlying neural variability in calcium imaging data. Neural Comput 2022;34:1143-1169. PMID: 35344990; DOI: 10.1162/neco_a_01492. Received: 12/22/2021; accepted: 01/11/2022; indexed: 11/04/2022.
Abstract
Understanding brain function requires disentangling the high-dimensional activity of populations of neurons. Calcium imaging is an increasingly popular technique for monitoring such neural activity, but computational tools for interpreting extracted calcium signals are lacking. While there has been a substantial development of factor-analysis-type methods for neural spike train analysis, similar methods targeted at calcium imaging data are only beginning to emerge. Here we develop a flexible modeling framework that identifies low-dimensional latent factors in calcium imaging data with distinct additive and multiplicative modulatory effects. Our model includes spike-and-slab sparse priors that regularize additive factor activity and gaussian process priors that constrain multiplicative effects to vary only gradually, allowing for the identification of smooth and interpretable changes in multiplicative gain. These factors are estimated from the data using a variational expectation-maximization algorithm that requires a differentiable reparameterization of both continuous and discrete latent variables. After demonstrating our method on simulated data, we apply it to experimental data from the zebrafish optic tectum, uncovering low-dimensional fluctuations in multiplicative excitability that govern trial-to-trial variation in evoked responses.
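A stripped-down version of the multiplicative-factor idea: if trial-varying responses are (noisily) the product of a per-neuron tuning weight and a shared time-varying gain, a rank-1 alternating least-squares fit recovers the gain time course up to scale. This deliberately replaces the paper's variational EM, spike-and-slab priors, and Gaussian process smoothness constraints with the simplest possible estimator; every number below is illustrative.

```python
import math
import random

random.seed(0)
T, N = 120, 8
gain = [1.0 + 0.5 * math.sin(t / 10.0) for t in range(T)]  # slow shared gain
tuning = [0.5 + 0.1 * i for i in range(N)]                 # per-neuron weight
data = [[tuning[i] * gain[t] + random.gauss(0, 0.05) for t in range(T)]
        for i in range(N)]

# rank-1 alternating least squares: data[i][t] ~ w[i] * g[t]
g = [1.0] * T
for _ in range(30):
    w = [sum(data[i][t] * g[t] for t in range(T)) /
         sum(gt * gt for gt in g) for i in range(N)]
    g = [sum(data[i][t] * w[i] for i in range(N)) /
         sum(wi * wi for wi in w) for t in range(T)]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    c = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return c / math.sqrt(sum((a - mu) ** 2 for a in u) *
                         sum((b - mv) ** 2 for b in v))
```

The recovered gain correlates strongly with the ground truth; the paper's machinery handles the much harder setting with additive factors, sparsity, and indirect calcium observations on top of this.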
Affiliation(s)
- Marcus A Triplett
- Queensland Brain Institute and School of Mathematics and Physics, University of Queensland, St Lucia, QLD 4072, Australia
- Geoffrey J Goodhill
- Queensland Brain Institute and School of Mathematics and Physics, University of Queensland, St Lucia, QLD 4072, Australia
10
Papana A. Connectivity analysis for multivariate time series: correlation vs. causality. Entropy (Basel) 2021;23:1570. PMID: 34945876; PMCID: PMC8700128; DOI: 10.3390/e23121570. Received: 10/24/2021; revised: 11/17/2021; accepted: 11/24/2021; indexed: 12/16/2022.
Abstract
The study of the interdependence relationships among the variables of an examined system is of great importance and remains a challenging task. There are two distinct cases of interdependence. In the first case, the variables evolve in synchrony, connections are undirected, and connectivity is examined with symmetric measures, such as correlation. In the second case, one variable drives another, and they are connected by a causal relationship; directed connections therefore require the interrelationships to be determined with causality measures. The main open question is the following: can symmetric correlation measures or directional causality measures be applied to infer the connectivity network of an examined system? Using simulations, we demonstrate the performance of different connectivity measures in cases of contemporaneous and/or temporal dependencies. The results suggest that correlation measures become unreliable when temporal dependencies exist in the data. Causality measures, on the other hand, do not spuriously indicate causal effects when the data present only contemporaneous dependencies. Finally, the necessity of introducing effective instantaneous causality measures is highlighted, since they are able to handle both contemporaneous and causal effects at the same time. Results based on instantaneous causality measures are promising; however, further investigation is required to achieve an overall satisfactory performance.
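A toy example of why symmetric, equal-time correlation can miss a directed, lagged dependence: here y is driven by x two steps in the past, so the zero-lag correlation is near zero while the lagged cross-correlation is large in the causal direction only. The coupling lag and coefficients are illustrative, not taken from the paper's simulation systems.

```python
import math
import random

random.seed(3)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
# y depends on x two steps in the past: a purely temporal, directed coupling
y = [0.9 * x[t - 2] + random.gauss(0, 0.5) if t >= 2
     else random.gauss(0, 0.5) for t in range(n)]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    c = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return c / math.sqrt(sum((a - mu) ** 2 for a in u) *
                         sum((b - mv) ** 2 for b in v))

zero_lag = corr(x, y)          # near 0: equal-time correlation misses it
x_leads = corr(x[:-2], y[2:])  # large: x(t) predicts y(t+2)
y_leads = corr(y[:-2], x[2:])  # near 0: y carries no info about future x
```

The asymmetry between `x_leads` and `y_leads` is the raw ingredient that causality measures formalize, and the failure of `zero_lag` illustrates the paper's warning about purely contemporaneous measures.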
Affiliation(s)
- Angeliki Papana
- Department of Economics, University of Macedonia, 54636 Thessaloniki, Greece
11
Sokoloski S, Aschner A, Coen-Cagli R. Modelling the neural code in large populations of correlated neurons. eLife 2021;10:e64615. PMID: 34608865; PMCID: PMC8577837; DOI: 10.7554/elife.64615. Received: 11/05/2020; accepted: 10/01/2021; indexed: 01/02/2023.
Abstract
Neurons respond selectively to stimuli and thereby define a code that associates stimuli with population response patterns. Certain correlations within population responses (noise correlations) significantly impact the information content of the code, especially in large populations. Understanding the neural code thus necessitates response models that quantify the coding properties of modelled populations while fitting large-scale neural recordings and capturing noise correlations. In this paper, we propose a class of response models based on mixture models and exponential families. We show how to fit our models with expectation-maximization, and that they capture diverse variability and covariability in recordings of macaque primary visual cortex. We also show how they facilitate accurate Bayesian decoding, provide a closed-form expression for the Fisher information, and are compatible with theories of probabilistic population coding. Our framework could allow researchers to quantitatively validate the predictions of neural coding theories against both large-scale neural recordings and cognitive performance.
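The link between mixture models and noise correlations can be seen in a minimal toy: two neurons that are conditionally independent Poisson given a shared binary gain state nevertheless show positive count covariance and super-Poisson (Fano factor > 1) variability. This sketches the modeling idea only, not the paper's conditional exponential-family construction; all rates are illustrative.

```python
import math
import random

random.seed(7)

def sample_poisson(lam):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

n = 4000
base = (5.0, 5.0)
counts = []
for _ in range(n):
    # shared latent state scales both neurons' rates on each trial
    g = 0.5 if random.random() < 0.5 else 1.5
    counts.append(tuple(sample_poisson(g * b) for b in base))

y1 = [c[0] for c in counts]
y2 = [c[1] for c in counts]
m1, m2 = sum(y1) / n, sum(y2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / n
fano1 = sum((a - m1) ** 2 for a in y1) / n / m1
```

Marginalizing over the latent state turns conditionally independent responses into correlated, overdispersed ones, which is why mixture components can soak up exactly the covariability that plain independent-Poisson models miss.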
Affiliation(s)
- Sacha Sokoloski
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Amir Aschner
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
12
Weber AI, Shea-Brown E, Rieke F. Identification of multiple noise sources improves estimation of neural responses across stimulus conditions. eNeuro 2021;8:ENEURO.0191-21.2021. PMID: 34083382; PMCID: PMC8260275; DOI: 10.1523/eneuro.0191-21.2021. Received: 02/11/2021; accepted: 05/10/2021; indexed: 11/21/2022.
Abstract
Most models of neural responses are constructed to reproduce the average response to inputs but lack the flexibility to capture observed variability in responses. The origins and structure of this variability have significant implications for how information is encoded and processed in the nervous system, both by limiting information that can be conveyed and by determining processing strategies that are favorable for minimizing its negative effects. Here, we present a new modeling framework that incorporates multiple sources of noise to better capture observed features of neural response variability across stimulus conditions. We apply this model to retinal ganglion cells at two different ambient light levels and demonstrate that it captures the full distribution of responses. Further, the model reveals light level-dependent changes that could not be seen with previous models, showing both large changes in rectification of nonlinear circuit elements and systematic differences in the contributions of different noise sources under different conditions.
Affiliation(s)
- Alison I Weber
- Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
13
Kim G, Jang J, Paik SB. Periodic clustering of simple and complex cells in visual cortex. Neural Netw 2021;143:148-160. PMID: 34146895; DOI: 10.1016/j.neunet.2021.06.002. Received: 07/06/2020; revised: 05/31/2021; accepted: 06/01/2021; indexed: 10/21/2022.
Abstract
Neurons in the primary visual cortex (V1) are often classified as simple or complex cells, but it is debated whether these are discrete hierarchical classes of neurons or whether they represent a continuum of variation within a single class of cells. Here we show that simple and complex cells may arise from common feedforward projections from the retina. From analysis of cortical receptive fields in cats, we provide evidence that simple and complex cells originate from the periodic variation of ON-OFF segregation in the feedforward projection of retinal mosaics, by which they organize into periodic clusters in V1. Consistent with this model prediction, we observed in cat data that clusters of simple and complex receptive fields correlate topographically with orientation maps. Our results suggest that simple and complex cells are not two distinct neural populations but arise from common retinal afferents, simultaneously with orientation tuning.
Affiliation(s)
- Gwangsu Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Jaeson Jang
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Se-Bum Paik
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea.

14
Sachdeva PS, Livezey JA, Dougherty ME, Gu BM, Berke JD, Bouchard KE. Improved inference in coupling, encoding, and decoding models and its consequence for neuroscientific interpretation. J Neurosci Methods 2021; 358:109195. [PMID: 33905791 DOI: 10.1016/j.jneumeth.2021.109195] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 04/08/2021] [Accepted: 04/10/2021] [Indexed: 10/21/2022]
Abstract
BACKGROUND A central goal of systems neuroscience is to understand the relationships amongst constituent units in neural populations, and their modulation by external factors, using high-dimensional and stochastic neural recordings. Parametric statistical models (e.g., coupling, encoding, and decoding models) play an instrumental role in accomplishing this goal. However, extracting conclusions from a parametric model requires that it is fit using an inference algorithm capable of selecting the correct parameters and properly estimating their values. Traditional approaches to parameter inference have been shown to suffer from failures in both selection and estimation. The recent development of algorithms that ameliorate these deficiencies raises the question of whether past work relying on such inference procedures has produced inaccurate systems neuroscience models, thereby impairing their interpretation. NEW METHOD We used algorithms based on Union of Intersections (UoI), a statistical inference framework based on stability principles that is capable of improved selection and estimation. COMPARISON We fit functional coupling, encoding, and decoding models across a battery of neural datasets using both UoI and baseline inference procedures (e.g., ℓ1-penalized GLMs), and compared the structure of their fitted parameters. RESULTS Across recording modality, brain region, and task, we found that UoI inferred models with increased sparsity, improved stability, and qualitatively different parameter distributions, while maintaining predictive performance. We obtained highly sparse functional coupling networks with substantially different community structure, more parsimonious encoding models, and decoding models that relied on fewer single-units. CONCLUSIONS Together, these results demonstrate that improved parameter inference, achieved via UoI, reshapes interpretation in diverse neuroscience contexts.
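The UoI algorithm itself is more involved; the sketch below only illustrates the underlying idea of combining ℓ1-penalized fitting with bootstrap stability (keeping the intersection of selected supports), using a hand-rolled ISTA lasso on synthetic coupling data. All parameter values here are illustrative assumptions, not the published procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

# Ground-truth sparse "coupling" weights: 3 of 20 predictors are active.
n, p = 400, 20
beta = np.zeros(p)
beta[[2, 7, 11]] = [1.5, -2.0, 1.0]
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

# Stability-style selection: keep only features whose lasso coefficient
# is nonzero in every bootstrap resample (the intersection step).
support = np.ones(p, dtype=bool)
for _ in range(20):
    idx = rng.integers(0, n, n)
    support &= lasso_ista(X[idx], y[idx], lam=40.0) != 0

print(sorted(np.flatnonzero(support)))  # ideally recovers [2, 7, 11]
```

Intersecting supports across resamples discards features that the penalized fit selects only sporadically, which is the stability intuition behind the sparser, more interpretable models reported in the paper.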
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Department of Physics, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Maximilian E Dougherty
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Bon-Mi Gu
- Department of Neurology, University of California, San Francisco, San Francisco, 94143, CA, USA
- Joshua D Berke
- Department of Neurology, University of California, San Francisco, San Francisco, 94143, CA, USA; Department of Psychiatry; Neuroscience Graduate Program; Kavli Institute for Fundamental Neuroscience; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, 94143, CA, USA
- Kristofer E Bouchard
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA; Computational Resources Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, 94720, CA, USA

15
Azeredo da Silveira R, Rieke F. The Geometry of Information Coding in Correlated Neural Populations. Annu Rev Neurosci 2021; 44:403-424. [PMID: 33863252 DOI: 10.1146/annurev-neuro-120320-082744] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
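The geometrical picture can be made concrete with the textbook linear Fisher information I = f'ᵀ C⁻¹ f' (a standard quantity, not code from the review): correlated noise is harmful only insofar as it lies along the signal direction f', which is the fine-structure point the review emphasizes. Population size and noise strength below are arbitrary choices:

```python
import numpy as np

def linear_fisher(fprime, cov):
    """Linear Fisher information I = f'^T C^{-1} f'."""
    return fprime @ np.linalg.solve(cov, fprime)

n = 50
fprime = np.ones(n)                      # signal direction (tuning derivatives)
base = np.eye(n)                         # private noise, unit variance

u_par = fprime / np.linalg.norm(fprime)  # correlated noise along the signal
u_orth = np.zeros(n)
u_orth[0], u_orth[1] = 1.0, -1.0
u_orth /= np.linalg.norm(u_orth)         # correlated noise orthogonal to the signal

I_ind = linear_fisher(fprime, base)
I_par = linear_fisher(fprime, base + 5.0 * np.outer(u_par, u_par))
I_orth = linear_fisher(fprime, base + 5.0 * np.outer(u_orth, u_orth))

print(I_ind, I_par, I_orth)  # noise along f' is costly; orthogonal noise is free
```

The same total amount of correlated variability either slashes the information (when aligned with f') or leaves it untouched (when orthogonal), so the geometry of the noise, not its magnitude, is what matters.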
Affiliation(s)
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, 75005 Paris, France
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195

16
Jang J, Song M, Paik SB. Retino-Cortical Mapping Ratio Predicts Columnar and Salt-and-Pepper Organization in Mammalian Visual Cortex. Cell Rep 2020; 30:3270-3279.e3. [PMID: 32160536 DOI: 10.1016/j.celrep.2020.02.038] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 12/27/2019] [Accepted: 02/07/2020] [Indexed: 12/22/2022] Open
Abstract
In the mammalian primary visual cortex, neural tuning to stimulus orientation is organized in either columnar or salt-and-pepper patterns across species. For decades, this sharp contrast has spawned fundamental questions about the origin of functional architectures in visual cortex. However, it is unknown whether these patterns reflect disparate developmental mechanisms across mammalian taxa or simply originate from variation of biological parameters under a universal development process. In this work, after the analysis of data from eight mammalian species, we show that cortical organization is predictable by a single factor, the retino-cortical mapping ratio. Groups of species with or without columnar clustering are distinguished by the feedforward sampling ratio, and model simulations with controlled mapping conditions reproduce both types of organization. Prediction from the Nyquist theorem explains this parametric division of the patterns with high accuracy. Our results imply that evolutionary variation of physical parameters may induce development of distinct functional circuitry.
Affiliation(s)
- Jaeson Jang
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Min Song
- Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Se-Bum Paik
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea.

17
Talyansky S, Brinkman BAW. Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex. PLoS Comput Biol 2021; 17:e1008620. [PMID: 33497380 PMCID: PMC7864437 DOI: 10.1371/journal.pcbi.1008620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Revised: 02/05/2021] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
Affiliation(s)
- Seth Talyansky
- Catlin Gabel School, Portland, Oregon, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Braden A. W. Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America

18
Song M, Jang J, Kim G, Paik SB. Projection of Orthogonal Tiling from the Retina to the Visual Cortex. Cell Rep 2021; 34:108581. [PMID: 33406438 DOI: 10.1016/j.celrep.2020.108581] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 10/22/2020] [Accepted: 12/09/2020] [Indexed: 10/22/2022] Open
Abstract
In higher mammals, the primary visual cortex (V1) is organized into diverse tuning maps of visual features. The topography of these maps intersects orthogonally, but it remains unclear how such a systematic relationship can develop. Here, we show that the orthogonal organization already exists in retinal ganglion cell (RGC) mosaics, providing a blueprint of the organization in V1. From analysis of the RGC mosaics data in monkeys and cats, we find that the ON-OFF RGC distance and ON-OFF angle of neighboring RGCs are organized into a topographic tiling across mosaics, analogous to the orthogonal intersection of cortical tuning maps. Our model simulation shows that the ON-OFF distance and angle in RGC mosaics correspondingly initiate ocular dominance/spatial frequency tuning and orientation tuning, resulting in the orthogonal intersection of cortical tuning maps. These findings suggest that the regularly structured ON-OFF patterns mirrored from the retina initiate the uniform representation of combinations of map features over the visual space.
Affiliation(s)
- Min Song
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Jaeson Jang
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Gwangsu Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Se-Bum Paik
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea.

19
Keeley SL, Zoltowski DM, Aoi MC, Pillow JW. Modeling statistical dependencies in multi-region spike train data. Curr Opin Neurobiol 2020; 65:194-202. [PMID: 33334641 PMCID: PMC7769979 DOI: 10.1016/j.conb.2020.11.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 11/10/2020] [Accepted: 11/10/2020] [Indexed: 11/17/2022]
Abstract
Neural computations underlying cognition and behavior rely on the coordination of neural activity across multiple brain areas. Understanding how brain areas interact to process information or generate behavior is thus a central question in neuroscience. Here we provide an overview of statistical approaches for characterizing statistical dependencies in multi-region spike train recordings. We focus on two classes of models in particular: regression-based models and shared latent variable models. Regression-based models describe interactions in terms of a directed transformation of information from one region to another. Shared latent variable models, on the other hand, seek to describe interactions in terms of sources that capture common fluctuations in spiking activity across regions. We discuss the advantages and limitations of each of these approaches and future directions for the field. We intend this review to be an introduction to the statistical methods in multi-region models for computational neuroscientists and experimentalists alike.
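As a concrete instance of the regression-based class, here is a minimal reduced-rank regression between two simulated "regions", using the classical OLS-plus-projection solution; the dimensions, rank, and noise level are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained linear map from region X to region Y:
    OLS followed by projection onto the top principal components
    of the fitted values (the classical RRR solution)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Yhat = X @ B_ols
    _, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]            # projector onto top-r output directions
    return B_ols @ P

# Two "regions" whose interaction is truly rank-2: Y = X W1 W2 + noise.
T, nx, ny, r = 1000, 30, 25, 2
W1 = rng.standard_normal((nx, r))
W2 = rng.standard_normal((r, ny))
X = rng.standard_normal((T, nx))
Y = X @ W1 @ W2 + 0.1 * rng.standard_normal((T, ny))

B2 = reduced_rank_regression(X, Y, rank=2)
err = np.linalg.norm(B2 - W1 @ W2) / np.linalg.norm(W1 @ W2)
print(f"relative error of rank-2 estimate: {err:.3f}")
```

The rank constraint is what gives these models their interpretation: the communication between regions is summarized by a small number of directed "channels" rather than a full dense map.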
Affiliation(s)
- Stephen L Keeley
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- David M Zoltowski
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Mikio C Aoi
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.

20
Triplett MA, Pujic Z, Sun B, Avitan L, Goodhill GJ. Model-based decoupling of evoked and spontaneous neural activity in calcium imaging data. PLoS Comput Biol 2020; 16:e1008330. [PMID: 33253161 PMCID: PMC7728401 DOI: 10.1371/journal.pcbi.1008330] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 12/10/2020] [Accepted: 09/10/2020] [Indexed: 11/19/2022] Open
Abstract
The pattern of neural activity evoked by a stimulus can be substantially affected by ongoing spontaneous activity. Separating these two types of activity is particularly important for calcium imaging data given the slow temporal dynamics of calcium indicators. Here we present a statistical model that decouples stimulus-driven activity from low dimensional spontaneous activity in this case. The model identifies hidden factors giving rise to spontaneous activity while jointly estimating stimulus tuning properties that account for the confounding effects that these factors introduce. By applying our model to data from zebrafish optic tectum and mouse visual cortex, we obtain quantitative measurements of the extent that neurons in each case are driven by evoked activity, spontaneous activity, and their interaction. By not averaging away potentially important information encoded in spontaneous activity, this broadly applicable model brings new insight into population-level neural activity within single trials.
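The published model is richer (calcium dynamics, joint estimation of tuning and factors); the toy sketch below illustrates only the core decoupling idea, condition means for evoked tuning plus a leading principal component of the residuals for shared spontaneous activity. All simulation parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated population: each trial's response = stimulus tuning
# + a shared one-dimensional "spontaneous" factor + private noise.
n_neurons, n_stim, n_rep = 40, 8, 50
tuning = rng.standard_normal((n_stim, n_neurons))
loading = rng.standard_normal(n_neurons)
stim = np.repeat(np.arange(n_stim), n_rep)
latent = rng.standard_normal(stim.size)            # one latent value per trial
R = (tuning[stim] + np.outer(latent, loading)
     + 0.3 * rng.standard_normal((stim.size, n_neurons)))

# Step 1: estimate evoked tuning as condition means (these absorb only the
# average latent drive, which shrinks with the number of repeats).
cond_means = np.stack([R[stim == s].mean(0) for s in range(n_stim)])
resid = R - cond_means[stim]

# Step 2: the leading principal component of the residuals recovers the
# single-trial spontaneous factor that plain trial-averaging discards.
_, _, Vt = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
latent_hat = resid @ Vt[0]

corr = abs(np.corrcoef(latent_hat, latent)[0, 1])
print(f"|corr(recovered latent, true latent)| = {corr:.3f}")
```

Recovering the spontaneous factor trial by trial, rather than averaging it away, is what allows the single-trial analyses the abstract describes.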
Affiliation(s)
- Marcus A. Triplett
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- School of Mathematics and Physics, The University of Queensland, St Lucia, Australia
- Zac Pujic
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Biao Sun
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Lilach Avitan
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Geoffrey J. Goodhill
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- School of Mathematics and Physics, The University of Queensland, St Lucia, Australia

21
Das A, Fiete IR. Systematic errors in connectivity inferred from activity in strongly recurrent networks. Nat Neurosci 2020; 23:1286-1296. [DOI: 10.1038/s41593-020-0699-2] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 07/28/2020] [Indexed: 11/09/2022]
22
Ahn J, Phan HL, Cha S, Koo KI, Yoo Y, Goo YS. Synchrony of Spontaneous Burst Firing between Retinal Ganglion Cells Across Species. Exp Neurobiol 2020; 29:285-299. [PMID: 32921641 PMCID: PMC7492847 DOI: 10.5607/en20025] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 08/27/2020] [Accepted: 08/31/2020] [Indexed: 01/16/2023] Open
Abstract
Neurons communicate with other neurons in response to environmental changes, and their goal is to transmit information to their targets reliably. A burst, which consists of multiple spikes within a short time interval, plays an essential role in enhancing the reliability of information transmission through synapses. In the visual system, retinal ganglion cells (RGCs), the output neurons of the retina, show bursting activity and transmit retinal information to the lateral geniculate nucleus of the thalamus. In this study, to extend the analysis to the population level, bursts from multiple RGCs were recorded simultaneously using a multi-channel recording system. As the first step in network analysis, we focused on the pairwise burst correlation between two RGCs. Furthermore, to assess whether population bursting is preserved across species, we compared the synchronized bursting of RGCs between the marmoset monkey (Callithrix jacchus), a New World monkey, and the mouse (C57BL/6J strain). First, monkey RGCs showed a larger number of spikes within a burst, while the inter-spike interval, burst duration, and inter-burst interval were smaller compared with mouse RGCs. Monkey RGCs showed strong burst synchronization between RGCs, whereas mouse RGCs showed no correlated burst firing; monkey RGC pairs showed significantly higher burst synchrony and mutual information than mouse RGC pairs did. Overall, this study shows that the two species differ in RGC bursting activity and burst synchronization, suggesting distinctive retinal processing in each species.
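Burst definitions vary across studies; the sketch below uses a generic ISI-threshold burst detector and a simple onset-coincidence measure of pairwise burst synchrony. The thresholds and toy spike trains are illustrative, not the paper's parameters:

```python
import numpy as np

def detect_bursts(spike_times, max_isi=0.01, min_spikes=3):
    """Group spikes into bursts: runs of >= min_spikes spikes whose
    inter-spike intervals are at most max_isi (seconds).
    Returns a list of (burst_start, burst_end) times."""
    spike_times = np.asarray(spike_times)
    bursts, start = [], 0
    for i in range(1, len(spike_times) + 1):
        if i == len(spike_times) or spike_times[i] - spike_times[i - 1] > max_isi:
            if i - start >= min_spikes:
                bursts.append((spike_times[start], spike_times[i - 1]))
            start = i
    return bursts

def burst_synchrony(bursts_a, bursts_b, window=0.05):
    """Fraction of bursts in train A whose onset falls within `window`
    seconds of some burst onset in train B."""
    if not bursts_a:
        return 0.0
    onsets_b = np.array([b[0] for b in bursts_b])
    hits = sum(np.any(np.abs(onsets_b - a[0]) <= window) for a in bursts_a)
    return hits / len(bursts_a)

# Two toy trains that burst together at 0.1 s and 0.5 s; train B misses the last event.
a = [0.100, 0.103, 0.106, 0.300, 0.500, 0.503, 0.507, 0.900, 0.903, 0.906]
b = [0.102, 0.105, 0.109, 0.500, 0.504, 0.508]
print(burst_synchrony(detect_bursts(a), detect_bursts(b)))
```

Note that the isolated spike at 0.300 s in train A is not counted as a burst, so single spikes do not dilute the synchrony measure.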
Affiliation(s)
- Jungryul Ahn
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
- Huu Lam Phan
- Department of Biomedical Engineering, University of Ulsan, Ulsan 44610, Korea
- Seongkwang Cha
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
- Kyo-In Koo
- Department of Biomedical Engineering, University of Ulsan, Ulsan 44610, Korea
- Yongseok Yoo
- Department of Electronics Engineering, Incheon National University, Incheon 22012, Korea
- Yong Sook Goo
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea

23
Sachdeva PS, Livezey JA, DeWeese MR. Heterogeneous Synaptic Weighting Improves Neural Coding in the Presence of Common Noise. Neural Comput 2020; 32:1239-1276. [PMID: 32433901 DOI: 10.1162/neco_a_01287] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Simultaneous recordings from the cortex have revealed that neural activity is highly variable and that some variability is shared across neurons in a population. Further experimental work has demonstrated that the shared component of a neuronal population's variability is typically comparable to or larger than its private component. Meanwhile, an abundance of theoretical work has assessed the impact that shared variability has on a population code. For example, shared input noise is understood to have a detrimental impact on a neural population's coding fidelity. However, other contributions to variability, such as common noise, can also play a role in shaping correlated variability. We present a network of linear-nonlinear neurons in which we introduce a common noise input to model, for instance, variability resulting from upstream action potentials that are irrelevant to the task at hand. We show that by applying a heterogeneous set of synaptic weights to the neural inputs carrying the common noise, the network can improve its coding ability as measured by both Fisher information and Shannon mutual information, even in cases where this results in amplification of the common noise. With a broad and heterogeneous distribution of synaptic weights, a population of neurons can remove the harmful effects imposed by afferents that are uninformative about a stimulus. We demonstrate that some nonlinear networks benefit from weight diversification up to a certain population size, above which the drawbacks from amplified noise dominate over the benefits of diversification. We further characterize these benefits in terms of the relative strength of shared and private variability sources. Finally, we study the asymptotic behavior of the mutual information and Fisher information analytically in our various networks as a function of population size, and find some surprising qualitative changes in the asymptotic behavior under seemingly minor changes in the synaptic weight distributions.
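The benefit of heterogeneous weighting can be seen in a few lines of linear Fisher information for a population receiving a shared common-noise input. This is a simplified linear stand-in for the paper's linear-nonlinear network, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

def fisher_info(w, b, sigma_c=1.0, sigma_p=0.5):
    """Linear Fisher information about s for responses
    y = w*s + b*xi_c + private noise, with xi_c ~ N(0, sigma_c^2)."""
    n = len(w)
    cov = sigma_c**2 * np.outer(b, b) + sigma_p**2 * np.eye(n)
    return w @ np.linalg.solve(cov, w)

n = 100
w = np.ones(n)                       # stimulus weights
b_hom = np.ones(n)                   # common noise weighted uniformly: lands on w
b_het = rng.standard_normal(n)       # heterogeneous common-noise weights

I_hom = fisher_info(w, b_hom)
I_het = fisher_info(w, b_het)
print(f"I_hom={I_hom:.1f}, I_het={I_het:.1f}")
```

With uniform weights the common noise is injected squarely along the signal direction and information saturates; with diverse weights the noise is pushed almost orthogonal to the signal and can be filtered out by the downstream readout, even though its total power is unchanged.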
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience and Department of Physics, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.
- Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.
- Michael R DeWeese
- Redwood Center for Theoretical Neuroscience, Department of Physics, and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, U.S.A.

24
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 09/18/2019] [Indexed: 01/11/2023] Open
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.

It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
25
Gardella C, Marre O, Mora T. Modeling the Correlated Activity of Neural Populations: A Review. Neural Comput 2018; 31:233-269. [PMID: 30576613 DOI: 10.1162/neco_a_01154] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons, as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and including or not latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
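For very small populations, the pairwise maximum-entropy (Ising-type) models covered by such reviews can be fit exactly by enumerating all spike words and matching moments by gradient ascent. A minimal sketch, where the learning rate and iteration count are ad hoc choices:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

N = 3
states = np.array(list(product([0, 1], repeat=N)), dtype=float)  # all 2^N spike words

def model_moments(h, J):
    """Exact means and pairwise moments of P(x) ∝ exp(h·x + 0.5 x^T J x)
    for symmetric J with zero diagonal."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# Target statistics from a synthetic "data" distribution over spike words.
p_data = rng.dirichlet(np.full(2 ** N, 5.0))
mu_data = p_data @ states
C_data = states.T @ (states * p_data[:, None])

# Gradient ascent on the log-likelihood: nudge parameters along the
# difference between data moments and model moments.
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(50_000):
    mu, C = model_moments(h, J)
    h += 0.5 * (mu_data - mu)
    dJ = 0.5 * (C_data - C)
    np.fill_diagonal(dJ, 0.0)       # x_i^2 = x_i, so diagonals are set by the means
    J += dJ

mu, C = model_moments(h, J)
print("max mean mismatch:", np.abs(mu - mu_data).max())
```

The fitted model matches means and pairwise correlations while making no further assumptions; the real computational challenge the review discusses is doing this when 2^N states can no longer be enumerated.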
Affiliation(s)
- Christophe Gardella
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France, and Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Olivier Marre
- Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Thierry Mora
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France

26
Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. [PMID: 30346943 PMCID: PMC6219819 DOI: 10.1371/journal.pcbi.1006490] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Revised: 11/06/2018] [Accepted: 09/05/2018] [Indexed: 11/18/2022] Open
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions. No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network’s true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed. For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured “effective” synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand and potentially alter circuit function—or dysfunction.
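For a linear rate network the path-sum decomposition described here can be checked directly: eliminating hidden units at steady state yields J_eff = J_oo + J_oh (I - J_hh)^(-1) J_ho, which equals the sum over all directed paths through hidden units. This is a linear-network sketch with illustrative sizes and coupling scale, not the paper's general derivation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear rate network x = J x + u, partitioned into observed (o) and hidden (h).
n_obs, n_hid = 4, 6
n = n_obs + n_hid
J = rng.standard_normal((n, n)) * (0.5 / np.sqrt(n))  # scaled to keep dynamics stable
o, h = slice(0, n_obs), slice(n_obs, n)

# Eliminating the hidden units at steady state gives the effective coupling:
#   J_eff = J_oo + J_oh (I - J_hh)^{-1} J_ho
J_eff = J[o, o] + J[o, h] @ np.linalg.solve(np.eye(n_hid) - J[h, h], J[h, o])

# The same thing as a sum over all directed paths through hidden units:
#   J_oh (J_hh)^k J_ho  for k = 0, 1, 2, ...
J_paths = J[o, o].copy()
M = np.eye(n_hid)
for _ in range(50):
    J_paths += J[o, h] @ M @ J[h, o]
    M = M @ J[h, h]

print(np.allclose(J_eff, J_paths))  # the corrections are exactly the path sum
```

The geometric series converges because the hidden-hidden subnetwork has spectral radius below one; the same structure is what lets the paper say when hidden corrections are large or negligible.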
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America

27
Abstract
Generalized linear models (GLMs) have a wide range of applications in systems neuroscience describing the encoding of stimulus and behavioral variables, as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included. In three case studies, we estimate tuning functions for common experiments in motor cortex, hippocampus, and visual cortex. We find that including traditionally omitted variables changes estimates of the original parameters and that modulation originally attributed to one variable is reduced after new variables are included. In GLMs describing single-neuron dynamics, we then demonstrate how postspike history effects can also be biased by omitted variables. Here we find that omitted variable bias can lead to mistaken conclusions about the stability of single-neuron firing. Omitted variable bias can appear in any model with confounders, where omitted variables modulate neural activity and the effects of the omitted variables covary with the included effects. Understanding how and to what extent omitted variable bias affects parameter estimates is likely to be important for interpreting the parameters and predictions of many neural encoding models.
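The mechanics of omitted variable bias are easiest to see in the Gaussian (linear regression) special case of a GLM, where the bias has the closed form beta2 * Cov(x1, x2) / Var(x1). A minimal synthetic sketch, not one of the paper's case studies:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two correlated covariates, both of which modulate the response.
T = 50_000
x1 = rng.standard_normal(T)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(T)   # confounder, corr(x1, x2) = 0.8
beta1, beta2 = 1.0, 2.0
y = beta1 * x1 + beta2 * x2 + rng.standard_normal(T)

# The full model recovers beta1; the omitted-variable fit absorbs
# beta2 * Cov(x1, x2) / Var(x1) = 2.0 * 0.8 = 1.6 into x1's coefficient.
X_full = np.column_stack([x1, x2])
b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]
b_omit = (x1 @ y) / (x1 @ x1)
print(f"full-model beta1 = {b_full[0]:.2f}, omitted-variable beta1 = {b_omit:.2f}")
```

Both fits predict y about equally well on this data; only the parameter interpretation differs, which is exactly why the bias is easy to miss in encoding-model studies.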
Affiliation(s)
- Ian H Stevenson
- Department of Psychological Sciences, Department of Biomedical Engineering, and CT Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, U.S.A.
28
Lawlor PN, Perich MG, Miller LE, Kording KP. Linear-nonlinear-time-warp-Poisson models of neural activity. J Comput Neurosci 2018; 45:173-191. [PMID: 30294750] [DOI: 10.1007/s10827-018-0696-6]
Abstract
Prominent models of spike trains assume only one source of variability, stochastic (Poisson) spiking, when stimuli and behavior are fixed. However, spike trains may also reflect variability due to internal processes such as planning. For example, we can plan a movement at one point in time and execute it at some arbitrary later time. Neurons involved in planning may thus share an underlying time course that is not precisely locked to the actual movement. Here we combine the standard Linear-Nonlinear-Poisson (LNP) model with Dynamic Time Warping (DTW) to account for shared temporal variability. When applied to recordings from macaque premotor cortex, we find that time warping considerably improves predictions of neural activity. We suggest that such temporal variability is a widespread phenomenon in the brain that should be modeled.
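To make the time-warping idea concrete, here is a generic dynamic-time-warping distance (the standard textbook dynamic program, not the authors' full LNP-DTW fitting procedure) applied to two identical firing-rate bumps executed at different times. Warping absorbs most of the misalignment cost that a rigid, time-locked comparison incurs.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 100)
template = np.exp(-((t - 0.5) ** 2) / 0.01)   # firing-rate bump
shifted = np.exp(-((t - 0.6) ** 2) / 0.01)    # same bump, executed later

eucl = float(np.sum(np.abs(template - shifted)))   # rigid comparison
dtw = float(dtw_distance(template, shifted))       # warped comparison
```

Because the diagonal path is one admissible warping, the DTW distance is never larger than the rigid point-by-point distance, and for a pure temporal shift it is far smaller.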
Affiliation(s)
- Patrick N Lawlor
- Division of Child Neurology, Children's Hospital of Philadelphia, Philadelphia, PA, USA.
- Lee E Miller
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Konrad P Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
29
A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data. Entropy 2018; 20:e20070489. [PMID: 33265579] [PMCID: PMC7513015] [DOI: 10.3390/e20070489]
Abstract
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting “Reliable Moment” model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
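The moment-selection step can be illustrated with a toy sketch. The fixed event-count threshold below is a simplified stand-in for the paper's confidence-level criterion, and the data are synthetic:

```python
import numpy as np
from itertools import combinations

def reliable_moments(spikes, min_count=10):
    """Keep pairwise and triplet moments whose joint-spike events occur
    at least `min_count` times (a simplified reliability criterion)."""
    n_samples, n_cells = spikes.shape
    selected = {}
    for order in (2, 3):
        for idx in combinations(range(n_cells), order):
            # Count samples in which all cells in the subset spiked together.
            count = int(np.all(spikes[:, idx], axis=1).sum())
            if count >= min_count:
                selected[idx] = count / n_samples   # reliable moment estimate
    return selected

rng = np.random.default_rng(1)
spikes = (rng.random((5000, 8)) < 0.08).astype(int)   # sparse binary activity
moments = reliable_moments(spikes, min_count=10)
```

With sparse firing, pairwise joint events are far more frequent than triplet events, so the selection naturally admits most pairwise moments while rejecting most triplet moments, which mirrors the sampling-noise argument in the abstract.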
30
Magrans de Abril I, Yoshimoto J, Doya K. Connectivity inference from neural recording data: Challenges, mathematical bases and research directions. Neural Netw 2018; 102:120-137. [PMID: 29571122] [DOI: 10.1016/j.neunet.2018.02.016]
Abstract
This article presents a review of computational methods for connectivity inference from neural activity data derived from multi-electrode recordings or fluorescence imaging. We first identify biophysical and technical challenges in connectivity inference along the data processing pipeline. We then review connectivity inference methods based on two major mathematical foundations, namely, descriptive model-free approaches and generative model-based approaches. We investigate representative studies in both categories and clarify which challenges have been addressed by which method. We further identify critical open issues and possible research directions.
Affiliation(s)
- Kenji Doya
- Okinawa Institute of Science and Technology, Graduate University, Japan
31
Pernice V, da Silveira RA. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits. PLoS Comput Biol 2018; 14:e1005979. [PMID: 29408930] [PMCID: PMC5833435] [DOI: 10.1371/journal.pcbi.1005979]
Abstract
Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

The response of neurons to a stimulus is variable across trials. A natural solution for reliable coding in the face of noise is averaging across a neural population. The nature of this averaging depends on the structure of noise correlations in the neural population. In turn, the correlation structure depends on the way noise and correlations are generated in neural circuits. It is in general difficult to identify the origin of correlations from the observed population activity alone.
In this article, we explore different theoretical scenarios of the way in which correlations can be generated, and we relate these to the architecture of feed-forward and recurrent neural circuits. Analyzing population recordings of the activity in mouse auditory cortex in response to sound stimuli, we find that population statistics are consistent with those generated in a recurrent network model. Using this model, we can then quantify the effects of network properties on average population responses, noise correlations, and the representation of sensory information.
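One textbook instance of a relation between effective connectivity and covariances (a generic linear-response calculation under an assumed private-noise model, not the paper's specific derivations) can be written down directly and checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
W = 0.2 * rng.normal(size=(n, n)) / np.sqrt(n)   # random effective connectivity
np.fill_diagonal(W, 0.0)

# Linear-response relation: x = (I - W)^-1 xi for private noise xi,
# so the population covariance is C = (I - W)^-1 Sigma (I - W)^-T.
Sigma = np.diag(rng.uniform(0.5, 1.5, n))        # private noise variances
A = np.linalg.inv(np.eye(n) - W)
C = A @ Sigma @ A.T

# Cross-check the analytic covariance against a direct Monte Carlo draw.
xi = rng.multivariate_normal(np.zeros(n), Sigma, size=200000)
x = xi @ A.T
C_mc = np.cov(x, rowvar=False)
```

Even with purely private noise, recurrence mixes fluctuations across neurons, producing off-diagonal covariance structure; this is the kind of analytic link between architecture and correlations the abstract refers to.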
Affiliation(s)
- Volker Pernice
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
32
Zhu J, Liu X. Measuring spike timing distance in the Hindmarsh-Rose neurons. Cogn Neurodyn 2017; 12:225-234. [PMID: 29564030] [DOI: 10.1007/s11571-017-9466-9]
Abstract
In the present paper, a simple spike timing distance is defined that can be used to measure the degree of synchronization with the information encoded only in the precise timing of the spike trains. Via calculating the spike timing distance defined in this paper, the spike train similarity of uncoupled Hindmarsh-Rose neurons in bursting or spiking states with different initial conditions is investigated, and the results are compared with other spike train distance measures. The spike timing distance measure is then applied to study the synchronization of coupled or common noise-stimulated neurons. Counterintuitively, the addition of weak coupling or common noise does not enhance the degree of synchronization, although beyond critical values both can induce complete synchronization. More interestingly, common noise plays opposite roles for weak and sufficiently strong couplings. Finally, it should be noted that the measure defined in this paper can be extended to large neuronal ensembles and to lag synchronization.
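A generic timing-based distance of this kind can be sketched in a few lines. The symmetrized nearest-spike distance below is an illustration of the general idea, not the paper's exact definition:

```python
import numpy as np

def spike_timing_distance(a, b):
    """Symmetrized mean nearest-neighbor distance between two spike
    trains given as arrays of spike times."""
    def one_way(x, y):
        # For each spike in x, distance to the closest spike in y.
        return np.mean([np.min(np.abs(y - t)) for t in x])
    return 0.5 * (one_way(a, b) + one_way(b, a))

rng = np.random.default_rng(2)
train = np.sort(rng.uniform(0, 10, 50))
jitter_small = train + rng.normal(0, 0.01, 50)   # nearly synchronous copy
jitter_big = train + rng.normal(0, 0.2, 50)      # heavily jittered copy

d0 = spike_timing_distance(train, train)
d1 = spike_timing_distance(train, jitter_small)
d2 = spike_timing_distance(train, jitter_big)
```

The distance vanishes for identical trains and grows with timing jitter, which is the behavior any spike-timing synchronization measure must have.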
Affiliation(s)
- Jinjie Zhu
- State Key Laboratory of Mechanics and Control of Mechanical Structures, College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, 29 YuDao Street, Nanjing 210016, Jiangsu Province, People's Republic of China
- Xianbin Liu
- State Key Laboratory of Mechanics and Control of Mechanical Structures, College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, 29 YuDao Street, Nanjing 210016, Jiangsu Province, People's Republic of China
33
Loback A, Prentice J, Ioffe M, Berry II M. Noise-Robust Modes of the Retinal Population Code Have the Geometry of "Ridges" and Correspond to Neuronal Communities. Neural Comput 2017; 29:3119-3180. [PMID: 28957022] [DOI: 10.1162/neco_a_01011]
Abstract
An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population codeword. Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple data sets of the responses of approximately 150 retinal ganglion cells and show that local probability peaks are absent under broad, nonrepeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present and can, moreover, be linked across different spike count levels in the probability landscape to form a ridge. We found that these ridges comprise combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
Affiliation(s)
- Adrianna Loback
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
- Jason Prentice
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
- Mark Ioffe
- Physics Department, Princeton University, Princeton, NJ 08544, U.S.A.
- Michael Berry II
- Princeton Neuroscience Institute and Molecular Biology Department, Princeton University, Princeton, NJ 08544, U.S.A.
34
Ghanbari A, Malyshev A, Volgushev M, Stevenson IH. Estimating short-term synaptic plasticity from pre- and postsynaptic spiking. PLoS Comput Biol 2017; 13:e1005738. [PMID: 28873406] [PMCID: PMC5600391] [DOI: 10.1371/journal.pcbi.1005738]
Abstract
Short-term synaptic plasticity (STP) critically affects the processing of information in neuronal circuits by reversibly changing the effective strength of connections between neurons on time scales from milliseconds to a few seconds. STP is traditionally studied using intracellular recordings of postsynaptic potentials or currents evoked by presynaptic spikes. However, STP also affects the statistics of postsynaptic spikes. Here we present two model-based approaches for estimating synaptic weights and short-term plasticity from pre- and postsynaptic spike observations alone. We extend a generalized linear model (GLM) that predicts postsynaptic spiking as a function of the observed pre- and postsynaptic spikes and allow the connection strength (coupling term in the GLM) to vary as a function of time based on the history of presynaptic spikes. Our first model assumes that STP follows a Tsodyks-Markram description of vesicle depletion and recovery. In a second model, we introduce a functional description of STP where we estimate the coupling term as a biophysically unrestrained function of the presynaptic inter-spike intervals. To validate the models, we test the accuracy of STP estimation using the spiking of pre- and postsynaptic neurons with known synaptic dynamics. We first test our models using the responses of layer 2/3 pyramidal neurons to simulated presynaptic input with different types of STP, and then use simulated spike trains to examine the effects of spike-frequency adaptation, stochastic vesicle release, spike sorting errors, and common input. We find that, using only spike observations, both model-based methods can accurately reconstruct the time-varying synaptic weights of presynaptic inputs for different types of STP. Our models also capture the differences in postsynaptic spike responses to presynaptic spikes following short vs long inter-spike intervals, similar to results reported for thalamocortical connections. 
These models may thus be useful tools for characterizing short-term plasticity from multi-electrode spike recordings in vivo.
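The Tsodyks-Markram depletion/recovery dynamics assumed by the first model can be sketched as follows. This is one common discretization of the model, with illustrative parameter values rather than the paper's fits:

```python
import numpy as np

def tm_amplitudes(spike_times, U=0.5, tau_rec=0.5, tau_facil=0.0):
    """Synaptic efficacy at each presynaptic spike under a
    Tsodyks-Markram-style depletion/recovery model."""
    R, u = 1.0, U          # available resources and utilization fraction
    last = None
    amps = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)     # resource recovery
            if tau_facil > 0:
                u = U + (u - U) * np.exp(-dt / tau_facil)   # facilitation decay
        a = u * R              # efficacy of this spike
        R -= a                 # depletion by vesicle release
        if tau_facil > 0:
            u += U * (1 - u)   # facilitation increment after the spike
        amps.append(a)
        last = t
    return np.array(amps)

# Depressing synapse: efficacies shrink along a regular 10-spike train.
amps_dep = tm_amplitudes(np.arange(0, 1.0, 0.1))
# Facilitating synapse: low baseline utilization, efficacies initially grow.
amps_fac = tm_amplitudes(np.arange(0, 1.0, 0.1), U=0.1, tau_facil=0.5)
```

Run against a regular spike train, the depressing variant produces monotonically decreasing efficacies toward a steady state, while the facilitating variant grows over the first spikes, matching the short-vs-long inter-spike-interval effects described in the abstract.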
Affiliation(s)
- Abed Ghanbari
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Aleksey Malyshev
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, Moscow, Russia
- Maxim Volgushev
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
35
Eleftheriou CG, Cehajic-Kapetanovic J, Martial FP, Milosavljevic N, Bedford RA, Lucas RJ. Meclofenamic acid improves the signal to noise ratio for visual responses produced by ectopic expression of human rod opsin. Mol Vis 2017; 23:334-345. [PMID: 28659709] [PMCID: PMC5479694]
Abstract
PURPOSE: Retinal dystrophy through outer photoreceptor cell death affects 1 in 2,500 people worldwide, with severe impairment of vision in advanced stages of the disease. Optogenetic strategies to restore visual function to animal models of retinal degeneration by introducing photopigments to inner-retinal neurons spared by the degeneration have been explored, with variable degrees of success. It has recently been shown that the non-steroidal anti-inflammatory and non-selective gap-junction blocker meclofenamic acid (MFA) can enhance the visual responses produced by an optogenetic actuator (channelrhodopsin) expressed in retinal ganglion cells (RGCs) in the degenerate retina. Here, we set out to determine whether MFA could also enhance photoreception by another optogenetic strategy in which ectopic human rod opsin is expressed in ON bipolar cells.
METHODS: We used in vitro multielectrode array (MEA) recordings to characterize the light responses of RGCs in the rd1 mouse model of advanced retinal degeneration following intravitreal injection of an adeno-associated virus (AAV2) driving the expression of human rod opsin under a minimal grm6 promoter active in ON bipolar cells.
RESULTS: We found treated retinas were light responsive over five decades of irradiance (from 10^11 to 10^15 photons/cm^2/s), with individual RGCs covering up to four decades. Application of MFA reduced the spontaneous firing rate of the visually responsive neurons under light- and dark-adapted conditions. The change in firing rate produced by 2 s light pulses was increased across all intensities following MFA treatment, and there was a concomitant increase in the signal to noise ratio for the visual response. Restored light responses were abolished by agents inhibiting glutamatergic or gamma-aminobutyric acid (GABA)ergic signaling in the MFA-treated preparation.
CONCLUSIONS: These results confirm the potential of MFA to inhibit spontaneous activity and enhance the signal to noise ratio of visual responses in optogenetic therapies to restore sight.
36
Rahnama Rad K, Machado TA, Paninski L. Robust and scalable Bayesian analysis of spatial neural tuning function data. Ann Appl Stat 2017. [DOI: 10.1214/16-aoas996]
37
Whiteway MR, Butts DA. Revealing unobserved factors underlying cortical activity with a rectified latent variable model applied to neural population recordings. J Neurophysiol 2017; 117:919-936. [PMID: 27927786] [PMCID: PMC5338625] [DOI: 10.1152/jn.00698.2016]
Abstract
The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end.

NEW & NOTEWORTHY: The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control.
Affiliation(s)
- Matthew R Whiteway
- Applied Mathematics and Statistics and Scientific Computation Program, University of Maryland, College Park, Maryland
- Daniel A Butts
- Applied Mathematics and Statistics and Scientific Computation Program, University of Maryland, College Park, Maryland
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland
38
Error-Robust Modes of the Retinal Population Code. PLoS Comput Biol 2016; 12:e1005148. [PMID: 27855154] [PMCID: PMC5113862] [DOI: 10.1371/journal.pcbi.1005148]
Abstract
Across the nervous system, certain population spiking patterns are observed far more frequently than others. A hypothesis about this structure is that these collective activity patterns function as population codewords (collective modes) carrying information distinct from that of any single cell. We investigate this phenomenon in recordings of ∼150 retinal ganglion cells, the retina’s output. We develop a novel statistical model that decomposes the population response into modes; it predicts the distribution of spiking activity in the ganglion cell population with high accuracy. We found that the modes represent localized features of the visual stimulus that are distinct from the features represented by single neurons. Modes form clusters of activity states that are readily discriminated from one another. When we repeated the same visual stimulus, we found that the same mode was robustly elicited. These results suggest that retinal ganglion cells’ collective signaling is endowed with a form of error-correcting code, a principle that may hold in brain areas beyond the retina.

Neurons in most parts of the nervous system represent and process information in a collective fashion, yet the nature of this collective code is poorly understood. An important constraint placed on any such collective processing comes from the fact that individual neurons’ signaling is prone to corruption by noise. The information theory and engineering literatures have studied error-correcting codes that allow individual noise-prone coding units to “check” each other, forming an overall representation that is robust to errors. In this paper, we have analyzed the population code of one of the best-studied neural systems, the retina, and found that it is structured in a manner analogous to error-correcting schemes. Indeed, we found that the complex activity patterns over ∼150 retinal ganglion cells, the output neurons of the retina, could be mapped onto collective code words, and that these code words represented precise visual information while suppressing noise. In order to analyze this coding scheme, we introduced a novel quantitative model of the retinal output that predicted neural activity patterns more accurately than existing state-of-the-art approaches.
39
Consistent estimation of complete neuronal connectivity in large neuronal populations using sparse "shotgun" neuronal activity sampling. J Comput Neurosci 2016; 41:157-84. [PMID: 27515518] [DOI: 10.1007/s10827-016-0611-y]
Abstract
We investigate the properties of the recently proposed "shotgun" sampling approach for the common-inputs problem in the functional estimation of neuronal connectivity. We study the asymptotic correctness, the speed of convergence, and the data size requirements of such an approach. We show that the shotgun approach can be expected to allow the inference of the complete connectivity matrix in large neuronal populations under rather general conditions. However, we find that the posterior error of the shotgun connectivity estimator grows quickly with the size of unobserved neuronal populations, the square of average connectivity strength, and the square of observation sparseness. This implies that shotgun connectivity estimation will require significantly larger amounts of neuronal activity data whenever the number of neurons in observed neuronal populations remains small. We present a numerical approach for solving the shotgun estimation problem in general settings and use it to demonstrate shotgun connectivity inference in examples of simulated synfire and weakly coupled cortical neuronal networks.
40
Abstract
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
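As a concrete instance of the feature-vector models reviewed here, the spike-triggered average recovers a neuron's linear filter from white-noise stimulation under a linear-nonlinear-Poisson model. This is a standard textbook construction with made-up filter and rate parameters, not an analysis from the review's datasets:

```python
import numpy as np

rng = np.random.default_rng(4)
T, lags = 50000, 15
stim = rng.normal(size=T)                               # white-noise stimulus
true_filter = np.exp(-np.arange(lags) / 4.0) * np.sin(np.arange(lags) / 2.0)

# Lagged design matrix: X[t, k] = stim[t - k] (causal history).
X = np.stack([np.roll(stim, k) for k in range(lags)], axis=1)
X[:lags] = 0.0                                          # discard wrap-around rows

# LN-Poisson generative model with an exponential nonlinearity.
drive = X @ true_filter
rate = np.clip(0.05 * np.exp(drive), 0, 5)              # clip for numerical safety
spikes = rng.poisson(rate)

# Spike-triggered average: mean stimulus history preceding a spike.
sta = (spikes @ X) / spikes.sum()
corr = np.corrcoef(sta, true_filter)[0, 1]
```

For Gaussian white-noise input, the STA is proportional to the true filter, so `corr` is close to 1; for the strongly correlated natural stimuli discussed in the abstract, this simple estimator becomes biased and the regularized fitting methods the review covers are needed.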
Affiliation(s)
- Johnatan Aljadeff
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA.
- Benjamin J Lansdell
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; WRF UW Institute for Neuroengineering, University of Washington, Seattle, WA 98195, USA
- David Kleinfeld
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Section of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA; Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA.
41
Luo X, Gee S, Sohal V, Small D. A point-process response model for spike trains from single neurons in neural circuits under optogenetic stimulation. Stat Med 2016; 35:455-74. [PMID: 26411923] [PMCID: PMC4713323] [DOI: 10.1002/sim.6742]
Abstract
Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high-frequency point process (neuronal spikes) while the input is another high-frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, point-process responses for optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations; our model yields area-under-the-curve values as high as 93% for predicting each future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. The model also enables understanding of how neural circuits are altered under various disease and/or experimental conditions by comparing the PRO parameters.
Affiliation(s)
- X. Luo
- Department of Biostatistics, Brown University, Providence, Rhode Island 02912, USA
- S. Gee
- Department of Psychiatry and Neuroscience Graduate Program, University of California, San Francisco, California 94143, USA
- V. Sohal
- Department of Psychiatry and Neuroscience Graduate Program, University of California, San Francisco, California 94143, USA
- D. Small
- Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
42
Surace SC, Pfister JP. A Statistical Model for In Vivo Neuronal Dynamics. PLoS One 2015; 10:e0142435. [PMID: 26571371] [PMCID: PMC4646699] [DOI: 10.1371/journal.pone.0142435]
Abstract
Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. Such models are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process in which the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, and arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare intracellular in vivo recordings from different animals and experimental conditions.
Affiliation(s)
- Simone Carlo Surace, Department of Physiology, University of Bern, Bern, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Jean-Pascal Pfister, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
43
Rabinowitz NC, Goris RL, Cohen M, Simoncelli EP. Attention stabilizes the shared gain of V4 populations. eLife 2015; 4:e08998. [PMID: 26523390 PMCID: PMC4758958 DOI: 10.7554/elife.08998] [Citation(s) in RCA: 119] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2015] [Accepted: 11/01/2015] [Indexed: 12/31/2022] Open
Abstract
Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI:http://dx.doi.org/10.7554/eLife.08998.001 Our brains receive an enormous amount of information from our senses. However, we can’t deal with it all at once; the brain must selectively focus on a portion of this information. This process of selective focus is generally called “attention”. In the visual system, this is believed to operate as a kind of amplifier that selectively boosts the signals of a particular subset of nerve cells (also known as “neurons”). Rabinowitz et al. built a model to study the activity of large populations of neurons in an area of the visual cortex known as V4. This model made it possible to detect hidden signals that control the attentional boosting of these neurons. Rabinowitz et al. show that when a monkey carries out a visual task, the neurons in V4 are under the influence of a small number of shared amplification signals that fluctuate in strength. 
These amplification signals selectively affect V4 neurons that process different parts of the visual scene. Furthermore, when the monkey directs their attention to a part of the visual scene, the associated amplifier reduces its fluctuations. This has the side effect of both boosting and stabilizing the responses of the affected V4 neurons, as well as increasing their independence. Rabinowitz et al.’s findings suggest that when we focus our attention on incoming information, we make the responses of particular neurons larger and reduce unwanted variability to improve the quality of the represented information. The next challenge is to understand what causes these fluctuations in the amplification signals. DOI:http://dx.doi.org/10.7554/eLife.08998.002
Affiliation(s)
- Neil C Rabinowitz, Center for Neural Science, Howard Hughes Medical Institute, New York University, New York, United States
- Robbe L Goris, Center for Neural Science, Howard Hughes Medical Institute, New York University, New York, United States
- Marlene Cohen, Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Eero P Simoncelli, Center for Neural Science, Howard Hughes Medical Institute, New York University, New York, United States
44
Soudry D, Keshri S, Stinson P, Oh MH, Iyengar G, Paninski L. Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data. PLoS Comput Biol 2015; 11:e1004464. [PMID: 26465147 PMCID: PMC4605541 DOI: 10.1371/journal.pcbi.1004464] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2014] [Accepted: 07/09/2015] [Indexed: 11/19/2022] Open
Abstract
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The “common input” problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons versus correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network. Consequently, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a “shotgun” experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe much larger subsets of the network over the course of the entire experiment, thus ameliorating the common input problem. Using a generalized linear model for a spiking recurrent neural network, we develop a scalable approximate expected loglikelihood-based Bayesian method to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that the shotgun experimental design can eliminate the biases induced by common input effects. Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be quickly and accurately estimated, achieving orders of magnitude speed up over previous approaches. Optical imaging of the activity in a neuronal network is limited by the scanning speed of the imaging device. Therefore, typically, only a small fixed part of the network is observed during the entire experiment. However, in such an experiment, it can be hard to infer from the observed activity patterns whether (1) a neuron A directly affects neuron B, or (2) another, unobserved neuron C affects both A and B. 
To deal with this issue, we propose a “shotgun” observation scheme, in which, at each time point, we observe a small changing subset of the neurons from the network. Consequently, many fewer neurons remain completely unobserved during the entire experiment, enabling us to eventually distinguish between cases (1) and (2) given sufficiently long experiments. Since previous inference algorithms cannot efficiently handle so many missing observations, we develop a scalable algorithm for data acquired using the shotgun observation scheme, in which only a small fraction of the neurons are observed in each time bin. Using this kind of simulated data, we show the algorithm is able to quickly infer connectivity in spiking recurrent networks with thousands of neurons.
Affiliation(s)
- Daniel Soudry, Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Suraj Keshri, Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Patrick Stinson, Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
- Min-Hwan Oh, Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Garud Iyengar, Department of Industrial Engineering and Operations Research, Columbia University, New York, New York, United States of America
- Liam Paninski, Department of Statistics, Department of Neuroscience, the Center for Theoretical Neuroscience, the Grossman Center for the Statistics of Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center, Columbia University, New York, New York, United States of America
45
Ganmor E, Segev R, Schneidman E. A thesaurus for a neural population code. eLife 2015; 4. [PMID: 26347983 PMCID: PMC4562117 DOI: 10.7554/elife.06134] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 08/02/2015] [Indexed: 11/15/2022] Open
Abstract
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI:http://dx.doi.org/10.7554/eLife.06134.001 Our ability to perceive the world is dependent on information from our senses being passed between different parts of the brain. The information is encoded as patterns of electrical pulses or ‘spikes’, which other brain regions must be able to decipher. Cracking this code would thus enable us to predict the patterns of nerve impulses that would occur in response to specific stimuli, and ‘decode’ which stimuli had produced particular patterns of impulses. This task is challenging in part because of its scale—vast numbers of stimuli are encoded by huge numbers of neurons that can send their spikes in many different combinations. Furthermore, neurons are inherently noisy and their response to identical stimuli may vary considerably in the number of spikes and their timing. This means that the brain cannot simply link a single unchanging pattern of firing with each stimulus, because these firing patterns are often distorted by biophysical noise. 
Ganmor et al. have now modeled the effects of noise in a network of neurons in the retina (found at the back of the eye), and, in doing so, have provided insights into how the brain solves this problem. This has brought us a step closer to cracking the neural code. First, 10 second video clips of natural scenes and artificial stimuli were played on a loop to a sample of retina taken from a salamander, and the responses of nearly 100 neurons in the sample were recorded for two hours. Dividing the 10 second clip into short segments provided a series of 500 stimuli, which the network had been exposed to more than 600 times. Ganmor et al. analyzed the responses of groups of 20 cells to each stimulus and found that physically similar firing patterns were not particularly likely to encode the same stimulus. This can be likened to the way that words such as ‘light’ and ‘night’ have similar structures but different meanings. Instead, the model reveals that each stimulus was represented by a cluster of firing patterns that bore little physical resemblance to one another, but which nevertheless conveyed the same meaning. To continue with the previous example, this is similar to the way that ‘light’ and ‘illumination’ have the same meaning but different structures. Ganmor et al. use these new data to map the organization of the ‘vocabulary’ of populations of cells in the retina, and put together a kind of ‘thesaurus’ that enables new activity patterns of the retina to be decoded and could be used to crack the neural code. Furthermore, the organization of ‘synonyms’ is strikingly similar to codes that are favored in many forms of telecommunication. In these man-made codes, codewords that represent different items are chosen to be so distinct from each other that even if they were corrupted by noise, they could be correctly deciphered.
Correspondingly, in the retina, patterns that carry the same meaning occupy a distinct area, and new patterns can be interpreted based on their proximity to these clusters. DOI:http://dx.doi.org/10.7554/eLife.06134.002
Affiliation(s)
- Elad Ganmor, Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Ronen Segev, Department of Life Sciences, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Elad Schneidman, Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
46
Lakshmanan KC, Sadtler PT, Tyler-Kabara EC, Batista AP, Yu BM. Extracting Low-Dimensional Latent Structure from Time Series in the Presence of Delays. Neural Comput 2015; 27:1825-56. [PMID: 26079746 PMCID: PMC4545403 DOI: 10.1162/neco_a_00759] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Noisy, high-dimensional time series observations can often be described by a set of low-dimensional latent variables. Commonly used methods to extract these latent variables typically assume instantaneous relationships between the latent and observed variables. In many physical systems, changes in the latent variables manifest as changes in the observed variables after time delays. Techniques that do not account for these delays can recover a larger number of latent variables than are present in the system, thereby making the latent representation more difficult to interpret. In this work, we introduce a novel probabilistic technique, time-delay gaussian-process factor analysis (TD-GPFA), that performs dimensionality reduction in the presence of a different time delay between each pair of latent and observed variables. We demonstrate how using a gaussian process to model the evolution of each latent variable allows us to tractably learn these delays over a continuous domain. Additionally, we show how TD-GPFA combines temporal smoothing and dimensionality reduction into a common probabilistic framework. We present an expectation/conditional maximization either (ECME) algorithm to learn the model parameters. Our simulations demonstrate that when time delays are present, TD-GPFA is able to correctly identify these delays and recover the latent space. We then applied TD-GPFA to the activity of tens of neurons recorded simultaneously in the macaque motor cortex during a reaching task. TD-GPFA is able to better describe the neural activity using a more parsimonious latent space than GPFA, a method that has been used to interpret motor cortex data but does not account for time delays. More broadly, TD-GPFA can help to unravel the mechanisms underlying high-dimensional time series data by taking into account physical delays in the system.
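The motivating problem above, that methods assuming instantaneous latent-observed relationships recover spurious extra dimensions when channels see a shared latent at different delays, can be demonstrated with a toy example. This is not TD-GPFA itself: the sinusoidal latent, the specific delays, and the power-iteration eigensolver are illustrative assumptions.

```python
import math

def delayed_channels(delays, T=500, period=50):
    """Each channel observes the same sinusoidal latent, shifted by its own delay."""
    return [[math.sin(2.0 * math.pi * (t - d) / period) for t in range(T)]
            for d in delays]

def covariance(channels):
    """Sample covariance matrix of equal-length channels."""
    n, T = len(channels), len(channels[0])
    mu = [sum(c) / T for c in channels]
    return [[sum((channels[i][t] - mu[i]) * (channels[j][t] - mu[j])
                 for t in range(T)) / T
             for j in range(n)] for i in range(n)]

def top_eigenvalues(C, k=2, iters=500):
    """Leading eigenvalues of a small symmetric PSD matrix: power iteration + deflation."""
    n = len(C)
    C = [row[:] for row in C]
    eigs = []
    for _ in range(k):
        v = [1.0 / (i + 1.0) for i in range(n)]
        for _ in range(iters):
            w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(n)) for i in range(n))
        eigs.append(lam)
        for i in range(n):  # deflate: remove the recovered component
            for j in range(n):
                C[i][j] -= lam * v[i] * v[j]
    return eigs

# One latent, three channels. Without delays the covariance is rank one;
# with delays, an instantaneous factor model needs extra latent dimensions.
aligned = top_eigenvalues(covariance(delayed_channels([0, 0, 0])))
delayed = top_eigenvalues(covariance(delayed_channels([0, 12, 25])))
ratio_aligned = aligned[1] / aligned[0]   # near zero: one latent suffices
ratio_delayed = delayed[1] / delayed[0]   # large: a spurious second dimension appears
```

The inflated second eigenvalue in the delayed case is exactly the "larger number of latent variables than are present in the system" that TD-GPFA is designed to avoid by learning the delays explicitly.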
Affiliation(s)
- Karthik C Lakshmanan, Robotics Institute and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
- Patrick T Sadtler, Department of Bioengineering, Center for the Neural Basis of Cognition, and Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A.
- Elizabeth C Tyler-Kabara, Department of Neurological Surgery, Department of Bioengineering, and Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A.
- Aaron P Batista, Department of Bioengineering, Center for the Neural Basis of Cognition, and Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A.
- Byron M Yu, Department of Electrical Engineering and Computer Engineering, Department of Biomedical Engineering, and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
47
Chen SC, Morley JW, Solomon SG. Spatial precision of population activity in primate area MT. J Neurophysiol 2015; 114:869-78. [PMID: 26041825 PMCID: PMC4533107 DOI: 10.1152/jn.00152.2015] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2015] [Accepted: 06/01/2015] [Indexed: 11/22/2022] Open
Abstract
The middle temporal (MT) area is a cortical area integral to the "where" pathway of primate visual processing, signaling the movement and position of objects in the visual world. The receptive field of a single MT neuron is sensitive to the direction of object motion but is too large to signal precise spatial position. Here, we asked if the activity of MT neurons could be combined to support the high spatial precision required in the where pathway. With the use of multielectrode arrays, we recorded simultaneously neural activity at 24-65 sites in area MT of anesthetized marmoset monkeys. We found that although individual receptive fields span more than 5° of the visual field, the combined population response can support fine spatial discriminations (<0.2°). This is because receptive fields at neighboring sites overlapped substantially, and changes in spatial position are therefore projected onto neural activity in a large ensemble of neurons. This fine spatial discrimination is supported primarily by neurons with receptive fields flanking the target locations. Population performance is degraded (by 13-22%) when correlations in neural activity are ignored, further reflecting the contribution of population neural interactions. Our results show that population signals can provide high spatial precision despite large receptive fields, allowing area MT to represent both the motion and the position of objects in the visual world.
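The population-coding argument above, that overlapping receptive fields let an ensemble localize a stimulus far more precisely than any single receptive field, can be illustrated with a generic maximum-likelihood decoder over Gaussian tuning curves. This is not the paper's decoding analysis of MT data; the neuron count, tuning width, and firing rates below are invented for the sketch.

```python
import math
import random

def poisson(lam, rng):
    """Poisson draw via Knuth's multiplication method (fine for modest rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def rate(pos, center, width=5.0, peak=20.0, base=0.5):
    """Gaussian receptive field: expected spike count as a function of stimulus position."""
    return base + peak * math.exp(-((pos - center) ** 2) / (2.0 * width ** 2))

def decode(counts, centers, grid):
    """Maximum-likelihood position estimate under independent Poisson spiking."""
    def loglik(s):
        return sum(k * math.log(rate(s, c)) - rate(s, c)
                   for k, c in zip(counts, centers))
    return max(grid, key=loglik)

rng = random.Random(1)
centers = [c - 10.0 for c in range(21)]       # receptive fields tiled 1 deg apart, ~5 deg wide
grid = [g / 10.0 - 5.0 for g in range(101)]   # candidate positions in 0.1 deg steps
true_pos = 0.3
errors = sorted(abs(decode([poisson(rate(true_pos, c), rng) for c in centers],
                           centers, grid) - true_pos)
                for _ in range(50))
median_error = errors[25]
# The median error is a fraction of a degree, far below the 5 deg receptive-field width,
# because small position shifts move activity across many overlapping receptive fields.
```

Note this toy decoder assumes independent Poisson neurons, so it cannot reproduce the paper's finding that ignoring correlations degrades performance; it only illustrates the sub-receptive-field precision of a population read-out.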
Affiliation(s)
- Spencer C Chen, Australian Research Council Centre of Excellence for Integrative Brain Function, The University of Sydney, New South Wales, Australia; School of Medical Sciences, The University of Sydney, New South Wales, Australia
- John W Morley, School of Medicine, University of Western Sydney, Penrith, New South Wales, Australia
- Samuel G Solomon, School of Medical Sciences, The University of Sydney, New South Wales, Australia; Institute for Behavioural Neuroscience, University College London, London, United Kingdom
48
Zaytsev YV, Morrison A, Deger M. Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. J Comput Neurosci 2015; 39:77-103. [PMID: 26041729 PMCID: PMC4493949 DOI: 10.1007/s10827-015-0565-5] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2014] [Revised: 04/18/2015] [Accepted: 04/22/2015] [Indexed: 10/30/2022]
Abstract
Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
Affiliation(s)
- Yury V. Zaytsev, Simulation Laboratory Neuroscience – Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Forschungszentrum Jülich GmbH, Jülich Supercomputing Center (JSC), 52425 Jülich, Germany
- Abigail Morrison, Simulation Laboratory Neuroscience – Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience & Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Center and JARA, Jülich, Germany; Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Moritz Deger, School of Life Sciences, Brain Mind Institute and School of Computer and Communication Sciences, École polytechnique fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
49
Roudi Y, Dunn B, Hertz J. Multi-neuronal activity and functional connectivity in cell assemblies. Curr Opin Neurobiol 2015; 32:38-44. [DOI: 10.1016/j.conb.2014.10.011] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2014] [Revised: 10/20/2014] [Accepted: 10/20/2014] [Indexed: 12/01/2022]
50
Simultaneous silence organizes structured higher-order interactions in neural populations. Sci Rep 2015; 5:9821. [PMID: 25919985 PMCID: PMC4412118 DOI: 10.1038/srep09821] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2014] [Accepted: 03/18/2015] [Indexed: 11/18/2022] Open
Abstract
Activity patterns of neural populations are constrained by underlying biological mechanisms. These patterns are characterized not only by individual activity rates and pairwise correlations but also by statistical dependencies among groups of neurons larger than two, known as higher-order interactions (HOIs). While HOIs are ubiquitous in neural activity, the primary characteristics of HOIs remain unknown. Here, we report that simultaneous silence (SS) of neurons concisely summarizes neural HOIs. Spontaneously active neurons in cultured hippocampal slices express SS that is more frequent than predicted by their individual activity rates and pairwise correlations. The SS explains structured HOIs seen in the data, namely, alternating signs at successive interaction orders. Inhibitory neurons are necessary to maintain significant SS. The structured HOIs predicted by SS were observed in a simple neural population model characterized by spiking nonlinearity and correlated input. These results suggest that SS is a ubiquitous feature of HOIs that constrain neural activity patterns and can influence information processing.
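The qualitative effect described above, shared input making simultaneous silence more frequent than expected, can be reproduced in a toy common-input model. Note this sketch compares against the weaker rates-only (independent-neurons) baseline rather than the paper's pairwise-corrected prediction, and all probabilities are invented.

```python
import random

rng = random.Random(7)
N_NEURONS, N_TRIALS = 5, 20000
P_SHARED, P_HIGH, P_LOW = 0.3, 0.4, 0.05  # invented probabilities

# Each trial: a shared binary input raises every neuron's spike probability.
trials = []
for _ in range(N_TRIALS):
    p = P_HIGH if rng.random() < P_SHARED else P_LOW
    trials.append([rng.random() < p for _ in range(N_NEURONS)])

# Observed frequency of simultaneous silence (no neuron spikes at all).
p_all_silent = sum(not any(trial) for trial in trials) / N_TRIALS

# Prediction if neurons were independent, each with its measured silence probability.
silence_probs = [sum(not trial[i] for trial in trials) / N_TRIALS
                 for i in range(N_NEURONS)]
independent_pred = 1.0
for q in silence_probs:
    independent_pred *= q
# Common input makes simultaneous silence markedly more frequent than the
# independence prediction (analytically about 0.57 vs 0.43 for these parameters).
```

This mirrors the paper's "correlated input plus spiking nonlinearity" population model in spirit: the shared input concentrates probability mass on the all-silent pattern, which is the signature the authors exploit to summarize higher-order interactions.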