1. Geadah V, Barello G, Greenidge D, Charles AS, Pillow JW. Sparse-Coding Variational Autoencoders. Neural Comput 2024; 36:2571-2601. PMID: 39383030. DOI: 10.1162/neco_a_01715.
Abstract
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. However, the original sparse coding model suffered from two key limitations: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics, and (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational autoencoder (VAE) framework. We introduce the sparse coding variational autoencoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parameterized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation for the mapping from image patches to neural activities and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
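The three departures from a standard VAE listed in the abstract (overcomplete latent, sparse prior, linear decoder) can be made concrete in a few lines. Below is a minimal NumPy sketch of a Monte-Carlo ELBO estimate for one image patch, assuming a factorized Gaussian recognition model and a unit-scale Laplace prior; all names, dimensions, and the noise variance are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 128                                # D pixels, K latents (overcomplete: K > D)
Phi = rng.normal(0, 1 / np.sqrt(D), (D, K))   # linear decoder (dictionary)

def elbo(x, mu, log_sig, Phi, noise_var=0.01, n_samples=10):
    """Monte-Carlo ELBO for one patch x, with Gaussian recognition model
    q(z|x) = N(mu, sig^2) and a factorized Laplace(0, 1) prior over z."""
    sig = np.exp(log_sig)
    total = 0.0
    for _ in range(n_samples):
        z = mu + sig * rng.normal(size=mu.shape)       # reparameterized sample
        recon = Phi @ z                                # linear projection decoder
        log_lik = (-0.5 * np.sum((x - recon) ** 2) / noise_var
                   - 0.5 * len(x) * np.log(2 * np.pi * noise_var))
        log_prior = np.sum(-np.abs(z) - np.log(2.0))   # sparse (Laplace) prior
        log_q = np.sum(-0.5 * ((z - mu) / sig) ** 2
                       - log_sig - 0.5 * np.log(2 * np.pi))
        total += log_lik + log_prior - log_q
    return total / n_samples

x = rng.normal(size=D)                        # stand-in for a whitened image patch
mu, log_sig = 0.1 * rng.normal(size=K), np.full(K, -1.0)
print(elbo(x, mu, log_sig, Phi))              # quantity maximized during fitting
```

In a full fit, `mu` and `log_sig` would be produced by the recognition network, and the ELBO would be maximized by gradient ascent over both decoder and recognition parameters.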
Affiliation(s)
- Victor Geadah: Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, U.S.A.
- Gabriel Barello: Institute of Neuroscience, University of Oregon, Eugene, OR 97403, U.S.A.
- Daniel Greenidge: Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Adam S Charles: Department of Biomedical Engineering, Center for Imaging Science, and Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21218, U.S.A.
- Jonathan W Pillow: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
2. Nicola W, Newton TR, Clopath C. The impact of spike timing precision and spike emission reliability on decoding accuracy. Sci Rep 2024; 14:10536. PMID: 38719897. PMCID: PMC11078995. DOI: 10.1038/s41598-024-58524-7.
Abstract
Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a potential time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing, or unreliable in their emission, the error incurred in decoding with these spikes would decrease sub-linearly. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time-series with sequences of spikes would maintain a linear decrease with network size. The spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square-root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
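The qualitative effect described above, that timing jitter limits what additional spikes can buy, can be illustrated with a toy fixed-decoder experiment: fit a linear decoder on precise spike times, then evaluate it when the spike times jitter. This is only a qualitative illustration with arbitrary kernel shape, width, and jitter level; it does not reproduce the paper's analytical 1/N rate or its spike-failure results.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
target = np.sin(2 * np.pi * t)              # the time series to be decoded

def rmse(n_spikes, jitter=0.0, width=0.05):
    """Fit decoding weights on precise, evenly spaced spike times, then
    reconstruct from (optionally jittered) spike times with those weights."""
    times = np.linspace(0, 1, n_spikes)
    basis = np.exp(-0.5 * ((t[:, None] - times) / width) ** 2)
    reg = 1e-6 * np.eye(n_spikes)           # small ridge term for conditioning
    w = np.linalg.solve(basis.T @ basis + reg, basis.T @ target)
    jtimes = times + jitter * rng.normal(size=n_spikes)   # trial-to-trial jitter
    jbasis = np.exp(-0.5 * ((t[:, None] - jtimes) / width) ** 2)
    return np.sqrt(np.mean((jbasis @ w - target) ** 2))

for n in (8, 16, 32, 64):
    precise = rmse(n)
    jittered = np.mean([rmse(n, jitter=0.02) for _ in range(20)])
    print(f"N={n:3d}  precise RMSE={precise:.2e}  jittered RMSE={jittered:.2e}")
```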
Affiliation(s)
- Wilten Nicola: University of Calgary, Calgary, Canada; Department of Cell Biology and Anatomy, Calgary, Canada; Hotchkiss Brain Institute, Calgary, Canada
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, UK
3. Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. PMID: 38658028. DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
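The "difference of two convex functions" claim is the heart of the approximation result: one convex function for the inhibitory boundary, one for the excitatory boundary. A hedged NumPy sketch of the underlying mathematical fact, with the population thresholds abstracted away, is to split a target function's second derivative into its positive and negative parts and integrate each part twice:

```python
import numpy as np

# Any smooth function is a difference of two convex functions (up to an affine
# term): integrate the positive and negative parts of f'' twice.
x = np.linspace(-np.pi, np.pi, 1001)
dx = x[1] - x[0]
f = np.sin(x)                                   # target input-output mapping

f2 = np.gradient(np.gradient(f, dx), dx)        # numerical f''
g2, h2 = np.maximum(f2, 0), np.maximum(-f2, 0)  # nonnegative curvatures

def integrate_twice(curv):
    """Trapezoid-rule double integral, so the result has curvature = curv
    (hence is convex when curv >= 0)."""
    first = np.concatenate([[0.0], np.cumsum((curv[1:] + curv[:-1]) / 2) * dx])
    return np.concatenate([[0.0], np.cumsum((first[1:] + first[:-1]) / 2) * dx])

g, h = integrate_twice(g2), integrate_twice(h2)
# g - h equals f up to an affine term; fit and remove that term.
A = np.vstack([x, np.ones_like(x)]).T
coef, *_ = np.linalg.lstsq(A, f - (g - h), rcond=None)
recon = g - h + A @ coef
print("max |reconstruction - target|:", np.max(np.abs(recon - f)))
```

In the paper's framework each convex part would itself be represented by the combined thresholds of a spiking population (a maximum over affine functions); this snippet only verifies the decomposition.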
Affiliation(s)
- William F Podlaski: Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens: Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
4. Hou B, Ma J, Yang F. Energy-guided synapse coupling between neurons under noise. J Biol Phys 2023; 49:49-76. PMID: 36640246. PMCID: PMC9958228. DOI: 10.1007/s10867-022-09622-y.
Abstract
From a physical viewpoint, any external stimulus, including noise disturbance, can inject energy into the medium, and the electric response is regulated by the equivalent electric stimulus. For example, the mode of electric activity in neurons can change, and various spatial patterns form during wave propagation. In this paper, a feasible criterion is suggested to explain and control the growth of electric synapses and memristive synapses between Hindmarsh-Rose neurons in the presence of noise. It is claimed that synaptic coupling can be enhanced adaptively due to energy diversity, and the coupling intensity increases to a saturation value until the two neurons reach a certain energy balance. Two identical neurons can reach perfect synchronization when the electric synapse coupling is increased further. This scheme is also considered in a chain neural network with uniform noise applied to all neurons. However, reaching synchronization becomes difficult for neurons presenting spiking, bursting, chaotic, and periodic patterns, even when the local energy balance is disrupted so that the coupling intensity continues to grow. In the presence of noise, energy diversity becomes uncertain because of spatial diversity in excitability, and the development of regular patterns is blocked. A similar scheme is used to control the growth of memristive synapses between neurons, and the synchronization stability and pattern formation are effectively controlled by the energy diversity among neurons. These results provide possible guidance for understanding the biophysical mechanism of synapse growth and suggest that energy flow can be used to control synchronous patterns between neurons.
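A minimal sketch of the adaptive-coupling idea, for two noisy Hindmarsh-Rose neurons connected by an electric synapse, might look as follows. The quadratic "energy" used here is a simple stand-in, not the Hamilton energy function of the paper, and all parameter values are illustrative.

```python
import numpy as np

def hr_deriv(u, I):
    """Hindmarsh-Rose neuron: membrane x, recovery y, slow adaptation z."""
    x, y, z = u
    return np.array([y - x**3 + 3 * x**2 - z + I,
                     1 - 5 * x**2 - y,
                     0.006 * (4 * (x + 1.6) - z)])

rng = np.random.default_rng(2)
dt, steps = 0.01, 50000
u1, u2 = np.array([0.1, 0.0, 0.0]), np.array([-0.5, 0.1, 0.0])
k, k_max, alpha = 0.0, 1.0, 1e-3     # adaptive electric coupling gain
gap = 0.0

for i in range(steps):
    I1, I2 = 1.8 + 0.1 * rng.normal(), 1.8 + 0.1 * rng.normal()  # noisy drive
    d1, d2 = hr_deriv(u1, I1), hr_deriv(u2, I2)
    d1[0] += k * (u2[0] - u1[0])     # electric synapse on the membrane equation
    d2[0] += k * (u1[0] - u2[0])
    u1, u2 = u1 + dt * d1, u2 + dt * d2
    # Proxy "energy" per neuron: a quadratic stand-in, NOT the paper's
    # Hamilton energy function.
    e1, e2 = 0.5 * np.dot(u1, u1), 0.5 * np.dot(u2, u2)
    # Coupling grows with energy diversity and stalls once energies balance.
    k = min(k_max, k + alpha * dt * abs(e1 - e2))
    if i >= steps - 5000:
        gap += abs(u1[0] - u2[0]) / 5000

print(f"final coupling k = {k:.3f}, mean |x1 - x2| over last 5000 steps: {gap:.3f}")
```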
Affiliation(s)
- Bo Hou: Department of Physics, Lanzhou University of Technology, Lanzhou, 730050, China
- Jun Ma: Department of Physics, Lanzhou University of Technology, Lanzhou, 730050, China; School of Science, Chongqing University of Posts and Telecommunications, Chongqing, 430065, China; College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou, 730050, China
- Feifei Yang: College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou, 730050, China
5. Timcheck J, Kadmon J, Boahen K, Ganguli S. Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS Comput Biol 2022; 18:e1010593. PMID: 36251693. PMCID: PMC9576105. DOI: 10.1371/journal.pcbi.1010593.
Abstract
Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly-coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power-laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the buildup of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
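A toy version of the setup, a tightly balanced predictive-coding network of LIF-like units with delayed lateral inhibition and injected voltage noise, can be scanned over noise levels as below. Whether a clear interior optimum appears depends on the (illustrative) parameters; the paper derives this dependence analytically rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, lam, steps = 20, 1e-4, 10.0, 10000
G = np.where(np.arange(N) < N // 2, 1.0, -1.0) * 0.1   # 1D decoding weights
thresh = G**2 / 2                                      # derived spike thresholds
W_lat = np.outer(G, G) - np.diag(G**2)                 # lateral (cross) coupling

tt = np.arange(steps) * dt
x = np.sin(2 * np.pi * 2 * tt)                         # target signal
c = np.gradient(x, dt) + lam * x                       # feedforward command

def run(noise_sigma, delay_steps=5):
    V, xhat, sq_err = np.zeros(N), 0.0, 0.0
    buf = np.zeros((delay_steps, N))                   # spike transmission line
    for i in range(steps):
        s_delayed = buf[i % delay_steps]
        V += dt * (-lam * V + G * c[i]) \
             + noise_sigma * np.sqrt(dt) * rng.normal(size=N)
        s = (V > thresh).astype(float)
        V -= G**2 * s                                  # instantaneous self-reset
        V -= W_lat @ s_delayed                         # lateral input arrives late
        buf[i % delay_steps] = s
        xhat += dt * (-lam * xhat) + G @ s             # linear decode of spikes
        sq_err += (x[i] - xhat) ** 2
    return np.sqrt(sq_err / steps)

for sigma in (0.0, 0.01, 0.05, 0.2, 1.0):
    print(f"noise sigma={sigma:<5} decode RMSE={run(sigma):.3f}")
```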
Affiliation(s)
- Jonathan Timcheck: Department of Physics, Stanford University, Stanford, California, United States of America
- Jonathan Kadmon: Department of Applied Physics, Stanford University, Stanford, California, United States of America
- Kwabena Boahen: Department of Bioengineering, Stanford University, Stanford, California, United States of America
- Surya Ganguli: Department of Applied Physics, Stanford University, Stanford, California, United States of America
6. Duggins P, Eliasmith C. Constructing functional models from biophysically-detailed neurons. PLoS Comput Biol 2022; 18:e1010461. PMID: 36074765. PMCID: PMC9455888. DOI: 10.1371/journal.pcbi.1010461.
Abstract
Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called "oracle-supervised Neural Engineering Framework" (osNEF) to train biologically-detailed spiking neural networks that realize a variety of cognitively-relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58 − 99% and exponential forgetting with time constants of τ = 2.4 − 71s. These results demonstrate that osNEF can train functional brain models using biologically-detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities.

Computational models of biologically realistic neural networks help scientists understand and recreate a wide variety of brain processes, responsible for everything from fish locomotion to human cognition. To be useful, these models must both recreate features of the brain, such as the electrical, chemical, and geometric properties of neurons, and perform useful functional operations, such as storing and retrieving information from a short term memory. Here, we develop a new method for training networks built from biologically detailed components. We simulate networks that contain a variety of complex neurons and synapses, then show that our method successfully trains them to perform a variety of cognitive operations. Most notably, we train a working memory model that contains detailed reconstructions of cortical neurons, and demonstrate that it performs a memory task with performance that is comparable to simple animals. Researchers can use our method to train detailed brain models and investigate how biological features (or deficits thereof) relate to cognition, which may provide insights into the biological basis of mental disorders such as Parkinson's disease.
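osNEF builds on the Neural Engineering Framework's core operation: solving for linear decoders over a population's tuning curves by regularized least squares. The sketch below shows only that core solve, with rectified-linear stand-ins for the neuron models; the oracle-supervised extension to biophysically detailed neurons and synapses is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100                                   # neurons in the ensemble
x = np.linspace(-1, 1, 200)[:, None]      # samples of the represented value

# Random encoders, gains, and biases define heterogeneous tuning curves.
enc = rng.choice([-1.0, 1.0], size=N)
gain = rng.uniform(0.5, 2.0, size=N)
bias = rng.uniform(-1.0, 1.0, size=N)

def rates(x):
    """Rectified-linear tuning curves (a stand-in for LIF response curves)."""
    return np.maximum(0.0, gain * (x * enc) + bias)

A = rates(x)                              # activity matrix, shape (200, N)
target = x[:, 0] ** 2                     # function to compute: f(x) = x^2

# Regularized least squares for the decoders: the core NEF solve, which
# osNEF extends with oracle-supervised target currents for detailed neurons.
reg = 0.1 * N
D = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ target)
print("decode RMSE:", np.sqrt(np.mean((A @ D - target) ** 2)))
```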
Affiliation(s)
- Peter Duggins: Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Chris Eliasmith: Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
7. Srinivasan A, Srinivasan A, Riceberg JS, Goodman MR, Guise KG, Shapiro ML. An in silico model for determining the influence of neuronal co-activity on rodent spatial behavior. J Neurosci Methods 2022; 377:109627. PMID: 35609789. PMCID: PMC11073634. DOI: 10.1016/j.jneumeth.2022.109627.
Abstract
BACKGROUND: Neuropsychological and neurophysiological analyses focus on understanding how neuronal activity and co-activity predict behavior. Experimental techniques allow for modulation of neuronal activity, but do not control neuronal ensemble spatiotemporal firing patterns, and there are few, if any, sophisticated in silico techniques which accurately reconstruct physiological neural spike trains and behavior using unit co-activity as an input parameter.
NEW METHOD: Our approach to simulation of neuronal spike trains is based on using state space modeling to estimate a weighted graph of interaction strengths between pairs of neurons, along with separate estimations of spiking threshold voltage and neuronal membrane leakage. These parameters allow us to tune a biophysical model which is then employed to accurately reconstruct spike trains from freely behaving animals, and then use these spike trains to estimate an animal's spatial behavior. The reconstructed spatial behavior allows us to confirm the same information is present in both the recorded and simulated spike trains.
RESULTS: Our method reconstructs spike trains (98 ± 0.0013% similarity to the original spike trains, mean ± SEM) and animal position (to within 9.468 ± 0.240 cm, mean ± SEM) with high fidelity.
COMPARISON WITH EXISTING METHOD(S): To our knowledge, this is the first method that uses empirically derived network connectivity to constrain biophysical parameters and predict spatial behavior. Together, these methods allow in silico quantification of the contribution of specific unit activity and co-activity to animal spatial behavior.
CONCLUSIONS: Our novel approach provides a flexible, robust in silico technique for determining the contribution of specific neuronal activity and co-activity to spatial behavior.
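As a rough, simplified stand-in for the interaction-graph estimation step, one can recover pairwise coupling weights from spike trains with a Bernoulli-GLM fit (logistic regression of each neuron's spikes on the previous time bin's population activity). This is not the paper's state-space estimator, and the ground-truth network here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 5, 20000
W_true = 0.8 * rng.normal(size=(N, N)) / np.sqrt(N)   # pairwise interactions
np.fill_diagonal(W_true, -1.0)                        # self-suppression
base = -2.5                                           # baseline log-odds

# Generate spikes from a Bernoulli-GLM ground truth.
S = np.zeros((T, N))
for t in range(1, T):
    drive = base + S[t - 1] @ W_true.T
    S[t] = rng.random(N) < 1 / (1 + np.exp(-drive))

# Estimate couplings by logistic regression of each neuron's spikes on the
# previous bin's population activity (gradient ascent on the likelihood).
X = np.hstack([S[:-1], np.ones((T - 1, 1))])          # predictors + bias column
W_hat = np.zeros((N, N + 1))
lr = 0.5 / T
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ W_hat.T))                # predicted spike probs
    W_hat += lr * (S[1:] - p).T @ X                   # likelihood gradient step

corr = np.corrcoef(W_true.ravel(), W_hat[:, :N].ravel())[0, 1]
print(f"correlation between true and estimated couplings: {corr:.2f}")
```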
Affiliation(s)
- Aditya Srinivasan: Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208, United States
- Arvind Srinivasan: Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208, United States; College of Health Sciences, California Northstate University, 2910 Prospect Park Drive, Rancho Cordova, CA 95670, United States
- Justin S Riceberg: Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208, United States; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, Hess Center for Science and Medicine, 1470 Madison Avenue, New York, NY 10029, United States
- Michael R Goodman: Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208, United States
- Kevin G Guise: Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, Hess Center for Science and Medicine, 1470 Madison Avenue, New York, NY 10029, United States
- Matthew L Shapiro: Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208, United States
8. Ioannides G, Kourouklides I, Astolfi A. Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals. Sci Rep 2022; 12:2896. PMID: 35190579. PMCID: PMC8861015. DOI: 10.1038/s41598-022-06573-1.
Abstract
Methods on modelling the human brain as a Complex System have increased remarkably in the literature as researchers seek to understand the underlying foundations behind cognition, behaviour, and perception. Computational methods, especially Graph Theory-based methods, have recently contributed significantly in understanding the wiring connectivity of the brain, modelling it as a set of nodes connected by edges. Therefore, the brain's spatiotemporal dynamics can be holistically studied by considering a network, which consists of many neurons, represented by nodes. Various models have been proposed for modelling such neurons. A recently proposed method for training such networks, called full-FORCE, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e., the FORCE method). In this paper, the first direct applicability of a variant of the full-FORCE method to biologically-motivated Spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules. Each module is modelled as a Small-World Network (SWN), which is a specific type of a biologically-plausible graph. Thus, the first direct applicability of a variant of the full-FORCE method to modular SWNs is demonstrated, evaluated through regression and information theoretic metrics. For the first time, the aforementioned method is applied to spiking neuron models and trained on various real-life Electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that trained SRNNs match EEG signals almost perfectly, while network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, which are both more biologically plausible than previous work, can be tuned into real biological signal dynamics.
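The network substrate described above, modules that are each small-world graphs and are sparsely wired to one another, is straightforward to construct. A hedged sketch follows (adjacency only; the full-FORCE training loop and spiking dynamics are omitted, and all sizes and probabilities are illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)

def watts_strogatz(n, k, p):
    """Adjacency of a Watts-Strogatz small-world graph: ring lattice with
    k neighbours per side, each edge rewired with probability p."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < p:                 # rewire this lattice edge
                choices = [m for m in range(n) if m != i and not A[i, m]]
                t = rng.choice(choices)
            A[i, t] = A[t, i] = 1
    return A

def modular_swn(n_modules, n_per, k=4, p=0.1, p_inter=0.01):
    """Block-diagonal SWN modules plus sparse random inter-module edges."""
    n = n_modules * n_per
    A = np.zeros((n, n), dtype=int)
    for m in range(n_modules):
        s = m * n_per
        A[s:s + n_per, s:s + n_per] = watts_strogatz(n_per, k, p)
    labels = np.repeat(np.arange(n_modules), n_per)
    inter = (rng.random((n, n)) < p_inter) & (labels[:, None] != labels[None, :])
    return np.maximum(A, (inter | inter.T).astype(int))

A = modular_swn(n_modules=4, n_per=50)
print("nodes:", A.shape[0], "edges:", A.sum() // 2)
```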
Affiliation(s)
- Georgios Ioannides: Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Ioannis Kourouklides: Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology, 33 Saripolou Street, 3036, Limassol, Cyprus
- Alessandro Astolfi: Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
9. Zeldenrust F, Gutkin B, Denève S. Efficient and robust coding in heterogeneous recurrent networks. PLoS Comput Biol 2021; 17:e1008673. PMID: 33930016. PMCID: PMC8115785. DOI: 10.1371/journal.pcbi.1008673.
Abstract
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online, close to optimally, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally, and networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) we predict that integrators show strong correlations with other integrators and resonators are correlated with resonators, whereas the correlations are much weaker between neurons with different coding properties, and 2) that 'type 2' neurons are more coherent with the overall network activity than 'type 1' neurons.
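The two assumptions in the abstract pin down the spike rule: if every spike of neuron i adds a decoding vector w_i to a linearly decoded estimate, then minimizing the mean-squared error implies that the neuron should fire exactly when doing so reduces the error, i.e. when w_i · (x − x̂) > ‖w_i‖²/2. Below is a minimal greedy-rule simulation with heterogeneous decoding weights; parameters are illustrative, and the paper's delays, conductances, and neuron-type distinctions are not modeled.

```python
import numpy as np

rng = np.random.default_rng(7)
N, dim, dt, lam, steps = 50, 2, 1e-3, 5.0, 4000
# Heterogeneous decoding weights: each spike of neuron i adds w_i to the readout.
w = rng.normal(0, 0.12, size=(N, dim))

t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)

xhat = np.zeros(dim)
n_spikes, err = 0, 0.0
for i in range(steps):
    xhat *= (1 - lam * dt)                  # readout decays between spikes
    # Assumption 2 (minimize MSE): neuron i fires iff its spike reduces
    # ||x - (xhat + w_i)||^2, i.e. iff  w_i . (x - xhat) > ||w_i||^2 / 2.
    proj = w @ (x[i] - xhat) - 0.5 * np.sum(w**2, axis=1)
    j = np.argmax(proj)
    if proj[j] > 0:                         # greedy: at most one spike per step
        xhat += w[j]                        # Assumption 1: linear decoding
        n_spikes += 1
    err += np.sum((x[i] - xhat) ** 2)

print(f"spikes={n_spikes}, mean squared tracking error={err / steps:.4f}")
```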
Affiliation(s)
- Fleur Zeldenrust: Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Boris Gutkin: Group for Neural Theory, INSERM U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France; Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
- Sophie Denève: Group for Neural Theory, INSERM U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France
10. Talyansky S, Brinkman BAW. Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex. PLoS Comput Biol 2021; 17:e1008620. PMID: 33497380. PMCID: PMC7864437. DOI: 10.1371/journal.pcbi.1008620.
Abstract
The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
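The proposed mechanism, weakened homeostatic regulation of excitatory firing, can be caricatured in a few lines: intrinsic thresholds that track a target rate with a gain that is high in the "young" condition and nearly zero in the "aged" one. This toy omits the paper's spiking network and synaptic plasticity entirely; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N, steps, dt = 100, 5000, 1.0
r_target = 5.0                       # desired firing rate (Hz)

def simulate(eta):
    """Intrinsic homeostasis: thresholds track the target rate with gain eta.
    Weak eta stands in for the aged, dysregulated circuit."""
    theta = rng.uniform(0.5, 1.5, N)             # excitability thresholds
    drive = rng.gamma(2.0, 1.0, N)               # static heterogeneous input
    rates = np.zeros(N)
    for _ in range(steps):
        rates = 10.0 * np.maximum(0.0, drive - theta)   # rectified-linear rate
        theta += eta * dt * (rates - r_target)          # homeostatic update
    return rates

for eta, label in [(1e-3, "young (intact homeostasis)"),
                   (1e-6, "aged (weakened homeostasis)")]:
    r = simulate(eta)
    print(f"{label}: mean rate {r.mean():.1f} Hz, across-neuron s.d. {r.std():.1f}")
```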
Affiliation(s)
- Seth Talyansky: Catlin Gabel School, Portland, Oregon, United States of America; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Braden A. W. Brinkman: Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
11. Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. PMID: 33216741. PMCID: PMC7717583. DOI: 10.1371/journal.pcbi.1008261.
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
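The "local" formulation amounts to one change: the deterministic threshold crossing of the classic BSN is replaced by a firing probability that grows smoothly with membrane potential. A minimal sketch of the two rules side by side (the smooth hazard's shape and constants are illustrative, not the paper's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(9)
dt = 1e-3                         # 1 ms time bins

def spikes_hard(V, thresh):
    """Deterministic BSN rule: spike exactly when voltage exceeds threshold."""
    return (V > thresh).astype(float)

def spikes_soft(V, thresh, r_max=400.0, beta=8.0):
    """'Local' Poisson rule: firing probability grows smoothly with voltage.
    r_max and beta are illustrative constants, not the paper's values."""
    rate = r_max / (1 + np.exp(-beta * (V - thresh)))   # smooth hazard (Hz)
    return (rng.random(V.shape) < rate * dt).astype(float)

V = np.linspace(-1.0, 2.0, 7)     # example membrane potentials; threshold 0.5
print("V          :", np.round(V, 2))
print("hard spikes:", spikes_hard(V, 0.5))
print("P(spike)   :", np.round(400 / (1 + np.exp(-8 * (V - 0.5))) * dt, 3))
print("soft spikes:", spikes_soft(V, 0.5))
```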
Affiliation(s)
- Jonathan W. Pillow: Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
12. Stöckel A, Eliasmith C. Passive Nonlinear Dendritic Interactions as a Computational Resource in Spiking Neural Networks. Neural Comput 2020; 33:96-128. PMID: 33080158. DOI: 10.1162/neco_a_01338.
Abstract
Nonlinear interactions in the dendritic tree play a key role in neural computation. Nevertheless, modeling frameworks aimed at the construction of large-scale, functional spiking neural networks, such as the Neural Engineering Framework, tend to assume a linear superposition of postsynaptic currents. In this letter, we present a series of extensions to the Neural Engineering Framework that facilitate the construction of networks incorporating Dale's principle and nonlinear conductance-based synapses. We apply these extensions to a two-compartment LIF neuron that can be seen as a simple model of passive dendritic computation. We show that it is possible to incorporate neuron models with input-dependent nonlinearities into the Neural Engineering Framework without compromising high-level function and that nonlinear postsynaptic currents can be systematically exploited to compute a wide variety of multivariate, band-limited functions, including the Euclidean norm, controlled shunting, and nonnegative multiplication. By avoiding an additional source of spike noise, the function approximation accuracy of a single layer of two-compartment LIF neurons is on a par with or even surpasses that of two-layer spiking neural networks up to a certain target function bandwidth.
Affiliation(s)
- Andreas Stöckel: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
- Chris Eliasmith: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
13. Little DF, Snyder JS, Elhilali M. Ensemble modeling of auditory streaming reveals potential sources of bistability across the perceptual hierarchy. PLoS Comput Biol 2020; 16:e1007746. PMID: 32275706. PMCID: PMC7185718. DOI: 10.1371/journal.pcbi.1007746.
Abstract
Perceptual bistability, the spontaneous, irregular fluctuation of perception between two interpretations of a stimulus, occurs when observing a large variety of ambiguous stimulus configurations. This phenomenon has the potential to serve as a tool for, among other things, understanding how function varies across individuals due to the large individual differences that manifest during perceptual bistability. Yet it remains difficult to interpret the functional processes at work without knowing where bistability arises during perception. In this study we explore the hypothesis that bistability originates from multiple sources distributed across the perceptual hierarchy. We develop a hierarchical model of auditory processing comprising three distinct levels: a Peripheral, tonotopic analysis; a Central analysis computing features found more centrally in the auditory system; and an Object analysis, where sounds are segmented into different streams. We model bistable perception within this system by applying adaptation, inhibition and noise into one or all of the three levels of the hierarchy. We evaluate a large ensemble of variations of this hierarchical model, where each model has a different configuration of adaptation, inhibition and noise. This approach avoids the assumption that a single configuration must be invoked to explain the data. Each model is evaluated based on its ability to replicate two hallmarks of bistability during auditory streaming: the selectivity of bistability to specific stimulus configurations, and the characteristic log-normal pattern of perceptual switches. Consistent with a distributed origin, a broad range of model parameters across this hierarchy lead to a plausible form of perceptual bistability.
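The model ingredients named above (adaptation, inhibition, and noise) already produce bistable alternation in a two-unit caricature of competing stream interpretations. The sketch below is far smaller than the paper's three-level hierarchy, and its parameters were chosen only so that the toy switches on the order of seconds.

```python
import numpy as np

rng = np.random.default_rng(10)
dt, T = 1e-3, 600.0
steps = int(T / dt)
tau, tau_a = 0.01, 2.0             # activity / adaptation time constants (s)
beta, g_a, sigma = 2.0, 0.6, 0.12  # inhibition, adaptation gain, noise level

r, a = np.array([0.6, 0.4]), np.zeros(2)   # two competing interpretations
dominant = np.zeros(steps, dtype=int)
for i in range(steps):
    drive = 1.0 - beta * r[::-1] - g_a * a      # input - cross-inhibition - adaptation
    r += dt / tau * (-r + np.maximum(0.0, drive)) \
         + sigma * np.sqrt(dt / tau) * rng.normal(size=2)
    r = np.maximum(r, 0.0)
    a += dt / tau_a * (-a + r)                  # slow adaptation tracks activity
    dominant[i] = int(r[1] > r[0])

switches = np.flatnonzero(np.diff(dominant)) * dt
durations = np.diff(switches)
print(f"{durations.size} dominance periods, mean {durations.mean():.2f}s, "
      f"CV {durations.std() / durations.mean():.2f}")
```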
Affiliation(s)
- David F. Little: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Joel S. Snyder: Department of Psychology, University of Nevada, Las Vegas, Las Vegas, Nevada, United States of America
- Mounya Elhilali: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
14. Chang TY, Doudlah R, Kim B, Sunkara A, Thompson LW, Lowe ME, Rosenberg A. Functional links between sensory representations, choice activity, and sensorimotor associations in parietal cortex. eLife 2020; 9:e57968. PMID: 33078705. PMCID: PMC7641584. DOI: 10.7554/eLife.57968.
Abstract
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Affiliation(s)
- Ting-Yu Chang: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Raymond Doudlah: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Byounghoon Kim: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Lowell W Thompson: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Meghan E Lowe: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Ari Rosenberg: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
15. Wärnberg E, Kumar A. Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007074. PMID: 31150376. PMCID: PMC6586365. DOI: 10.1371/journal.pcbi.1007074.
Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity, and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to be constraining learning so that only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Here, we use three different models of spiking neural networks (echo-state networks, the Neural Engineering Framework and Efficient Coding) to demonstrate how the intrinsic manifold can be made a direct consequence of the circuit connectivity. Using this relationship between the circuit connectivity and the intrinsic manifold, we show that learning of patterns outside the intrinsic manifold corresponds to much larger changes in synaptic weights than learning of patterns within the intrinsic manifold. Assuming that larger changes to synaptic weights require more extensive learning, this observation provides an explanation of why learning is easier when it does not require the neural activity to leave its intrinsic manifold.
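The core construction, making the intrinsic manifold a direct consequence of low-rank connectivity, can be illustrated with a rate network (the paper does this for three spiking frameworks; a rate version is used here only for brevity). With connectivity W = QAQᵀ of rank 2, recurrent input always lies in span(Q), so activity variance concentrates on that plane. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
N, steps, dt = 200, 5000, 1e-2
# Rank-2 connectivity whose within-manifold dynamics form a slow rotation:
# activity is then confined (up to noise) to the plane spanned by Q.
Q, _ = np.linalg.qr(rng.normal(size=(N, 2)))
A = np.array([[1.2, -2.0], [2.0, 1.2]])     # unstable spiral, saturated by tanh
W = Q @ A @ Q.T

x = 0.1 * rng.normal(size=N)
X = np.zeros((steps, N))
for i in range(steps):
    x += dt * (-x + W @ np.tanh(x)) + 0.02 * np.sqrt(dt) * rng.normal(size=N)
    X[i] = x

# Fraction of activity variance captured by the 2D manifold span(Q),
# after discarding the transient.
Xc = X[1000:] - X[1000:].mean(0)
frac = np.sum((Xc @ Q) ** 2) / np.sum(Xc ** 2)
print(f"fraction of activity variance inside the 2D manifold: {frac:.2f}")
```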
Affiliation(s)
- Emil Wärnberg: Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden; Dept. of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Arvind Kumar: Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
16. Duarte R, Morrison A. Leveraging heterogeneity for neural computation with fading memory in layer 2/3 cortical microcircuits. PLoS Comput Biol 2019; 15:e1006781. PMID: 31022182. PMCID: PMC6504118. DOI: 10.1371/journal.pcbi.1006781.
Abstract
Complexity and heterogeneity are intrinsic to neurobiological systems, manifest in every process, at every scale, and are inextricably linked to the systems' emergent collective behaviours and function. However, the majority of studies addressing the dynamics and computational properties of biologically inspired cortical microcircuits tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and synaptic/connectivity parameters. While simplification and reductionism are necessary to understand the brain's functional principles, disregarding the existence of the multiple heterogeneities in the cortical composition, which may be at the core of its computational proficiency, will inevitably fail to account for important phenomena and limit the scope and generalizability of cortical models. We address these issues by studying the individual and composite functional roles of heterogeneities in neuronal, synaptic and structural properties in a biophysically plausible layer 2/3 microcircuit model, built and constrained by multiple sources of empirical data. This approach was made possible by the emergence of large-scale, well curated databases, as well as the substantial improvements in experimental methodologies achieved over the last few years. Our results show that variability in single neuron parameters is the dominant source of functional specialization, leading to highly proficient microcircuits with much higher computational power than their homogeneous counterparts. We further show that fully heterogeneous circuits, which are closest to the biophysical reality, owe their response properties to the differential contribution of different sources of heterogeneity.
Affiliation(s)
- Renato Duarte: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Abigail Morrison: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
17. Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nat Commun 2017; 8:2208. PMID: 29263361. PMCID: PMC5738356. DOI: 10.1038/s41467-017-01827-3.
Abstract
Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
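At the heart of FORCE training is a recursive-least-squares (RLS) update of a linear readout whose output is fed back into the network. The sketch below shows that core loop for a rate network; the paper's contribution is extending it to spiking neuron models, which is not reproduced here, and all parameters (network size, gain, target) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
N, dt, g, steps = 300, 1e-3, 1.5, 20000
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))   # chaotic recurrent weights
wf = rng.uniform(-1, 1, N)                      # fixed feedback weights
w = np.zeros(N)                                 # trained readout (only plastic part)
P = np.eye(N)                                   # inverse correlation estimate (RLS)

t = np.arange(steps) * dt
f = np.sin(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)   # target dynamics

x = 0.5 * rng.normal(size=N)
err_hist = []
for i in range(steps):
    r = np.tanh(x)
    z = w @ r                                   # network output
    x += dt * 10.0 * (-x + J @ r + wf * z)      # rate dynamics (tau = 100 ms)
    if i % 2 == 0 and i < 15000:                # RLS updates in training window
        Pr = P @ r
        k = Pr / (1 + r @ Pr)
        P -= np.outer(k, Pr)
        w += (f[i] - z) * k
    err_hist.append(abs(z - f[i]))

print(f"mean |error| over the final (test) 5000 steps: {np.mean(err_hist[15000:]):.3f}")
```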
Affiliation(s)
- Wilten Nicola: Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK
- Claudia Clopath: Department of Bioengineering, Imperial College London, Royal School of Mines, London, SW7 2AZ, UK
18. Denève S, Alemi A, Bourdoukan R. The Brain as an Efficient and Robust Adaptive Learner. Neuron 2017; 94:969-977. PMID: 28595053. DOI: 10.1016/j.neuron.2017.05.016.
Abstract
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined based only on quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they associate two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level.
Affiliation(s)
- Sophie Denève: Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Alireza Alemi: Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Ralph Bourdoukan: Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
19. Zylberberg J, Strowbridge BW. Mechanisms of Persistent Activity in Cortical Circuits: Possible Neural Substrates for Working Memory. Annu Rev Neurosci 2017; 40:603-627. PMID: 28772102. PMCID: PMC5995341. DOI: 10.1146/annurev-neuro-070815-014006.
Abstract
A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.
Affiliation(s)
- Joel Zylberberg: Department of Physiology and Biophysics, Center for Neuroscience, and Computational Bioscience Program, University of Colorado School of Medicine, Aurora, Colorado 80045; Department of Applied Mathematics, University of Colorado, Boulder, Colorado 80309; Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, Ontario M5G 1Z8, Canada
- Ben W Strowbridge: Department of Neurosciences, Case Western Reserve University School of Medicine, Cleveland, Ohio 44106; Department of Physiology and Biophysics, Case Western Reserve University School of Medicine, Cleveland, Ohio 44106
20. Balaguer-Ballester E. Cortical Variability and Challenges for Modeling Approaches. Front Syst Neurosci 2017; 11:15. PMID: 28420968. PMCID: PMC5378710. DOI: 10.3389/fnsys.2017.00015.
Affiliation(s)
- Emili Balaguer-Ballester: Department of Computing and Informatics, Faculty of Science and Technology, Bournemouth University, Bournemouth, UK; Bernstein Center for Computational Neuroscience, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
21. Koren V, Denève S. Computational Account of Spontaneous Activity as a Signature of Predictive Coding. PLoS Comput Biol 2017; 13:e1005355. PMID: 28114353. PMCID: PMC5293286. DOI: 10.1371/journal.pcbi.1005355.
Abstract
Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggested that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent state. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in the firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function.

Spontaneous bursts of activity, commonly observed in the brain, can be understood in terms of error-correcting computation within a neural network. Bursts arise automatically in a network that is inefficiently correcting its internal representation.
Affiliation(s)
- Veronika Koren: Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France; Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Sophie Denève: Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France
22. Komer B, Eliasmith C. A unified theoretical approach for biological cognition and learning. Curr Opin Behav Sci 2016. DOI: 10.1016/j.cobeha.2016.03.006.
23. Abbott LF, DePasquale B, Memmesheimer RM. Building functional networks of spiking model neurons. Nat Neurosci 2016; 19:350-355. PMID: 26906501. DOI: 10.1038/nn.4241.
Abstract
Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
24. Denève S, Machens CK. Efficient codes and balanced networks. Nat Neurosci 2016; 19:375-382. PMID: 26906504. DOI: 10.1038/nn.4243.
Abstract
Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.
Collapse
Affiliation(s)
- Sophie Denève
- Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France
| | | |
Collapse
|
25
|
Chalk M, Gutkin B, Denève S. Neural oscillations as a signature of efficient coding in the presence of synaptic delays. eLife 2016; 5. [PMID: 27383272 PMCID: PMC4959845 DOI: 10.7554/elife.13824] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2015] [Accepted: 07/05/2016] [Indexed: 12/03/2022] Open
Abstract
Cortical networks exhibit 'global oscillations', in which neural spike times are entrained to an underlying oscillatory rhythm, but where individual neurons fire irregularly, on only a fraction of cycles. While the network dynamics underlying global oscillations have been well characterised, their function is debated. Here, we show that such global oscillations are a direct consequence of optimal efficient coding in spiking networks with synaptic delays and noise. To avoid firing unnecessary spikes, neurons need to share information about the network state. Ideally, membrane potentials should be strongly correlated and reflect a 'prediction error' while the spikes themselves are uncorrelated and occur rarely. We show that the most efficient representation is when: (i) spike times are entrained to a global Gamma rhythm (implying a consistent representation of the error); but (ii) few neurons fire on each cycle (implying high efficiency), while (iii) excitation and inhibition are tightly balanced. This suggests that cortical networks exhibiting such dynamics are tuned to achieve a maximally efficient population code. DOI:http://dx.doi.org/10.7554/eLife.13824.001
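In the same spirit as the earlier sketch, the mechanism can be illustrated by adding a synaptic delay and voltage noise to a greedy spike-coding network: several neurons then fire before the error they all see has been corrected, producing synchronized volleys. A self-contained, purely illustrative sketch (all parameter values are assumptions, not the paper's):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
N, dt, steps = 50, 1e-4, 20000
lam, nu = 10.0, 1e-3
delay_steps, sigma = 20, 0.01            # synaptic delay (steps), voltage noise
D = rng.normal(size=(2, N)) / np.sqrt(N)
thresh = 0.5 * (np.sum(D**2, axis=0) + nu)

t_ax = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t_ax), np.cos(2 * np.pi * t_ax)])

pending = deque([np.zeros(2)] * delay_steps)   # spike effects still in flight
x_hat = np.zeros(2)
volley = np.zeros(steps)
for t in range(steps):
    x_hat = (1.0 - lam * dt) * x_hat + pending.popleft()  # delayed updates land
    V = D.T @ (x[:, t] - x_hat) + sigma * rng.normal(size=N)
    fired = V > thresh                   # all suprathreshold neurons fire
    pending.append(D[:, fired].sum(axis=1))
    volley[t] = fired.sum()              # population spike count per bin
```

Binning volley over time should show rhythmic population volleys with irregular single-neuron participation, the qualitative signature the abstract describes.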
Collapse
Affiliation(s)
- Matthew Chalk
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Boris Gutkin
- École Normale Supérieure, Paris, France.,Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
| | | |
Collapse
|
26
|
Thalmeier D, Uhlmann M, Kappen HJ, Memmesheimer RM. Learning Universal Computations with Spikes. PLoS Comput Biol 2016; 12:e1004895. [PMID: 27309381 PMCID: PMC4911146 DOI: 10.1371/journal.pcbi.1004895] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2015] [Accepted: 04/01/2016] [Indexed: 11/19/2022] Open
Abstract
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. First, we derive constraints under which classes of spiking neural networks can serve as substrates of powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
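The abstract does not spell out the learning rules here, but a widely used choice in this literature is recursive least squares (FORCE learning) applied to a linear readout of filtered spike trains. A minimal sketch of that update, under the assumption that r collects the network's filtered spike trains (names and sizes are illustrative):

```python
import numpy as np

def rls_step(P, r, z_target, w):
    """One FORCE-style recursive-least-squares update of readout weights w,
    given r, a vector of filtered spike trains (the network state), and the
    current target value z_target. P estimates the inverse correlation of r."""
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector
    e = w @ r - z_target         # readout error before the update
    w = w - e * k                # error-driven weight correction
    P = P - np.outer(k, Pr)      # rank-1 update of the inverse correlation
    return P, w

# Typical initialization: P = I / alpha with a small regularizer alpha.
n_state, alpha = 200, 1.0
P, w = np.eye(n_state) / alpha, np.zeros(n_state)
```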
Collapse
Affiliation(s)
- Dominik Thalmeier
- Donders Institute, Department of Biophysics, Radboud University, Nijmegen, Netherlands
| | - Marvin Uhlmann
- Max Planck Institute for Psycholinguistics, Department for Neurobiology of Language, Nijmegen, Netherlands
- Donders Institute, Department for Neuroinformatics, Radboud University, Nijmegen, Netherlands
| | - Hilbert J. Kappen
- Donders Institute, Department of Biophysics, Radboud University, Nijmegen, Netherlands
| | - Raoul-Martin Memmesheimer
- Donders Institute, Department for Neuroinformatics, Radboud University, Nijmegen, Netherlands
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- * E-mail:
| |
Collapse
|
27
|
Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains. PLoS Comput Biol 2016; 12:e1004979. [PMID: 27310184 PMCID: PMC4911121 DOI: 10.1371/journal.pcbi.1004979] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2015] [Accepted: 05/09/2016] [Indexed: 02/01/2023] Open
Abstract
Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Recently, experimental evidence has been presented suggesting that neural information is encoded and transferred in packets, i.e., in stereotypical, correlated spiking patterns of neural activity. Because of their relevance to coherent spiking, synfire chains are one of the main theoretical constructs invoked to describe such spiking and information-transfer phenomena. However, it has long been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude or fails to propagate, which limits the classical synfire chain’s ability to explain graded neuronal responses. Recently, we showed that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing-rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and we show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust to variability in population size, pulse timing, and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, processes the input, makes a decision based on the processed information, and shuts itself down. Cognitive tasks are associated with the dynamic excitation of neural assemblies. Given how quickly and flexibly such assemblies can be formed and incorporated into a task, a persistent question has been: how can the brain rapidly evoke and involve different neural assemblies in a computation when synaptic coupling changes only slowly? Here, we demonstrate mechanisms whereby information may be rapidly and selectively routed through a neural circuit and sub-circuits may be turned on and off. The resulting information-processing framework attains a goal long pursued but largely unrealized: faithful, flexible information transfer across many synapses, and the dynamic excitation of neural assemblies with fixed connectivities.
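The contrast the abstract draws, classical synfire propagation collapsing to a fixed amplitude (or dying) versus pulse gating preserving graded amplitudes, can be caricatured with a one-dimensional iterated map. This is our illustration, not the paper's mean-field equations; the transfer functions and constants are assumptions.

```python
import numpy as np

def ungated(a, w=2.0, theta=0.5):
    """Saturating feedforward transfer: graded information is lost."""
    return float(np.tanh(max(w * a - theta, 0.0)))

def gated(a, w=1.0):
    """Idealized pulse-gated transfer: linear, so graded amplitude survives."""
    return w * a

for a0 in (0.2, 0.4, 0.8):
    u, g = a0, a0
    for _ in range(10):                  # propagate through ten layers
        u, g = ungated(u), gated(g)
    print(f"input {a0:.1f} -> ungated {u:.3f}, gated {g:.3f}")
```

The saturating ungated map sends every sufficiently strong input to the same attractor amplitude and extinguishes weak ones, while the idealized gated transfer is linear and hence preserves the graded value across layers.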
Collapse
|
28
|
Denève S, Chalk M. Efficiency turns the table on neural encoding, decoding and noise. Curr Opin Neurobiol 2016; 37:141-148. [PMID: 27065340 DOI: 10.1016/j.conb.2016.03.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2015] [Revised: 03/04/2016] [Accepted: 03/04/2016] [Indexed: 11/18/2022]
Abstract
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation.
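The encoding-versus-decoding contrast can be stated compactly (our paraphrase in symbols, not the paper's notation):

```latex
% An encoding model fixes the map from stimulus to each neuron's response:
\[
  r_i(t) \approx f_i\bigl(s(t)\bigr) + \text{noise}.
\]
% Under efficient coding, what is fixed instead is a linear decoder,
\[
  \hat{s}(t) = \sum_i d_i\, r_i(t),
\]
% and the population jointly chooses spikes to keep the reconstruction
% error \(\lVert s(t) - \hat{s}(t) \rVert\) small. Single-neuron responses
% may then vary across trials with no loss at the population level.
```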
Collapse
Affiliation(s)
- Sophie Denève
- Institut d'études cognitives, École Normale Supérieure, Paris, France.
| | - Matthew Chalk
- Institut d'études cognitives, École Normale Supérieure, Paris, France; Vision Institute, Paris, France
| |
Collapse
|
29
|
Moreno-Bote R, Drugowitsch J. Causal Inference and Explaining Away in a Spiking Network. Sci Rep 2015; 5:17531. [PMID: 26621426 PMCID: PMC4664919 DOI: 10.1038/srep17531] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2015] [Accepted: 10/30/2015] [Indexed: 11/30/2022] Open
Abstract
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.
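The optimization the network is said to solve exactly, non-negative quadratic programming, can be checked directly. Below is a plain projected-gradient sketch of that computation; the paper's contribution is a spiking implementation with spike/reset nonlinearities, which this sketch does not attempt, and the sizes and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# min_{c >= 0} ||x - A c||^2 by projected gradient descent.
M, K = 20, 5                              # observation dim., number of causes
A = np.abs(rng.normal(size=(M, K)))       # non-negative feature dictionary
c_true = np.array([1.0, 0.0, 0.5, 0.0, 0.0])
x = A @ c_true                            # noiseless observation

c = np.zeros(K)
eta = 1.0 / np.linalg.norm(A.T @ A, 2)    # step size from the Lipschitz constant
for _ in range(500):
    grad = A.T @ (A @ c - x)              # A.T x: feedforward drive;
                                          # A.T A c: lateral "explaining away"
    c = np.maximum(c - eta * grad, 0.0)   # rectification = non-negativity
print(np.round(c, 3))                     # recovers the sparse causes
```

The A.T A coupling in the gradient acts as mutual inhibition between causes with overlapping features, which is the "explaining away" the abstract says the network implements through tuned, spike-based inhibition.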
Collapse
Affiliation(s)
- Rubén Moreno-Bote
- Department of Technologies of Information and Communication, University Pompeu Fabra, 08018 Barcelona, Spain.,Serra Húnter Fellow Programme, 08018, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), 08018, Barcelona, Spain
| | - Jan Drugowitsch
- Department of Basic Neuroscience, University of Geneva, Switzerland
| |
Collapse
|