1
Koren V, Blanco Malerba S, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. eLife 2025; 13:RP99545. PMID: 40053385; PMCID: PMC11888603; DOI: 10.7554/eLife.99545.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
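The spike rule at the heart of this kind of derivation is compact: a neuron fires whenever doing so reduces the network's instantaneous coding error, which for a linear readout reduces to a voltage crossing a threshold set by the neuron's decoding weight. A minimal one-dimensional sketch of that greedy rule follows (the network size, weights, and signal statistics are illustrative stand-ins, not the paper's E-I model):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau, T_end = 1e-4, 0.02, 2.0           # time step, membrane time constant, duration (s)
steps = int(T_end / dt)

N = 20                                     # neurons
w = rng.choice([-1.0, 1.0], N) * 0.1       # decoding weights (positively and negatively tuned)
thresh = w**2 / 2                          # threshold implied by greedy error minimization

# slowly varying target signal: Ornstein-Uhlenbeck process with time constant tau
x = np.zeros(steps)
for t in range(1, steps):
    x[t] = x[t-1] - dt * x[t-1] / tau + np.sqrt(dt) * 2.0 * rng.standard_normal()

xhat = np.zeros(steps)                     # network readout
n_spikes = 0
for t in range(1, steps):
    xhat[t] = xhat[t-1] * (1 - dt / tau)   # readout decays like the membrane
    V = w * (x[t] - xhat[t])               # "voltage": coding error projected on each weight
    i = np.argmax(V - thresh)              # allow at most one spike per time step
    if V[i] > thresh[i]:                   # spike only if it reduces the squared error
        xhat[t] += w[i]
        n_spikes += 1

err = np.sqrt(np.mean((x - xhat)**2))
print(f"RMS coding error: {err:.4f}, total spikes: {n_spikes}")
```

Firing when V_i = w_i(x - x̂) exceeds w_i²/2 is exactly the condition that a spike (which moves the readout by w_i) decreases (x - x̂)², which is why thresholds, decoder weights, and recurrent connectivity all fall out of one objective in this framework.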
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Pjanovic V, Zavatone-Veth J, Masset P, Keemink S, Nardin M. Combining sampling methods with attractor dynamics in spiking models of head-direction systems. bioRxiv [Preprint] 2025:2025.02.25.640158. PMID: 40060526; PMCID: PMC11888369; DOI: 10.1101/2025.02.25.640158.
Abstract
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions, including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples, derived from noisy inputs, with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity "bump" representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
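The core mechanism, integrating noisy velocity samples while being pulled back onto a circular manifold, can be caricatured at the rate level without any spiking machinery. A minimal sketch under that simplification (the 2D latent, the gain of the manifold pull, and the noise level are illustrative assumptions, not the paper's derived spiking dynamics):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T_end = 1e-3, 5.0
steps = int(T_end / dt)

v_true = 1.5                       # true angular velocity (rad/s)
obs_noise = 2.0                    # std of the noisy velocity observations

z = np.array([1.0, 0.0])           # 2D latent state; the unit circle is the attractor manifold
phi = np.zeros(steps)              # decoded head-direction estimate
for t in range(steps):
    v_sample = v_true + obs_noise * rng.standard_normal()   # one noisy velocity "sample"
    rot = np.array([[np.cos(v_sample * dt), -np.sin(v_sample * dt)],
                    [np.sin(v_sample * dt),  np.cos(v_sample * dt)]])
    z = rot @ z                                             # integrate the sampled velocity
    z += dt * 10.0 * (1.0 - np.linalg.norm(z)) * z          # pull back toward the circle
    phi[t] = np.arctan2(z[1], z[0])

# wrapped heading error after integrating noisy samples for T_end seconds
final_err = np.angle(np.exp(1j * (phi[-1] - v_true * T_end)))
print(f"final heading error: {final_err:.3f} rad")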
Affiliation(s)
- Vojko Pjanovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Jacob Zavatone-Veth
- Society of Fellows and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Paul Masset
- Department of Psychology, McGill University, Montréal, QC, Canada
- Sander Keemink
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Michele Nardin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
3
Koren V, Malerba SB, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. bioRxiv [Preprint] 2025:2024.04.24.590955. PMID: 38712237; PMCID: PMC11071478; DOI: 10.1101/2024.04.24.590955.
Abstract
This record is the bioRxiv preprint of entry 1; its abstract is identical to the published eLife version above.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
4
Ravichandran N, Lansner A, Herman P. Spiking representation learning for associative memories. Front Neurosci 2024; 18:1439414. PMID: 39371606; PMCID: PMC11450452; DOI: 10.3389/fnins.2024.1439414.
Abstract
Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical capability required of SNNs is to learn distributed representations from data and use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations, leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant for attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
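The associative-memory side of the model, sparse patterns stored by Hebbian plasticity and recalled by sparsely firing Poisson units, can be illustrated with a clipped-Hebbian (Willshaw-style) toy. A sketch under those simplifying assumptions (the winner-take-all recall step and all sizes are illustrative, not the paper's columnar architecture):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, P = 200, 10, 20                  # units, active units per pattern, stored patterns

# sparse binary patterns and a clipped Hebbian (Willshaw-style) weight matrix
patterns = np.zeros((P, N))
for p in range(P):
    patterns[p, rng.choice(N, K, replace=False)] = 1
W = np.clip(patterns.T @ patterns, 0, 1)
np.fill_diagonal(W, 0)

# cue = stored pattern with half of its active units deleted
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(target)[:K // 2]] = 0

# recall with Poisson spiking units: ~1 Hz baseline, ~100 Hz for the K most driven units
rate = np.zeros(N)
r = cue.copy()
for step in range(20):                           # iterate the recurrent dynamics
    drive = W @ r
    k_best = np.argsort(drive)[-K:]              # soft winner-take-all recall step
    rate = np.full(N, 1.0)
    rate[k_best] = 100.0
    r = rng.poisson(rate * 0.01).astype(float)   # spike counts in a 10 ms window

overlap = (rate > 50).astype(float) @ target / K
print(f"pattern completion overlap: {overlap:.2f}")   # should typically approach 1.0
```

At this low memory load the deleted half of the pattern is typically recovered within a few iterations, despite the Poisson spiking noise in the recurrent loop.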
Affiliation(s)
- Naresh Ravichandran
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden
- Swedish e-Science Research Centre (SeRC), Stockholm, Sweden
5
Podlaski WF, Machens CK. Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks. Neural Comput 2024; 36:803-857. PMID: 38658028; DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The resulting networks compute the difference of two convex functions and can thereby approximate arbitrary nonlinear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
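The function-approximation claim rests on a classical fact: a difference of two convex functions, each representable as a maximum over affine pieces (one piece per neuron's threshold boundary), can approximate a nonlinear mapping. A sketch of that decomposition alone, without the spiking dynamics (the target function, knot placement, and curvature constant are illustrative choices):

```python
import numpy as np

# Target: a smooth nonlinear function on [-2, 2]
x = np.linspace(-2, 2, 400)
f = np.sin(2 * x) + 0.3 * x**2

def max_affine(x, slopes, offsets):
    """Convex piecewise-linear function: max over affine pieces (one per 'boundary')."""
    return np.max(np.outer(x, slopes) + offsets, axis=1)

# Make f convex by adding c*x^2 (c chosen so f'' + 2c > 0 on the interval), then
# represent both the convexified function and c*x^2 by maxima of tangent lines.
c = 2.0
knots = np.linspace(-2, 2, 15)
idx = np.searchsorted(x, knots)
g_conv = f + c * x**2                              # convex surrogate of f
slopes_g = np.gradient(g_conv, x)[idx]
offsets_g = g_conv[idx] - slopes_g * knots
g = max_affine(x, slopes_g, offsets_g)             # first convex ("boundary") function
h = max_affine(x, 2 * c * knots, -c * knots**2)    # tangents of c*x^2: second convex function

approx = g - h                                     # difference of two convex functions
print(f"max |f - (g - h)| = {np.max(np.abs(f - approx)):.3f}")
```

Adding more knots (more neurons per boundary) shrinks the error, mirroring how the paper's excitatory and inhibitory populations each contribute one convex piecewise-linear boundary.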
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
6
Dezhina Z, Smallwood J, Xu T, Turkheimer FE, Moran RJ, Friston KJ, Leech R, Fagerholm ED. Establishing brain states in neuroimaging data. PLoS Comput Biol 2023; 19:e1011571. PMID: 37844124; PMCID: PMC10602380; DOI: 10.1371/journal.pcbi.1011571.
Abstract
The definition of a brain state remains elusive, with varying interpretations across different sub-fields of neuroscience, from the level of wakefulness in anaesthesia to the activity of individual neurons, voltage in EEG, and blood flow in fMRI. This lack of consensus presents a significant challenge to the development of accurate models of neural dynamics. However, at the foundation of dynamical systems theory lies a definition of what constitutes the 'state' of a system, i.e., a specification sufficient to determine the system's future evolution. Here, we propose to adopt this definition to establish brain states in neuroimaging timeseries by applying Dynamic Causal Modelling (DCM) to low-dimensional embeddings of resting and task condition fMRI data. We find that ~90% of subjects in resting conditions are better described by first-order models, whereas ~55% of subjects in task conditions are better described by second-order models. Our work calls into question the status quo of using first-order equations almost exclusively within computational neuroscience, and provides a new way of establishing brain states, as well as their associated phase space representations, in neuroimaging datasets.
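The first- versus second-order distinction can be made concrete with a simple autoregressive analogue: fit AR(1) and AR(2) models to a timeseries and compare a model-selection score. A sketch under that simplification (DCM scores models by variational free energy, not the AIC used here, and the damped-oscillator data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "task-like" signal: a noisy damped oscillator (intrinsically second order)
n = 1000
y = np.zeros(n)
y[0], y[1] = 1.0, 0.95
for t in range(2, n):
    y[t] = 1.8 * y[t-1] - 0.9 * y[t-2] + 0.05 * rng.standard_normal()

def fit_ar(y, order):
    """Least-squares AR(order) fit; returns AIC under a Gaussian noise model."""
    X = np.column_stack([y[order - k - 1 : len(y) - k - 1] for k in range(order)])
    target = y[order:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * len(target) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * order - 2 * loglik          # AIC: lower is better

print(f"AIC first order : {fit_ar(y, 1):.1f}")
print(f"AIC second order: {fit_ar(y, 2):.1f}")   # wins for oscillatory data
```

For genuinely oscillatory dynamics the second-order model wins decisively, while for relaxation-like data an AR(1) suffices, which is the intuition behind the paper's resting-versus-task result.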
Affiliation(s)
- Zalina Dezhina
- Department of Neuroimaging, King’s College London, United Kingdom
- Ting Xu
- Child Mind Institute, New York, United States of America
- Rosalyn J. Moran
- Department of Neuroimaging, King’s College London, United Kingdom
- Robert Leech
- Department of Neuroimaging, King’s College London, United Kingdom
7
Garnier Artiñano T, Andalibi V, Atula I, Maestri M, Vanni S. Biophysical parameters control signal transfer in spiking network. Front Comput Neurosci 2023; 17:1011814. PMID: 36761840; PMCID: PMC9905747; DOI: 10.3389/fncom.2023.1011814.
Abstract
Introduction: Information transmission and representation in both natural and artificial networks depend on connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and post-synaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on that model to explore how basic membrane properties and synaptic delays affect information transfer.
Methods: The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive exponential integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three parameter regimes comprising either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime with output units optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error.
Results: Biophysical parameters had a major impact on information transfer metrics. Classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall, and particularly low reconstruction error, depended more on higher firing rates in LIF units. In AdEx units, firing rates were lower and less information was transferred, but interestingly the highest information transmission rates no longer coincided with the highest firing rates.
Discussion: Our findings can be viewed in light of the predictive coding theory of the cerebral cortex and may suggest information transfer quality as a phenomenological property of biological cells.
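Of the four metrics, coherence is the simplest to reproduce end to end. A sketch on synthetic signals (the 8 Hz drive, 5 ms delay, and noise levels are illustrative; the study computed this between the network's input and output units):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs, T = 1000.0, 10.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

# "input" signal and a noisy, delayed "output", standing in for the network's units
x = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)
delay = int(0.005 * fs)                   # 5 ms synaptic-like delay
y = np.roll(x, delay) + 1.0 * rng.standard_normal(t.size)

# magnitude-squared coherence: 1 = perfect linear transfer at that frequency
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
band = (f >= 6) & (f <= 10)
print(f"coherence near 8 Hz: {Cxy[band].max():.2f}")
```

Raising the output noise lowers the coherence peak, which is the basic sense in which such metrics quantify how biophysical parameters degrade signal transfer.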
Affiliation(s)
- Tomás Garnier Artiñano
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Vafa Andalibi
- Department of Computer Science, Indiana University Bloomington, Bloomington, IN, United States
- Iiris Atula
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Matteo Maestri
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Simo Vanni (correspondence)
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
8
Mikulasch FA, Rudelt L, Wibral M, Priesemann V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci 2023; 46:45-59. PMID: 36577388; DOI: 10.1016/j.tins.2022.09.007. Cited (RCA): 41.
Abstract
Top-down feedback in cortex is critical for guiding sensory processing, which has been prominently formalized in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive, and it remains unclear how hPC could be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, in which prediction errors are computed not in separate units but locally in dendritic compartments. We then discuss the correspondence of this model to experimentally observed connectivity patterns, plasticity, and dynamics in cortex.
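The central move, computing the prediction error as a local dendritic potential rather than in a dedicated error unit, can be written in two lines: the dendrite sums feedforward drive and subtracts the prediction carried by lateral inputs. A schematic toy (random weights and rates; the article ties these quantities to specific compartments and plasticity rules):

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid = 10, 5
x = rng.random(n_in)                                # sensory input
r = rng.random(n_hid)                               # firing rates of the local population

W_ff = rng.standard_normal((n_hid, n_in)) * 0.3     # feedforward weights onto the dendrite
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.3   # lateral/recurrent predictive weights
np.fill_diagonal(W_rec, 0)

# dendritic potential = feedforward drive minus the lateral prediction;
# when the prediction is accurate this local "error" is near zero
v_dend = W_ff @ x - W_rec @ r
print("dendritic prediction errors:", np.round(v_dend, 2))

# plasticity can then be gated by this local error, e.g. Delta W_ff ~ v_dend x^T
dW_ff = 0.01 * np.outer(v_dend, x)
print(f"plasticity update norm: {np.linalg.norm(dW_ff):.3f}")
```

The appeal of this arrangement is that the error signal is available exactly where the synapses that need it live, with no separate error-unit population required.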
Affiliation(s)
- Fabian A Mikulasch
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral
- Göttingen Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
- Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany
- Department of Physics, Georg-August University, Göttingen, Germany
9
Timcheck J, Kadmon J, Boahen K, Ganguli S. Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS Comput Biol 2022; 18:e1010593. PMID: 36251693; PMCID: PMC9576105; DOI: 10.1371/journal.pcbi.1010593. Cited (RCA): 6.
Abstract
Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly coupled neurons to achieve errors that scale as 1/√N. More interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive-coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the build-up of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
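The delay-induced pathology and the corrective role of noise can be reproduced qualitatively in a stylized homogeneous network: identical neurons encode a constant signal, and each neuron sees the others' spikes only after a transmission delay. A simulation sketch (all parameters illustrative; the paper instead derives the error and the optimal noise level analytically):

```python
import numpy as np

rng = np.random.default_rng(6)

def run_net(noise_std, N=50, w=0.1, tau=0.02, dt=1e-4, delay_steps=10, T=1.0):
    """Spike-coding network tracking x = 1 with delayed recurrent inhibition."""
    steps = int(T / dt)
    x = 1.0
    thresh = w * w / 2
    xhat = 1.0          # readout from all spikes (starts at the coding fixed point)
    xhat_del = 1.0      # the estimate each neuron actually sees (delayed spikes)
    buf = np.zeros((delay_steps, N))    # ring buffer of spikes in transit
    own = np.zeros(N)                   # each neuron's own spikes still in flight
    sq_err = 0.0
    for t in range(steps):
        arriving = buf[t % delay_steps].copy()       # spikes emitted delay_steps ago
        xhat_del += w * arriving.sum() - (dt / tau) * xhat_del
        own -= arriving
        # voltage = projected coding error; a neuron knows the delayed network
        # estimate plus the effect of its own not-yet-arrived spikes
        V = w * (x - xhat_del) - (w * w) * own + noise_std * rng.standard_normal(N)
        spk = (V > thresh).astype(float)
        buf[t % delay_steps] = spk
        own += spk
        xhat += w * spk.sum() - (dt / tau) * xhat
        sq_err += (x - xhat) ** 2
    return np.sqrt(sq_err / steps)

for sigma in [0.0, 0.002, 0.01, 0.05]:
    print(f"noise std {sigma:5.3f} -> RMS coding error {run_net(sigma):.3f}")
```

With zero noise all neurons cross threshold together and the readout oscillates in large synchronized volleys; moderate noise staggers the spikes and typically yields the lowest error, while large noise degrades coding again, giving the non-monotonic error curve the paper characterizes.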
Affiliation(s)
- Jonathan Timcheck
- Department of Physics, Stanford University, Stanford, California, United States of America
- Jonathan Kadmon
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
- Kwabena Boahen
- Department of Bioengineering, Stanford University, Stanford, California, United States of America
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
10
Calaim N, Dehmelt FA, Gonçalves PJ, Machens CK. The geometry of robustness in spiking neural networks. eLife 2022; 11:e73276. PMID: 35635432; PMCID: PMC9307274; DOI: 10.7554/eLife.73276. Cited (RCA): 9.
Abstract
Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a 'bounding box'. Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks — low-dimensional representations, heterogeneity of tuning, and precise negative feedback — may be key to understanding the robustness of neural systems at the circuit level.
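The bounding box itself is just an intersection of half-spaces, one per neuron: the coding error stays inside as long as no neuron's threshold is crossed, and perturbations act by removing or shifting faces. A geometric sketch (random decoding vectors and thresholds, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
K, N = 2, 12                          # latent dimensions, neurons

D = rng.standard_normal((K, N))       # decoding vectors (one column per neuron)
D /= np.linalg.norm(D, axis=0)        # unit-norm tuning
T = 0.5 * np.ones(N)                  # firing thresholds

def inside_box(e):
    """Error vector e is inside the bounding box iff no neuron's threshold is crossed."""
    return bool(np.all(D.T @ e <= T))

# the box is the intersection of half-spaces {e : D_i . e <= T_i}
print(inside_box(np.array([0.1, 0.2])))   # small error: likely no spike needed
print(inside_box(np.array([2.0, 0.0])))   # large error: some neuron likely must spike

# removing neurons removes faces; the box stays bounded along a direction only if
# some remaining face normal opposes motion in that direction
keep = np.arange(N - 6)                   # delete half the population
direction = np.array([1.0, 0.0])
bounded = bool(np.any(D[:, keep].T @ direction > 0))
print("box still bounded along +x:", bounded)
```

This makes the paper's robustness results easy to state: perturbations that merely deform the box leave function intact, whereas perturbations that open the box along some direction let the error escape.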
Affiliation(s)
- Pedro J Gonçalves
- Department of Electrical and Computer Engineering, University of Tübingen, Tübingen, Germany
11
Masset P, Zavatone-Veth JA, Connor JP, Murthy VN, Pehlevan C. Natural gradient enables fast sampling in spiking neural networks. Adv Neural Inf Process Syst 2022; 35:22018-22034. PMID: 37476623; PMCID: PMC10358281.
Abstract
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers, efficient balanced spiking networks that simulate Langevin sampling and networks with probabilistic spike rules that implement Metropolis-Hastings sampling, can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
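The benefit of the natural-gradient geometry can be seen in a plain (non-spiking) Langevin sampler: preconditioning by the target covariance equalizes relaxation rates across directions, permitting far larger stable steps and faster mixing. A sketch under those assumptions (2D Gaussian target; step sizes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)

# target: a strongly correlated 2D Gaussian, p(x) = N(0, Sigma)
Sigma = np.array([[1.0, 0.95], [0.95, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def langevin(precond, eta, n_steps=5000):
    """Euler-Maruyama Langevin sampler with a constant preconditioner (metric)."""
    L = np.linalg.cholesky(precond)
    x = np.zeros(2)
    out = np.zeros((n_steps, 2))
    for t in range(n_steps):
        grad = Sigma_inv @ x               # gradient of -log p(x)
        x = x - eta * (precond @ grad) + np.sqrt(2 * eta) * (L @ rng.standard_normal(2))
        out[t] = x
    return out

# isotropic geometry: the step size is limited by the stiff (low-variance) direction
iso = langevin(np.eye(2), eta=0.02)
# natural-gradient geometry (precondition by Sigma): all directions relax equally fast
nat = langevin(Sigma, eta=0.2)

def lag_ac(s, lag=20):
    d = s @ np.array([1.0, 1.0])           # slow, high-variance direction
    d = d - d.mean()
    return float(d[:-lag] @ d[lag:] / (d @ d))

print(f"lag-20 autocorrelation  isotropic: {lag_ac(iso):.2f}  natural: {lag_ac(nat):.2f}")
```

The preconditioned chain decorrelates far faster along the slow direction, which is the rate-level analogue of the speedup the paper obtains by matching the spiking population geometry to the natural parameter space.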
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Physics, Harvard University, Cambridge, MA 02138
- J Patrick Connor
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
12
Local dendritic balance enables learning of efficient representations in networks of spiking neurons. Proc Natl Acad Sci U S A 2021; 118:e2021925118. PMID: 34876505; PMCID: PMC8685685; DOI: 10.1073/pnas.2021925118. Cited (RCA): 11.
Abstract
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. In this scheme, recurrent connections learn to locally balance feedforward input in individual dendritic compartments, and can thereby modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
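The balancing rule itself is compact: grow a recurrent weight in proportion to the product of the presynaptic rate and the residual (unbalanced) dendritic potential, an LMS-style update that drives the dendritic potential toward zero. A linear-rate sketch (the paper's version operates on spiking neurons and specific dendritic compartments; the low-dimensional latent here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(9)
n_in, n_neur, n_lat = 20, 10, 3

A = rng.standard_normal((n_in, n_lat))        # inputs x = A z live on a 3D manifold
B = rng.standard_normal((n_neur, n_lat))      # population rates r = B z encode the latent
W_ff = rng.standard_normal((n_neur, n_in)) * 0.3
W_rec = np.zeros((n_neur, n_neur))            # recurrent/inhibitory weights to be learned
eta = 0.01

for step in range(5001):
    z = rng.standard_normal(n_lat)
    x, r = A @ z, B @ z
    v_dend = W_ff @ x - W_rec @ r             # dendritic potential = unbalanced residual
    W_rec += eta * np.outer(v_dend, r)        # voltage-dependent rule: grow recurrent
                                              # weights where the dendrite is unbalanced
    if step % 1000 == 0:
        print(f"step {step:4d}  mean |v_dend| = {np.abs(v_dend).mean():.3f}")
```

Because the inputs are low-dimensional, the recurrent weights can cancel the feedforward drive exactly, and the printed residual shrinks toward zero: the dendritic balance that, in the full model, gates plasticity toward efficient representations.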