1
Koren V, Blanco Malerba S, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. eLife 2025; 13:RP99545. PMID: 40053385; PMCID: PMC11888603; DOI: 10.7554/elife.99545.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
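The greedy spike rule at the heart of such derivations can be illustrated in a few lines: a unit fires only when doing so reduces the instantaneous readout error. Below is a minimal one-neuron sketch of this idea, not the paper's full excitatory-inhibitory derivation; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def efficient_coding_1d(signal, dt=1e-3, tau=0.02, w=0.1):
    """Greedy spike rule: fire only when a spike shrinks the squared readout error."""
    x_hat = 0.0                  # leaky readout reconstructed from spikes
    spikes, readout = [], []
    for x in signal:
        x_hat += dt * (-x_hat / tau)            # readout decays with time constant tau
        # a spike adds w to the readout; emit it only if that reduces (x - x_hat)^2
        if (x - x_hat) ** 2 > (x - x_hat - w) ** 2:
            x_hat += w
            spikes.append(1)
        else:
            spikes.append(0)
        readout.append(x_hat)
    return np.array(spikes), np.array(readout)

t = np.arange(0.0, 1.0, 1e-3)
target = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))  # slowly varying stimulus feature
spk, xh = efficient_coding_1d(target)
```

With these settings the leaky readout tracks the stimulus with an error bounded by roughly half the decoding weight `w`, which is the sense in which spiking is "efficient": no spike is emitted unless it improves the code.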
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Vinck M, Uran C, Dowdall JR, Rummell B, Canales-Johnson A. Large-scale interactions in predictive processing: oscillatory versus transient dynamics. Trends Cogn Sci 2025; 29:133-148. PMID: 39424521; PMCID: PMC7616854; DOI: 10.1016/j.tics.2024.09.013.
Abstract
How do the two main types of neural dynamics, aperiodic transients and oscillations, contribute to the interactions between feedforward (FF) and feedback (FB) pathways in sensory inference and predictive processing? We discuss three theoretical perspectives. First, we critically evaluate the theory that gamma and alpha/beta rhythms play a role in classic hierarchical predictive coding (HPC) by mediating FF and FB communication, respectively. Second, we outline an alternative functional model in which rapid sensory inference is mediated by aperiodic transients, whereas oscillations contribute to the stabilization of neural representations over time and plasticity processes. Third, we propose that the strong dependence of oscillations on predictability can be explained based on a biologically plausible alternative to classic HPC, namely dendritic HPC.
Affiliation(s)
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University, 6525 Nijmegen, The Netherlands
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University, 6525 Nijmegen, The Netherlands
- Jarrod R Dowdall
- Robarts Research Institute, Western University, London, ON, Canada
- Brian Rummell
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany
- Andres Canales-Johnson
- Facultad de Ciencias de la Salud, Universidad Catolica del Maule, 3480122 Talca, Chile; Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
3
Koren V, Blanco Malerba S, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. bioRxiv [Preprint] 2025:2024.04.24.590955. PMID: 38712237; PMCID: PMC11071478; DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
4
Safavi S, Chalk M, Logothetis NK, Levina A. Signatures of criticality in efficient coding networks. Proc Natl Acad Sci U S A 2024; 121:e2302730121. PMID: 39352933; PMCID: PMC11474077; DOI: 10.1073/pnas.2302730121.
Abstract
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory processing (e.g., sensitivity to input) can be optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient coding. We consider a spike-coding network of leaky integrate-and-fire neurons with synaptic transmission delays. Previously, it was shown that the performance of such networks varies nonmonotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit some signatures of criticality, namely, scale-free spiking dynamics and the presence of a crackling-noise relation. Our work suggests that two influential and previously disparate theories of neural processing optimization (efficient coding and criticality) may be intimately related.
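The "scale-free spiking dynamics" referred to here can be illustrated with the textbook toy model of criticality: a branching process whose branching ratio m sets the distance from the critical point m = 1, where avalanche sizes become heavy-tailed. This is a generic sketch of scale-free avalanches, not the spike-coding network analyzed in the paper; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(branching_ratio, rng, cap=10**6):
    """Total size of one avalanche in a Poisson branching process."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        # each active unit triggers Poisson(branching_ratio) successors
        active = rng.poisson(branching_ratio * active)
    return size

# at m = 1 (critical), sizes follow an approximate power law P(S) ~ S^(-3/2)
sizes = np.array([avalanche_size(1.0, rng) for _ in range(5000)])
```

Subcritical networks (m < 1) produce only small avalanches, while at m = 1 a few avalanches grow orders of magnitude larger than the median, the hallmark of scale-free dynamics.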
Affiliation(s)
- Shervin Safavi
- Computational Neuroscience, Department of Child and Adolescent Psychiatry, Faculty of Medicine, Technische Universität Dresden, 01307 Dresden, Germany
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Matthew Chalk
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, 75014 Paris, France
- Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- International Center for Primate Brain Research, Shanghai 201602, China
- Anna Levina
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
- Department of Computer Science, University of Tübingen, 72076 Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, 72076 Tübingen, Germany
5
Qi Y. Moment neural network and an efficient numerical method for modeling irregular spiking activity. Phys Rev E 2024; 110:024310. PMID: 39295055; DOI: 10.1103/physreve.110.024310.
Abstract
Continuous rate-based neural networks have been widely applied to modeling the dynamics of cortical circuits. However, cortical neurons in the brain exhibit irregular spiking activity with complex correlation structures that cannot be captured by mean firing rate alone. To close this gap, we consider a framework for modeling irregular spiking activity, called the moment neural network, which naturally generalizes rate models to second-order moments and can accurately capture the firing statistics of spiking neural networks. We propose an efficient numerical method that allows for rapid evaluation of moment mappings for neuronal activations without solving the underlying Fokker-Planck equation. This allows simulation of coupled interactions of mean firing rate and firing variability of large-scale neural circuits while retaining the advantage of analytical tractability of continuous rate models. We demonstrate how the moment neural network can explain a range of phenomena, including diverse Fano factors in networks with quenched disorder and the emergence of irregular oscillatory dynamics in excitation-inhibition networks with delay.
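The "moment mapping" described here, from input statistics to output firing statistics, can always be estimated by brute-force Monte Carlo for a single leaky integrate-and-fire neuron; the paper's contribution is to evaluate such mappings efficiently without this kind of simulation or a Fokker-Planck solve. This sketch shows the slow reference computation the method replaces; all parameter values are illustrative assumptions.

```python
import numpy as np

def lif_output_stats(mu, sigma, dt=1e-4, T=50.0, tau=0.02,
                     v_th=1.0, v_r=0.0, seed=0):
    """Monte Carlo estimate of the moment mapping: input (mean mu, noise sigma)
    -> output (firing rate, Fano factor of spike counts in 1-s windows)
    for a leaky integrate-and-fire neuron driven by Gaussian white noise."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    v, spike_times = 0.0, []
    for i in range(n_steps):
        v += dt * (mu - v / tau) + noise[i]     # Euler-Maruyama step
        if v >= v_th:                           # threshold crossing: spike, reset
            v = v_r
            spike_times.append(i * dt)
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, T + 1.0, 1.0))
    rate = len(spike_times) / T
    fano = counts.var() / counts.mean() if counts.mean() > 0 else np.nan
    return rate, fano

rate, fano = lif_output_stats(mu=60.0, sigma=1.0)
```

With a suprathreshold drift (mu * tau > v_th) the neuron fires in a near-regular, mean-driven regime, so the estimated Fano factor comes out well below the Poisson value of 1; sweeping (mu, sigma) over a grid traces out the full moment mapping that the moment neural network evaluates analytically.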
Affiliation(s)
- Yang Qi
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 200433, China; and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
6
Podlaski WF, Machens CK. Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks. Neural Comput 2024; 36:803-857. PMID: 38658028; DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
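The key computational claim, that combining the two boundaries yields a difference of two convex functions, can be checked numerically: each boundary is a maximum over neuron-wise affine thresholds, i.e. a convex piecewise-linear function, and subtracting two such envelopes approximates a nonlinear target. This sketch (the particular convex split and all names are illustrative assumptions, not the paper's construction) approximates sin(x):

```python
import numpy as np

def max_affine(x, knots, f, df):
    """Convex lower envelope built from tangent lines at the knots,
    mimicking a population of per-neuron affine threshold boundaries."""
    planes = np.stack([f(k) + df(k) * (x - k) for k in knots])
    return planes.max(axis=0)

x = np.linspace(-3, 3, 601)
knots = np.linspace(-3, 3, 25)

# DC split: sin(x) = (sin(x) + x^2/2) - (x^2/2); both parts are convex,
# since d^2/dx^2 [sin(x) + x^2/2] = 1 - sin(x) >= 0 everywhere.
g = max_affine(x, knots, lambda t: np.sin(t) + 0.5 * t * t,
               lambda t: np.cos(t) + t)
h = max_affine(x, knots, lambda t: 0.5 * t * t, lambda t: t)

err = np.max(np.abs((g - h) - np.sin(x)))   # worst-case approximation error
```

Refining the knot grid (more "neurons" per boundary) shrinks the error quadratically in the knot spacing, which is the piecewise-linear analogue of the universal-approximation property claimed in the abstract.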
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
7
Boudkkazi S, Debanne D. Enhanced release probability without changes in synaptic delay during analogue-digital facilitation. Cells 2024; 13:573. PMID: 38607012; PMCID: PMC11011503; DOI: 10.3390/cells13070573.
Abstract
Neuronal timing with millisecond precision is critical for many brain functions such as sensory perception, learning and memory formation. At the level of the chemical synapse, the synaptic delay is determined by the presynaptic release probability (Pr) and the waveform of the presynaptic action potential (AP). For instance, paired-pulse facilitation or presynaptic long-term potentiation are associated with reductions in the synaptic delay, whereas paired-pulse depression or presynaptic long-term depression are associated with an increased synaptic delay. In parallel, the AP broadening that results from the inactivation of voltage-gated potassium (Kv) channels responsible for the repolarization phase of the AP delays the synaptic response, and the voltage-dependent inactivation of sodium (Nav) channels reduces the synaptic latency. However, whether synaptic delay is modulated during depolarization-induced analogue-digital facilitation (d-ADF), a form of context-dependent synaptic facilitation induced by prolonged depolarization of the presynaptic neuron and mediated by the voltage-inactivation of presynaptic Kv1 channels, remains unclear. We show here that despite Pr being elevated during d-ADF at pyramidal L5-L5 cell synapses, the synaptic delay is surprisingly unchanged. This finding suggests that Pr- and AP-dependent changes in synaptic delay compensate for each other during d-ADF. We conclude that, in contrast to other short- or long-term modulations of presynaptic release, synaptic timing is not affected during d-ADF because of the opposite interaction of Pr- and AP-dependent modulations of synaptic delay.
Affiliation(s)
- Sami Boudkkazi
- Physiology Institute, University of Freiburg, 79104 Freiburg, Germany
- Unité de Neurobiologie des Canaux Ioniques et de la Synapse (UNIS), Institut National de la Santé et de la Recherche Médicale (INSERM), Aix-Marseille University, 13015 Marseille, France
- Dominique Debanne
- Unité de Neurobiologie des Canaux Ioniques et de la Synapse (UNIS), Institut National de la Santé et de la Recherche Médicale (INSERM), Aix-Marseille University, 13015 Marseille, France
8
Champion KP, Gozel O, Lankow BS, Ermentrout GB, Goldman MS. An oscillatory mechanism for multi-level storage in short-term memory. Commun Biol 2023; 6:829. PMID: 37563448; PMCID: PMC10415352; DOI: 10.1038/s42003-023-05200-7.
Abstract
Oscillatory activity is commonly observed during the maintenance of information in short-term memory, but its role remains unclear. Non-oscillatory models of short-term memory storage are able to encode stimulus identity through their spatial patterns of activity, but are typically limited to either an all-or-none representation of stimulus amplitude or exhibit a biologically implausible exact-tuning condition. Here we demonstrate a simple mechanism by which oscillatory input enables a circuit to generate persistent or sequential activity that encodes information not only in the spatial pattern of activity, but also in the amplitude of activity. This is accomplished through a phase-locking phenomenon that permits many different amplitudes of persistent activity to be stored without requiring exact tuning of model parameters. Altogether, this work proposes a class of models for the storage of information in working memory, a potential role for brain oscillations, and a dynamical mechanism for maintaining multi-stable neural representations.
Affiliation(s)
- Kathleen P Champion
- Department of Applied Mathematics, University of Washington, Seattle, WA, 98195, USA
- Olivia Gozel
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, 60637, USA
- Benjamin S Lankow
- Center for Neuroscience, University of California, Davis, Davis, CA, 95618, USA
- G Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Mark S Goldman
- Center for Neuroscience, University of California, Davis, Davis, CA, 95618, USA
- Department of Neurobiology, Physiology, and Behavior, and Department of Ophthalmology and Vision Science, University of California, Davis, Davis, CA, 95618, USA
9
Bergoin R, Torcini A, Deco G, Quoy M, Zamora-López G. Inhibitory neurons control the consolidation of neural assemblies via adaptation to selective stimuli. Sci Rep 2023; 13:6949. PMID: 37117236; PMCID: PMC10147639; DOI: 10.1038/s41598-023-34165-0.
Abstract
Brain circuits display modular architecture at different scales of organization. Such neural assemblies are typically associated with functional specialization, but the mechanisms leading to their emergence and consolidation still remain elusive. In this paper we investigate the role of inhibition in structuring new neural assemblies driven by the entrainment to various inputs. In particular, we focus on the role of partially synchronized dynamics for the creation and maintenance of structural modules in neural circuits by considering a network of excitatory and inhibitory θ-neurons with plastic Hebbian synapses. The learning process consists of an entrainment to temporally alternating stimuli that are applied to separate regions of the network. This entrainment leads to the emergence of modular structures. Contrary to common practice in artificial neural networks, where the acquired weights are typically frozen after the learning session, we allow for synaptic adaptation even after the learning phase. We find that the presence of inhibitory neurons in the network is crucial for the emergence and the post-learning consolidation of the modular structures. Indeed, networks made of purely excitatory neurons or of neurons not respecting Dale's principle are unable to form or to maintain the modular architecture induced by the stimuli. We also demonstrate that the number of inhibitory neurons in the network is directly related to the maximal number of neural assemblies that can be consolidated, supporting the idea that inhibition has a direct impact on the memory capacity of the neural network.
Affiliation(s)
- Raphaël Bergoin
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, 6 Av. du Ponceau, 95000, Cergy-Pontoise, France
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain
- Alessandro Torcini
- Laboratoire de Physique Théorique et Modélisation, UMR 8089, CY Cergy Paris Université, CNRS, 2 Av. Adolphe Chauvin, 95032, Cergy-Pontoise, France
- Gustavo Deco
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain
- Instituciò Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluis Companys 23, 08010, Barcelona, Spain
- Mathias Quoy
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, 6 Av. du Ponceau, 95000, Cergy-Pontoise, France
- IPAL, CNRS, 1 Fusionopolis Way #21-01 Connexis (South Tower), Singapore, 138632, Singapore
- Gorka Zamora-López
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain
10
Vinck M, Uran C, Spyropoulos G, Onorato I, Broggini AC, Schneider M, Canales-Johnson A. Principles of large-scale neural interactions. Neuron 2023; 111:987-1002. PMID: 37023720; DOI: 10.1016/j.neuron.2023.03.015.
Abstract
What mechanisms underlie flexible inter-areal communication in the cortex? We consider four mechanisms for temporal coordination and their contributions to communication: (1) oscillatory synchronization (communication-through-coherence); (2) communication-through-resonance; (3) non-linear integration; and (4) linear signal transmission (coherence-through-communication). We discuss major challenges for communication-through-coherence based on layer- and cell-type-specific analyses of spike phase-locking, heterogeneity of dynamics across networks and states, and computational models for selective communication. We argue that resonance and non-linear integration are viable alternative mechanisms that facilitate computation and selective communication in recurrent networks. Finally, we consider communication in relation to cortical hierarchy and critically examine the hypothesis that feedforward and feedback communication use fast (gamma) and slow (alpha/beta) frequencies, respectively. Instead, we propose that feedforward propagation of prediction errors relies on the non-linear amplification of aperiodic transients, whereas gamma and beta rhythms represent rhythmic equilibrium states that facilitate sustained and efficient information encoding and amplification of short-range feedback via resonance.
Affiliation(s)
- Martin Vinck
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
- Cem Uran
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
- Georgios Spyropoulos
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
- Irene Onorato
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
- Ana Clara Broggini
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
- Marius Schneider
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
- Andres Canales-Johnson
- Department of Psychology, University of Cambridge, CB2 3EB Cambridge, UK; Centro de Investigacion en Neuropsicologia y Neurociencias Cognitivas, Facultad de Ciencias de la Salud, Universidad Catolica del Maule, 3480122 Talca, Chile
11
Safavi S, Panagiotaropoulos TI, Kapoor V, Ramirez-Villegas JF, Logothetis NK, Besserve M. Uncovering the organization of neural circuits with Generalized Phase Locking Analysis. PLoS Comput Biol 2023; 19:e1010983. PMID: 37011110; PMCID: PMC10109521; DOI: 10.1371/journal.pcbi.1010983.
Abstract
Despite the considerable progress of in vivo neural recording techniques, inferring the biophysical mechanisms underlying large-scale coordination of brain activity from neural data remains challenging. One obstacle is the difficulty of linking high-dimensional functional connectivity measures to mechanistic models of network activity. We address this issue by investigating spike-field coupling (SFC) measurements, which quantify the synchronization between the action potentials produced by neurons and mesoscopic "field" signals reflecting subthreshold activities at possibly multiple recording sites. As the number of recording sites grows, the number of pairwise SFC measurements becomes overwhelmingly challenging to interpret. We develop Generalized Phase Locking Analysis (GPLA) as an interpretable dimensionality reduction of this multivariate SFC. GPLA describes the dominant coupling between field activity and neural ensembles across space and frequencies. We show that GPLA features are biophysically interpretable when used in conjunction with appropriate network models, such that we can identify the influence of underlying circuit properties on these features. We demonstrate the statistical benefits and interpretability of this approach in various computational models and Utah array recordings. The results suggest that GPLA, used jointly with biophysical modeling, can help uncover the contribution of recurrent microcircuits to the spatio-temporal dynamics observed in multi-channel experimental recordings.
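The core idea of such a reduction, compressing a units-by-channels matrix of complex spike-field coupling coefficients down to its dominant mode, can be sketched with a singular value decomposition on synthetic data. The paper should be consulted for the actual GPLA estimator; the dimensions, the rank-1 ground truth, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_channels, n_trials = 8, 16, 300

# Synthetic ground truth: each unit locks to the field with a unit-specific
# phase and each channel carries a channel-specific phase shift, so the true
# coupling matrix is rank 1; finite trials add complex estimation noise.
unit_phase = rng.uniform(0, 2 * np.pi, n_units)
chan_phase = rng.uniform(0, 2 * np.pi, n_channels)
noise = (rng.standard_normal((n_units, n_channels))
         + 1j * rng.standard_normal((n_units, n_channels))) / np.sqrt(2 * n_trials)
C = 0.5 * np.exp(1j * (unit_phase[:, None] - chan_phase[None, :])) + noise

# GPLA-style reduction: the leading singular triplet summarizes the dominant
# joint pattern (left vector ~ unit ensemble, right vector ~ field map).
U, s, Vh = np.linalg.svd(C)
coupling_strength = s[0] / s.sum()   # dominance of the first mode
```

When the data are well described by one coherent spike-field pattern, the first singular value dominates; the phases of `U[:, 0]` then recover the unit-specific locking phases up to a global offset.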
Affiliation(s)
- Shervin Safavi
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Theofanis I. Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Juan F. Ramirez-Villegas
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
- Nikos K. Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Centre for Imaging Sciences, Biomedical Imaging Institute, The University of Manchester, Manchester, United Kingdom
- Michel Besserve
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems and MPI-ETH Center for Learning Systems, Tübingen, Germany
12
Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023; 21:910-922. PMID: 36698970; PMCID: PMC9851868; DOI: 10.1016/j.csbj.2023.01.009.
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
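As a concrete example of the information-theoretic toolkit this review surveys, here is a minimal plug-in estimate of the mutual information between a discrete stimulus and a discrete response (a generic sketch, not code from the review; the `mutual_information_bits` helper is a hypothetical name):

```python
from collections import Counter
from math import log2

def mutual_information_bits(pairs):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(pairs)
    joint = Counter(pairs)                      # empirical joint distribution
    px = Counter(x for x, _ in pairs)           # empirical marginals
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# A response identical to a binary stimulus carries exactly 1 bit...
stim = [0, 1] * 50
print(mutual_information_bits(list(zip(stim, stim))))   # 1.0
# ...while a response independent of the stimulus carries none.
x = [0, 0, 1, 1] * 25
y = [0, 1, 0, 1] * 25
print(mutual_information_bits(list(zip(x, y))))          # 0.0
```

Note that this naive plug-in estimator is biased upward for small samples, which is one of the practical issues such methods must address.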
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy
13
Mikulasch FA, Rudelt L, Wibral M, Priesemann V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci 2023; 46:45-59. PMID: 36577388; DOI: 10.1016/j.tins.2022.09.007.
Abstract
Top-down feedback in cortex is critical for guiding sensory processing, which has prominently been formalized in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive and it remains unclear how hPC can be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, where prediction errors are computed not in separate units, but locally in dendritic compartments. We then discuss the correspondence of this model to experimentally observed connectivity patterns, plasticity, and dynamics in cortex.
Affiliation(s)
- Fabian A Mikulasch
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.
- Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral
- Göttingen Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
- Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany; Department of Physics, Georg-August University, Göttingen, Germany
14
Timcheck J, Kadmon J, Boahen K, Ganguli S. Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS Comput Biol 2022; 18:e1010593. PMID: 36251693; PMCID: PMC9576105; DOI: 10.1371/journal.pcbi.1010593.
Abstract
Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly coupled neurons to achieve errors that scale as 1/√N. More interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast, tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise level in predictive-coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that both grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the build-up of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
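The classic averaging baseline contrasted here, errors shrinking only as 1/√N, can be checked in a few lines (a generic Monte Carlo illustration, not the paper's LIF model; all parameter values are arbitrary):

```python
import random

def readout_error(n_neurons, n_trials=2000, noise=1.0, signal=1.0, seed=0):
    """RMS error of estimating a constant signal by averaging the independent
    noisy responses of a population of n_neurons neurons."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_trials):
        est = sum(signal + rng.gauss(0.0, noise) for _ in range(n_neurons)) / n_neurons
        sq += (est - signal) ** 2
    return (sq / n_trials) ** 0.5

e10, e1000 = readout_error(10), readout_error(1000)
print(e10, e1000, e10 / e1000)   # errors near noise/sqrt(N); ratio near 10
```

A 100-fold larger population buys only a 10-fold error reduction, which is what makes the 1/N scaling of tightly balanced networks "superclassical".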
Affiliation(s)
- Jonathan Timcheck
- Department of Physics, Stanford University, Stanford, California, United States of America
- Jonathan Kadmon
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
- Kwabena Boahen
- Department of Bioengineering, Stanford University, Stanford, California, United States of America
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
15
Calaim N, Dehmelt FA, Gonçalves PJ, Machens CK. The geometry of robustness in spiking neural networks. eLife 2022; 11:73276. PMID: 35635432; PMCID: PMC9307274; DOI: 10.7554/elife.73276.
Abstract
Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a 'bounding box'. Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks — low-dimensional representations, heterogeneity of tuning, and precise negative feedback — may be key to understanding the robustness of neural systems at the circuit level.
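The "bounding box" picture can be illustrated with a minimal one-dimensional spike-coding network (a toy sketch in the spirit of this line of work, not the paper's model; the ±0.1 decoding weights, the greedy one-spike-per-step rule, and all parameters are illustrative assumptions):

```python
import math

def run_spike_coding_net(n_neurons=20, steps=1000, dt=1e-3, leak=10.0):
    """Greedy spike coding of a 1-D signal: each voltage is V_i = D_i * (x - xhat),
    so all voltages live on a line segment bounded by the firing thresholds."""
    D = [0.1 if i % 2 == 0 else -0.1 for i in range(n_neurons)]  # decoding weights
    thresh = [d * d / 2 for d in D]                              # standard thresholds
    xhat, max_err, max_v = 0.0, 0.0, 0.0
    for t in range(steps):
        x = math.sin(2 * math.pi * t * dt)       # slowly varying target signal
        volts = [d * (x - xhat) for d in D]      # voltages encode the coding error
        # the neuron furthest above threshold fires and updates the readout
        i = max(range(n_neurons), key=lambda j: volts[j] - thresh[j])
        if volts[i] > thresh[i]:
            xhat += D[i]
        xhat -= dt * leak * xhat                 # readout leak
        max_err = max(max_err, abs(x - xhat))
        max_v = max(max_v, max(abs(v) for v in volts))
    return max_err, max_v

err, vmax = run_spike_coding_net()
print(err, vmax)   # both stay small: voltages remain confined to a bounded region
```

Because every voltage is proportional to the same coding error, the voltages stay inside an interval set by the thresholds, the one-dimensional analogue of the bounding box; perturbations matter only insofar as they deform that region.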
Affiliation(s)
- Pedro J Gonçalves
- Department of Electrical and Computer Engineering, University of Tübingen, Tübingen, Germany
16
Uran C, Peter A, Lazar A, Barnes W, Klon-Lipok J, Shapcott KA, Roese R, Fries P, Singer W, Vinck M. Predictive coding of natural images by V1 firing rates and rhythmic synchronization. Neuron 2022; 110:1240-1257.e8. PMID: 35120628; PMCID: PMC8992798; DOI: 10.1016/j.neuron.2022.01.002.
Abstract
Predictive coding is an important candidate theory of self-supervised learning in the brain. Its central idea is that sensory responses result from comparisons between bottom-up inputs and contextual predictions, a process in which rates and synchronization may play distinct roles. We recorded from awake macaque V1 and developed a technique to quantify stimulus predictability for natural images based on self-supervised, generative neural networks. We find that neuronal firing rates were mainly modulated by the contextual predictability of higher-order image features, which correlated strongly with human perceptual similarity judgments. By contrast, V1 gamma (γ)-synchronization increased monotonically with the contextual predictability of low-level image features and emerged exclusively for larger stimuli. Consequently, γ-synchronization was induced by natural images that are highly compressible and low-dimensional. Natural stimuli with low predictability induced prominent, late-onset beta (β)-synchronization, likely reflecting cortical feedback. Our findings reveal distinct roles of synchronization and firing rates in the predictive coding of natural images.
Affiliation(s)
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525 AJ Nijmegen, the Netherlands.
- Alina Peter
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany
- Andreea Lazar
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany
- William Barnes
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Johanna Klon-Lipok
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Katharine A Shapcott
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Frankfurt Institute for Advanced Studies, 60438 Frankfurt, Germany
- Rasmus Roese
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany
- Pascal Fries
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Radboud University Nijmegen, 6525 AJ Nijmegen, the Netherlands
- Wolf Singer
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; Frankfurt Institute for Advanced Studies, 60438 Frankfurt, Germany
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525 AJ Nijmegen, the Netherlands.
17
Winkler M, Dumont G, Schöll E, Gutkin B. Phase response approaches to neural activity models with distributed delay. Biol Cybern 2022; 116:191-203. PMID: 34853889; DOI: 10.1007/s00422-021-00910-9.
Abstract
In weakly coupled neural oscillator networks describing brain dynamics, the coupling delay is often distributed. We present a theoretical framework to calculate the phase response curve of distributed-delay induced limit cycles with infinite-dimensional phase space. Extending previous works, in which non-delayed or discrete-delay systems were investigated, we develop analytical results for phase response curves of oscillatory systems with distributed delay using Gaussian and log-normal delay distributions. We determine the scalar product and normalization condition for the linearized adjoint of the system required for the calculation of the phase response curve. As a paradigmatic example, we apply our technique to the Wilson-Cowan oscillator model of excitatory and inhibitory neuronal populations under the two delay distributions. We calculate and compare the phase response curves for the Gaussian and log-normal delay distributions. The phase response curves obtained from our adjoint calculations match those computed by the direct perturbation method, thereby confirming that the theory of weakly coupled oscillators can be applied successfully for distributed-delay-induced limit cycles. We further use the obtained phase response curves to derive phase interaction functions and determine the possible phase locked states of multiple inter-coupled populations to illuminate different synchronization scenarios. In numerical simulations, we show that the coupling delay distribution can impact the stability of the synchronization between inter-coupled gamma-oscillatory networks.
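The direct perturbation method that serves as the benchmark here can be sketched on a toy oscillator whose phase response is known in closed form, the radial isochron clock (an illustration of the method only, not of the paper's infinite-dimensional adjoint calculation; integrator and parameters are illustrative):

```python
import math

def flow(x, y, w=1.0):
    """Radial isochron clock: r' = r(1 - r^2), theta' = w, so isochrons are radial."""
    r2 = x * x + y * y
    return x * (1.0 - r2) - w * y, y * (1.0 - r2) + w * x

def phase(x, y):
    return math.atan2(y, x)

def prc_direct(theta0, eps, T=10.0, dt=1e-4):
    """Direct perturbation method: kick the oscillator at phase theta0,
    integrate kicked and unkicked copies, and compare asymptotic phases."""
    xu, yu = math.cos(theta0), math.sin(theta0)   # unperturbed copy on the cycle
    xp, yp = xu + eps, yu                         # copy kicked in the x direction
    for _ in range(int(T / dt)):                  # forward Euler integration
        dxu, dyu = flow(xu, yu); xu, yu = xu + dt * dxu, yu + dt * dyu
        dxp, dyp = flow(xp, yp); xp, yp = xp + dt * dxp, yp + dt * dyp
    d = phase(xp, yp) - phase(xu, yu)
    return math.atan2(math.sin(d), math.cos(d))   # wrap shift to (-pi, pi]

theta0, eps = 1.0, 0.2
numeric = prc_direct(theta0, eps)
analytic = math.atan2(math.sin(theta0), math.cos(theta0) + eps) - theta0
print(numeric, analytic)   # the two phase shifts agree closely
```

For this normal form the asymptotic phase of any point is just its polar angle, which is what makes the closed-form comparison possible; for the distributed-delay systems of the paper no such shortcut exists, hence the adjoint machinery.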
Affiliation(s)
- Marius Winkler
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure PSL* University, 24 rue Lhomond, 75005, Paris, France
- Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstrasse 36, 10623, Berlin, Germany
- Grégory Dumont
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure PSL* University, 24 rue Lhomond, 75005, Paris, France
- Eckehard Schöll
- Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstrasse 36, 10623, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität, Philippstraße 13, 10115, Berlin, Germany
- Potsdam Institute for Climate Impact Research, Telegrafenberg A 31, 14473, Potsdam, Germany
- Boris Gutkin
- Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure PSL* University, 24 rue Lhomond, 75005, Paris, France.
- Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, NRU Higher School of Economics, Krivokolenniy sidewalk 3, 101000, Moscow, Russia.
18
Liang J, Zhou C. Criticality enhances the multilevel reliability of stimulus responses in cortical neural networks. PLoS Comput Biol 2022; 18:e1009848. PMID: 35100254; PMCID: PMC8830719; DOI: 10.1371/journal.pcbi.1009848.
Abstract
Cortical neural networks exhibit high internal variability in spontaneous dynamic activities, yet they can robustly and reliably respond to external stimuli with multilevel features, from microscopic irregular spiking of neurons to macroscopic oscillatory local field potentials. A comprehensive study integrating these multilevel features in spontaneous and stimulus-evoked dynamics with seemingly distinct mechanisms is still lacking. Here, we study the stimulus-response dynamics of biologically plausible excitation-inhibition (E-I) balanced networks. We confirm that networks around critical synchronous transition states can maintain strong internal variability but are sensitive to external stimuli. In this dynamical region, applying a stimulus to the network can reduce the trial-to-trial variability and shift the network oscillatory frequency while preserving the dynamical criticality. These multilevel features, widely observed in different experiments, cannot simultaneously occur in non-critical dynamical states. Furthermore, the dynamical mechanisms underlying these multilevel features are revealed using a semi-analytical mean-field theory that derives the macroscopic network field equations from the microscopic neuronal networks, enabling analysis by nonlinear dynamics theory and linear noise approximation. The generic dynamical principle revealed here contributes to a more integrative understanding of neural systems and brain functions and incorporates multimodal and multilevel experimental observations. The E-I balanced neural network in combination with the effective mean-field theory can serve as a mechanistic modeling framework to study the multilevel neural dynamics underlying neural information and cognitive processes.

The complexity and variability of brain dynamical activity range from neuronal spiking and neural avalanches to oscillatory local field potentials of local neural circuits, in both spontaneous and stimulus-evoked states. Such multilevel variable brain dynamics are functionally and behaviorally relevant and are principal components of the underlying circuit organization. To more comprehensively clarify their neural mechanisms, we use a bottom-up approach to study the stimulus-response dynamics of neural circuits. Our model assumes the following key biologically plausible components: excitation-inhibition (E-I) neuronal interaction and chemical synaptic coupling. We show that circuits with E-I balance have a special dynamic sub-region, the critical region. Circuits around this region can account for the emergence of multilevel brain response patterns, both ongoing and stimulus-induced, observed in different experiments, including the reduction of trial-to-trial variability, effective modulation of gamma frequency, and preservation of criticality in the presence of a stimulus. We further analyze the corresponding nonlinear dynamical principles using a novel and highly generalizable semi-analytical mean-field theory. Our computational and theoretical studies explain the cross-level brain dynamical organization of spontaneous and evoked states in a more integrative manner.
Affiliation(s)
- Junhao Liang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Centre for Integrative Neuroscience, Eberhard Karls University of Tübingen, Tübingen, Germany
- Department for Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Department of Physics, Zhejiang University, Hangzhou, China
19
Local dendritic balance enables learning of efficient representations in networks of spiking neurons. Proc Natl Acad Sci U S A 2021; 118:2021925118. PMID: 34876505; PMCID: PMC8685685; DOI: 10.1073/pnas.2021925118.
Abstract
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and thereby can modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo and suggest that both are crucial for representation learning.
20
Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. PMID: 33216741; PMCID: PMC7717583; DOI: 10.1371/journal.pcbi.1008261.
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
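The "local" soft-threshold idea can be sketched in a few lines (the sigmoid shape and the parameter values `beta` and `rmax` are illustrative assumptions, not the paper's exact formulation):

```python
import math

def p_spike(v, thresh=1.0, beta=8.0, dt=1e-3, rmax=400.0):
    """Conditionally Poisson spike rule: instead of a hard threshold, the
    probability of firing in a time bin dt grows smoothly with the voltage v."""
    rate = rmax / (1.0 + math.exp(-beta * (v - thresh)))  # soft (sigmoidal) threshold
    return 1.0 - math.exp(-rate * dt)                     # Poisson spike probability

for v in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"V = {v:.1f}  ->  p(spike) = {p_spike(v):.3f}")
```

Because firing is now stochastic, small voltage jitters no longer lock the whole population into the same time bin, which is the intuition behind the rule's robustness to delays.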
Affiliation(s)
- Jonathan W. Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
21
Oscillations in the auditory system and their possible role. Neurosci Biobehav Rev 2020; 113:507-528. PMID: 32298712; DOI: 10.1016/j.neubiorev.2020.03.030.
Abstract
Neural oscillations are thought to have various roles in brain processing, such as attention modulation, neuronal communication, motor coordination, memory consolidation, decision-making, or feature binding. The role of oscillations in the auditory system is less clear, especially due to the large discrepancy between human and animal studies. Here we describe many methodological issues that confound the results of oscillation studies in the auditory field. Moreover, we discuss the relationship between neural entrainment and oscillations, which remains unclear. Finally, we aim to identify which kinds of oscillations could be specific or salient to the auditory areas and their processing. We suggest that the role of oscillations might dramatically differ between the primary auditory cortex and the more associative auditory areas. Despite the moderate presence of intrinsic low-frequency oscillations in the primary auditory cortex, rhythmic components in the input seem crucial for auditory processing. This allows phase entrainment between the oscillatory phase and rhythmic input, which is an integral part of stimulus selection within the auditory system.
22
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. PMID: 32176682; PMCID: PMC7135338; DOI: 10.1371/journal.pcbi.1007692.
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant, spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.

Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the difference between the desired output signal and the actual output signal.
Affiliation(s)
- Wieland Brendel
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Tübingen AI Center, University of Tübingen, Germany
- Ralph Bourdoukan
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Pietro Vertechi
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
23
Koren V, Andrei AR, Hu M, Dragoi V, Obermayer K. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials. PLoS One 2019; 14:e0222649. PMID: 31622346; PMCID: PMC6797168; DOI: 10.1371/journal.pone.0222649.
Abstract
We propose a new model of the read-out of spike trains that exploits the multivariate structure of responses of neural ensembles. Taking the point of view of a read-out neuron that receives synaptic inputs from a population of projecting neurons, synaptic inputs are weighted with a heterogeneous set of weights. We propose that synaptic weights reflect the role of each neuron within the population for the computational task that the network has to solve. In our case, the computational task is discrimination of binary classes of stimuli, and weights are chosen to maximize the discrimination capacity of the network. We compute synaptic weights as the feature weights of an optimal linear classifier. Once weights have been learned, they weight spike trains and allow computation of the post-synaptic current that modulates the spiking probability of the read-out unit in real time. We apply the model to parallel spike trains from V1 and V4 areas in the behaving monkey Macaca mulatta, while the animal is engaged in a visual discrimination task with binary classes of stimuli. Reading out spike trains with our model allows discrimination of the two classes of stimuli, while the population PSTH entirely fails to do so. Splitting neurons into two subpopulations according to the sign of the weight, we show that the population signals of the two functional subnetworks are negatively correlated. Disentangling the superficial, middle, and deep layers of the cortex, we show that in both V1 and V4, the superficial layers are the most important for discriminating binary classes of stimuli.
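The core of the read-out scheme, a heterogeneously weighted sum succeeding where the pooled PSTH fails, can be sketched on synthetic data (the data construction and the class-mean weights, a simple stand-in for the optimal linear classifier, are illustrative assumptions, not the V1/V4 analysis):

```python
import random

def make_trial(cls, rng, n_neurons=20, base=5.0, mod=3.0):
    """Synthetic single-trial responses for a binary stimulus class (+1 / -1).
    Half the neurons raise their rate for one class, half for the other, so the
    summed population rate (the pooled PSTH) is the same for both classes."""
    return [rng.gauss(base + mod * (1.0 if i < n_neurons // 2 else -1.0) * cls, 1.0)
            for i in range(n_neurons)]

rng = random.Random(1)
trials = [(make_trial(c, rng), c) for c in (1, -1) for _ in range(100)]
n = len(trials[0][0])

# Class-mean difference as heterogeneous read-out weights (a simple stand-in
# for the feature weights of an optimal linear classifier).
mu = {c: [sum(t[i] for t, cc in trials if cc == c) / 100.0 for i in range(n)]
      for c in (1, -1)}
w = [a - b for a, b in zip(mu[1], mu[-1])]
mid = [(a + b) / 2.0 for a, b in zip(mu[1], mu[-1])]

acc = sum((sum(wi * (xi - mi) for wi, xi, mi in zip(w, t, mid)) > 0) == (c == 1)
          for t, c in trials) / len(trials)
psth_gap = abs(sum(mu[1]) - sum(mu[-1]))
print(acc, psth_gap)   # weighted read-out separates the classes; pooled rates do not
```

The sign-split of the weights in this toy also yields the paper's two "functional subnetworks": neurons with positive weights push the read-out one way, negative weights the other.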
Affiliation(s)
- Veronika Koren
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- * E-mail:
| | - Ariana R. Andrei
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
| | - Ming Hu
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - Valentin Dragoi
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
| | - Klaus Obermayer
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
| |
24
Alamia A, VanRullen R. Alpha oscillations and traveling waves: Signatures of predictive coding? PLoS Biol 2019; 17:e3000487. PMID: 31581198; PMCID: PMC6776260; DOI: 10.1371/journal.pbio.3000487.
Abstract
Predictive coding is a key mechanism for understanding the computational processes underlying brain function: in a hierarchical network, higher levels predict the activity of lower levels, and the unexplained residuals (i.e., prediction errors) are passed back up to higher levels. Because of its recursive nature, we wondered whether predictive coding could be related to brain oscillatory dynamics. First, we show that a simple 2-level predictive coding model of visual cortex, with physiological communication delays between levels, naturally gives rise to alpha-band rhythms, similar to experimental observations. Then, we demonstrate that a multilevel version of the same model can explain the occurrence of oscillatory traveling waves across levels, both forward (during visual stimulation) and backward (during rest). Remarkably, the predictions of our model are matched by the analysis of 2 independent electroencephalography (EEG) datasets, in which we observed oscillatory traveling waves in both directions.
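The delay mechanism can be reduced to a one-line caricature: a signal corrected by its own prediction error after a round-trip conduction delay obeys x'(t) = -k x(t - delay), which becomes oscillatory at roughly f = 1/(4 delay). With an assumed 12 ms one-way delay (24 ms round trip), that is about 10 Hz, in the alpha band. The gain and delay below are illustrative values near the instability, not the paper's fitted model:

```python
import numpy as np

dt = 1e-3                 # 1 ms time step
delay = 0.024             # assumed 12 ms up + 12 ms down conduction delay
k = np.pi / (2 * delay)   # gain at the oscillatory instability of
                          # x'(t) = -k * x(t - delay)
n = 2000                  # 2 s of simulation
d = int(round(delay / dt))

x = np.zeros(n)
x[0] = 1.0                # transient input ("flash")
for t in range(1, n):
    delayed = x[t - 1 - d] if t - 1 - d >= 0 else 0.0
    x[t] = x[t - 1] - k * dt * delayed   # delayed error correction

# Dominant frequency of the resulting rhythm (skip the DC bin)
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, dt)
peak_hz = freqs[1:][np.argmax(power[1:])]
```

The least-damped mode of this delayed feedback loop rings near 1/(4 x 0.024 s) ≈ 10.4 Hz, so `peak_hz` lands in the alpha band.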
Affiliation(s)
- Andrea Alamia
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Université de Toulouse, Toulouse, France
- Rufin VanRullen
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Université de Toulouse, Toulouse, France
25
Gamma Oscillations in the Basolateral Amygdala: Biophysical Mechanisms and Computational Consequences. eNeuro 2019; 6:eN-NWR-0388-18. PMID: 30805556; PMCID: PMC6361623; DOI: 10.1523/eneuro.0388-18.2018.
Abstract
The basolateral nucleus of the amygdala (BL) is thought to support numerous emotional behaviors through specific microcircuits, often assumed to consist of feedforward networks of principal cells (PNs) and interneurons. Recurrent and feedback connections, which likely engender oscillatory dynamics within BL, are neither well understood nor often considered. Indeed, oscillations in the gamma frequency range (40–100 Hz) are known to occur in the BL, and yet their origin and effect on local circuits remain unknown. To address this, we constructed a biophysically and anatomically detailed model of the rat BL and its local field potential (LFP) based on the physiological and anatomical literature, along with in vivo and in vitro data we collected on the activities of neurons within the rat BL. Remarkably, the model produced intermittent gamma oscillations (~50–70 Hz) whose properties matched those recorded in vivo, including their entrainment of spiking. BL gamma-band oscillations were generated by the intrinsic circuitry, depending upon reciprocal interactions between PNs and fast-spiking interneurons (FSIs), while connections within these cell types affected the rhythm's frequency. The model allowed us to conduct experimentally impossible tests to characterize the synaptic and spatial properties of gamma. The entrainment of individual neurons to gamma depended on the number of afferent connections they received, and gamma bursts were spatially restricted in the BL. Importantly, the gamma rhythm synchronized PNs and mediated competition between ensembles. Together, these results indicate that the recurrent connectivity of BL expands its computational and communication repertoire.
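The reciprocal PN-FSI loop described here is, at the rate level, the classic excitatory-inhibitory oscillator. The sketch below uses the textbook Wilson-Cowan equations with their well-known limit-cycle parameter set as an illustrative stand-in, not the paper's biophysically detailed spiking model; the 8 ms time constant is likewise an assumption that merely scales the rhythm to millisecond time scales:

```python
import numpy as np

# Classic Wilson-Cowan rate model with the textbook limit-cycle parameters
# (Wilson & Cowan, 1972). Illustrative only: the BL model in the paper is a
# detailed spiking network, not this two-population caricature.
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # E->E, I->E, E->I, I->I couplings
aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7    # sigmoid gains and thresholds
P, Q = 1.25, 0.0                          # external drives
tau = 0.008                               # assumed 8 ms time constant
dt, T = 1e-4, 1.0

def S(x, a, th):
    # Sigmoid shifted so that S(0) = 0
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

n = int(T / dt)
E = np.zeros(n)
I = np.zeros(n)
E[0], I[0] = 0.1, 0.05
for t in range(1, n):
    dE = (-E[t-1] + (1 - E[t-1]) * S(c1*E[t-1] - c2*I[t-1] + P, aE, thE)) / tau
    dI = (-I[t-1] + (1 - I[t-1]) * S(c3*E[t-1] - c4*I[t-1] + Q, aI, thI)) / tau
    E[t] = E[t-1] + dt * dE
    I[t] = I[t-1] + dt * dI

# Sustained E-I oscillation: amplitude of the late, post-transient segment
late_amp = E[n//2:].max() - E[n//2:].min()
```

With these couplings the E-I loop settles onto a limit cycle rather than a fixed point; the cycle's exact frequency depends on the assumed time constants, so no gamma-band value is claimed here.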
26
Jerath R, Beveridge C. Multimodal Integration and Phenomenal Spatiotemporal Binding: A Perspective From the Default Space Theory. Front Integr Neurosci 2019; 13:2. PMID: 30804763; PMCID: PMC6371768; DOI: 10.3389/fnint.2019.00002.
Abstract
How does the integrated and unified conscious experience arise from the vastly distributed activities of the nervous system? How is the information from the many cones of the retina bound with information coming from the cochlea to create the association of sounds with objects in visual space? In this perspective article, we present a novel viewpoint on the "binding problem" in which we describe a metastable operation of the brain and body that may provide insight into this problem. In our view, which is a component of the Default Space Theory (DST), consciousness arises from a metastable synchronization of local computations into a global coherence by a framework of widespread slow and ultraslow oscillations coordinated by the thalamus. We reinforce a notion shared by some consciousness researchers, such as Revonsuo and the Fingelkurts brothers, that a spatiotemporal matrix is the foundation of phenomenological experience and that this phenomenology is directly tied to bioelectric operations of the nervous system. Through the oscillatory binding system we describe, cognitive neuroscientists may be able to more accurately correlate bioelectric activity of the brain and body with the phenomenology of human experience.
Affiliation(s)
- Ravinder Jerath
- Charitable Medical Healthcare Foundation, Augusta, GA, United States
- Connor Beveridge
- Charitable Medical Healthcare Foundation, Augusta, GA, United States
27
Peter A, Uran C, Klon-Lipok J, Roese R, van Stijn S, Barnes W, Dowdall JR, Singer W, Fries P, Vinck M. Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations. eLife 2019; 8:42101. PMID: 30714900; PMCID: PMC6391066; DOI: 10.7554/elife.42101.
Abstract
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. Yet, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
Affiliation(s)
- Alina Peter
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; International Max Planck Research School for Neural Circuits, Frankfurt, Germany
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Johanna Klon-Lipok
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Max Planck Institute for Brain Research, Frankfurt, Germany
- Rasmus Roese
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Sylvia van Stijn
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Max Planck Institute for Brain Research, Frankfurt, Germany
- William Barnes
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Jarrod R Dowdall
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Wolf Singer
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Pascal Fries
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
28
Levy J, Goldstein A, Pratt M, Feldman R. Maturation of Pain Empathy from Child to Adult Shifts from Single to Multiple Neural Rhythms to Support Interoceptive Representations. Sci Rep 2018; 8:1810. PMID: 29379042; PMCID: PMC5788915; DOI: 10.1038/s41598-018-19810-3.
Abstract
While empathy for the pain of conspecifics is evolutionarily ancient and observed in rodents and primates, it also integrates higher-order affective representations. Yet, it is unclear whether human empathy for pain is inborn or matures during development, and what neural processes underpin its maturation. Using magnetoencephalography, we monitored the brain response of children, adolescents, and adults (n = 209) to others' pain, testing the shift from childhood to adult functioning. Results indicate that children's vicarious empathy for pain operates via rudimentary sensory predictions involving alpha oscillations in somatosensory cortex, while adults' response recruits advanced mechanisms of updating sensory predictions and activating affective empathy in visceromotor cortex via higher-level representations involving beta- and gamma-band activity. Our findings suggest that full-blown empathy for others' pain emerges only in adulthood and involves a shift from sensory self-based to interoceptive other-focused mechanisms that support human altruism, maintain self-other differentiation, modulate feedback to monitor the other's state, and activate a plan of action to alleviate the other's suffering.
Affiliation(s)
- Jonathan Levy
- Interdisciplinary Center, Herzliya, 46150, Israel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Abraham Goldstein
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Department of Psychology, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Maayan Pratt
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Department of Psychology, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Ruth Feldman
- Interdisciplinary Center, Herzliya, 46150, Israel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Department of Psychology, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Yale University, Child Study Center, New Haven, CT, 06520, USA
29
Abstract
We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
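As a minimal concrete instance of this framework, the sketch below simulates a single self-exciting Hawkes unit with an exponential kernel via Ogata thinning and checks the stationary mean rate mu / (1 - alpha/beta). The parameters are illustrative, and the sketch does not reproduce the paper's contribution, the time-dependent correlation structure under nonstationary input:

```python
import numpy as np

# Ogata thinning for a self-exciting Hawkes process with exponential kernel
# alpha * exp(-beta * t). Illustrative parameters: branching ratio
# alpha/beta = 0.5, so the stationary mean rate is mu / (1 - 0.5) = 2 Hz.
rng = np.random.default_rng(0)
mu, alpha, beta, T = 1.0, 0.5, 1.0, 2000.0

t, A = 0.0, 0.0          # A = summed excitation from past events at time t
events = []
while True:
    lam_bar = mu + A     # intensity only decays between events: upper bound
    w = rng.exponential(1.0 / lam_bar)
    t += w
    if t >= T:
        break
    A *= np.exp(-beta * w)               # decay excitation to candidate time
    if rng.random() * lam_bar <= mu + A:  # accept with prob lambda(t)/lam_bar
        events.append(t)                  # an event occurs at t ...
        A += alpha                        # ... and adds its own excitation

rate = len(events) / T   # empirical mean rate, close to the analytic 2 Hz
```

The exponential kernel makes the intensity Markovian in `A`, so the whole simulation runs in time linear in the number of events.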
Affiliation(s)
- Neta Ravid Tannenbaum
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
30
How linear response shaped models of neural circuits and the quest for alternatives. Curr Opin Neurobiol 2017; 46:234-240. DOI: 10.1016/j.conb.2017.09.001.
31
Lowet E, Roberts MJ, Peter A, Gips B, De Weerd P. A quantitative theory of gamma synchronization in macaque V1. eLife 2017; 6:26642. PMID: 28857743; PMCID: PMC5779232; DOI: 10.7554/elife.26642.
Abstract
Gamma-band synchronization coordinates brief periods of excitability in oscillating neuronal populations to optimize information transmission during sensation and cognition. Commonly, a stable, shared frequency over time is considered a condition for functional neural synchronization. Here, we demonstrate the opposite: instantaneous frequency modulations are critical to regulate phase relations and synchronization. In monkey visual area V1, nearby local populations driven by different visual stimulation showed different gamma frequencies. When similar enough, these frequencies continually attracted and repulsed each other, which enabled preferred phase relations to be maintained in periods of minimized frequency difference. Crucially, the precise dynamics of frequencies and phases across a wide range of stimulus conditions was predicted from a physics theory that describes how weakly coupled oscillators influence each other's phase relations. Hence, the fundamental mathematical principle of synchronization through instantaneous frequency modulations applies to gamma in V1 and is likely generalizable to other brain regions and rhythms.
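The weak-coupling theory invoked here reduces, for a pair of oscillators, to the Adler equation for the phase difference phi: phi' = d_omega - 2K sin(phi), which predicts phase-locking exactly when the detuning satisfies |d_omega| <= 2K. A minimal sketch, with arbitrary illustrative values for detuning and coupling:

```python
import numpy as np

def phase_difference(d_omega, K, T=20.0, dt=1e-3, phi0=0.0):
    """Integrate the Adler equation phi' = d_omega - 2*K*sin(phi)."""
    phi = phi0
    for _ in range(int(T / dt)):
        phi += dt * (d_omega - 2.0 * K * np.sin(phi))
    return phi

K = 1.0
# |d_omega| < 2K: the two oscillators lock at a fixed phase relation
locked = phase_difference(d_omega=1.0, K=K)
# |d_omega| > 2K: the phase difference drifts without bound
drifting = phase_difference(d_omega=3.0, K=K)

# Locked fixed point: sin(phi*) = d_omega / (2K), i.e. phi* = arcsin(0.5)
```

The same equation captures the abstract's attraction-repulsion picture: near locking, the instantaneous frequencies of the two populations are continually pulled toward and pushed away from each other as phi traverses the sine nonlinearity.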
Affiliation(s)
- Eric Lowet
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Mark J Roberts
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Alina Peter
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Bart Gips
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Peter De Weerd
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
32
Denève S, Alemi A, Bourdoukan R. The Brain as an Efficient and Robust Adaptive Learner. Neuron 2017; 94:969-977. PMID: 28595053; DOI: 10.1016/j.neuron.2017.05.016.
Abstract
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined based only on quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level.
Affiliation(s)
- Sophie Denève
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Alireza Alemi
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Ralph Bourdoukan
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
33
State-dependent alpha peak frequency shifts: Experimental evidence, potential mechanisms and functional implications. Neuroscience 2017; 360:146-154. PMID: 28739525; DOI: 10.1016/j.neuroscience.2017.07.037.
Abstract
Neural populations produce complex oscillatory patterns thought to implement brain function. The dominant rhythm in the healthy adult human brain is formed by alpha oscillations, with a typical power peak most commonly found between 8 and 12 Hz. This alpha peak frequency has been repeatedly discussed as a highly heritable and stable neurophysiological "trait" marker reflecting anatomical properties of the brain and individuals' general cognitive capacity. However, growing evidence suggests that the alpha peak frequency is highly volatile at shorter time scales, depending on the individual's "state". Based on converging experimental and theoretical results from numerous recent studies, we propose here that alpha frequency variability forms the basis of an adaptive mechanism mirroring the activation level of neural populations, with important functional implications. We integrate experimental and computational perspectives to shed new light on the potential role played by shifts in alpha peak frequency and discuss the resulting implications. We further propose a potential mechanism by which alpha oscillations are regulated in a noisy network of spiking neurons in the presence of delayed feedback.
34
Koren V, Denève S. Computational Account of Spontaneous Activity as a Signature of Predictive Coding. PLoS Comput Biol 2017; 13:e1005355. PMID: 28114353; PMCID: PMC5293286; DOI: 10.1371/journal.pcbi.1005355.
Abstract
Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggests that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived, and then simultaneously minimizing the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, and therefore interferes minimally with the network function.
Spontaneous bursts of activity, commonly observed in the brain, can be understood in terms of error-correcting computation within a neural network. Bursts arise automatically in a network that is inefficiently correcting its internal representation.
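The error-correcting logic of this network family can be sketched with a greedy spike rule: a neuron fires only when adding its decoding weight to the leaky readout reduces the squared coding error by more than a firing cost. The weights, leak, cost, and target signal below are illustrative assumptions, and the sketch runs in the noiseless, delay-free regime, so it shows accurate asynchronous tracking rather than Up states:

```python
import numpy as np

dt, T, lam = 1e-4, 1.0, 50.0            # step (s), duration (s), readout leak (1/s)
n_steps = int(T / dt)
w = np.array([0.1] * 10 + [-0.1] * 10)  # decoding weights (half ON, half OFF cells)
cost = 1e-4                             # linear firing cost (illustrative)

t = np.arange(n_steps) * dt
x = np.sin(2 * np.pi * 2 * t)           # slow target signal to be encoded
xhat = 0.0                              # leaky readout of the spike trains
err2 = 0.0
spikes = np.zeros(len(w))

for i in range(n_steps):
    e = x[i] - xhat
    # Greedy rule: neuron j fires if its spike most reduces the loss
    # (x - xhat - w_j)^2 plus the cost, i.e. if w_j*e exceeds w_j^2/2 + cost.
    gain = w * e - 0.5 * w**2 - cost
    j = np.argmax(gain)
    if gain[j] > 0:
        xhat += w[j]                    # spike of neuron j updates the readout
        spikes[j] += 1
    xhat += dt * (-lam * xhat)          # leaky decay of the readout
    err2 += e * e

rmse = np.sqrt(err2 / n_steps)          # tracking error, bounded by ~|w|/2
```

Because every spike must pay for itself in reduced error, the readout hugs the signal to within about half a decoding weight, and positive- and negative-weight neurons split the work between rising and falling phases.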
Affiliation(s)
- Veronika Koren
- Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Sophie Denève
- Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France
35
Benito N, Martín-Vázquez G, Makarova J, Makarov VA, Herreras O. The right hippocampus leads the bilateral integration of gamma-parsed lateralized information. eLife 2016; 5. PMID: 27599221; PMCID: PMC5050016; DOI: 10.7554/elife.16658.
Abstract
It is unclear whether the two hippocampal lobes convey similar or different activities and how they cooperate. Spatial discrimination of electric fields in anesthetized rats allowed us to compare the pathway-specific field potentials corresponding to the gamma-paced CA3 output (CA1 Schaffer potentials) and CA3 somatic inhibition within and between sides. Bilateral excitatory Schaffer gamma waves are generally larger and lead from the right hemisphere with only moderate covariation of amplitude, and drive CA1 pyramidal units more strongly than unilateral waves. CA3 waves lock to the ipsilateral Schaffer potentials, although bilateral coherence was weak. Notably, Schaffer activity may run laterally, as seen after the disruption of the connecting pathways. Thus, asymmetric operations promote the entrainment of CA3-autonomous gamma oscillators bilaterally, synchronizing lateralized gamma strings to converge optimally on CA1 targets. The findings support the view that interhippocampal connections integrate different aspects of information that flow through the left and right lobes.
In humans and other backboned animals, the brain is divided into the left and right hemispheres, which are connected by several large bundles of nerve fibers. Thanks to these fiber tracts, sensory information from each side of the body can reach both sides of the brain. However, although many areas of the brain work with a counterpart on the opposite hemisphere to process this sensory information, they do not necessarily perform the same tasks, or perform them at the same time as their partner. The hippocampus is a brain region that helps to support navigation, to detect novelty, and to produce memories. In fact, our brains contain two hippocampi, one in each hemisphere. Previous studies of the hippocampus have tended to record from only one side of the brain. Benito, Martín-Vázquez, Makarova et al. now compare the activity of the left and right hippocampi, and consider how the two structures might work together. Recordings of the electrical activity of the hippocampi of anesthetized rats show that different groups of neurons fire in rhythmic sequence, forming waves called gamma waves. Successive waves have different amplitudes, and can be thought of as forming 'strings'. The recordings made by Benito et al. show that the two hippocampi produce parallel strings of waves, although the waves that originate in the right hemisphere are generally larger than those that originate in the left. Right-hemisphere waves also tend to begin slightly earlier than their left-hemisphere counterparts. Further experiments revealed that disrupting the fiber tracts between the hemispheres uncouples the waves, which then no longer occur at the same time, and the strings of waves may remain confined to one side of the brain. In healthy animals, however, the right-hemisphere dominance acts as a master-slave arrangement, making the waves from the two hemispheres pair up and merge in the neurons that receive them both. Thus the information running in the two hippocampi can be integrated or compared before being sent to the cortex for task execution or storage. Overall, the findings reported by Benito et al. suggest that different types of information flow through the left and right hemispheres, and that the brain integrates these two streams using asymmetric connections. The next challenge is to identify how the information in the two streams differs: whether each stream reflects different sensory stimuli, different features of a scene, or the difference between recalled and perceived information.
Affiliation(s)
- Nuria Benito
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain
- Julia Makarova
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain; N.I. Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Valeri A Makarov
- N.I. Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Department of Applied Mathematics, Faculty of Mathematics, Universidad Complutense de Madrid, Madrid, Spain
- Oscar Herreras
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain
36
Denève S, Machens CK. Efficient codes and balanced networks. Nat Neurosci 2016; 19:375-82. PMID: 26906504; DOI: 10.1038/nn.4243.
Abstract
Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.
Affiliation(s)
- Sophie Denève
- Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France