1
Koren V, Blanco Malerba S, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. eLife 2025; 13:RP99545. [PMID: 40053385; PMCID: PMC11888603; DOI: 10.7554/elife.99545]
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
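The greedy spike rule at the heart of such efficient coding networks can be sketched in a few lines: each neuron fires only when its spike reduces an instantaneous loss combining coding error and a metabolic cost. The toy model below is a minimal numpy sketch of that idea; the network size, time constants, decoding weights, and one-dimensional sinusoidal signal are all illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative, not taken from the paper).
N, dt, tau, mu = 20, 1e-3, 0.02, 1e-4
steps = 1000
D = rng.choice([-0.1, 0.1], size=N)    # decoding weights for one scalar feature

t = np.arange(steps) * dt
x = 0.5 * np.sin(2 * np.pi * t / 0.2)  # slowly varying target signal

r = np.zeros(N)                        # filtered spike trains
xhat = np.zeros(steps)
n_spikes = 0
thresh = (D**2 + mu) / 2.0             # threshold derived from greedy loss minimization
for k in range(steps):
    err = x[k] - D @ r
    V = D * err                        # "membrane potentials" track the coding error
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:               # spike only if it lowers the instantaneous loss
        r[i] += 1.0
        n_spikes += 1
    r *= np.exp(-dt / tau)             # leaky decay of the readout
    xhat[k] = D @ r

mse_coded = float(np.mean((x - xhat) ** 2))
mse_silent = float(np.mean(x ** 2))    # error of a network that never spikes
print(n_spikes, mse_coded < mse_silent)
```

Because a neuron spikes only when the loss decreases by at least the metabolic cost mu, the spiking network's readout error stays below that of a silent network, which is the sense in which the coding is efficient.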
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Murakami T. Spatial dynamics of spontaneous activity in the developing and adult cortices. Neurosci Res 2025; 212:1-10. [PMID: 39653148; DOI: 10.1016/j.neures.2024.12.002]
Abstract
Even in the absence of external stimuli, the brain remains remarkably active, with neurons continuously firing and communicating with each other. This is not merely random firing of individual neurons but rather orchestrated patterns of activity that propagate throughout the intricate network. Over the past two decades, advancements in tools for observing hemodynamics, membrane potential, and neural calcium signals have allowed researchers to analyze the dynamics of spontaneous activity across different spatial scales, from individual neurons to macroscale brain networks. One of the remarkable findings from these studies is that the spatial patterns of spontaneous activity in the developing brain are vastly different from those in the mature adult brain. Spatial patterns of spontaneous activity during development are essential for the refinement of connections between brain regions, whereas their functional role in the adult brain remains controversial. In this paper, I review the differences in the spatial dynamics of spontaneous activity between developing and adult cortices. I then delve into the cellular mechanisms underlying spontaneous activity, in particular how it is generated and how it propagates, to contribute to a deeper understanding of brain function and its development.
Affiliation(s)
- Tomonari Murakami
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan.
3
Koren V, Malerba SB, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. bioRxiv 2025:2024.04.24.590955. [PMID: 38712237; PMCID: PMC11071478; DOI: 10.1101/2024.04.24.590955]
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
4
Matsui T, Hashimoto T, Murakami T, Uemura M, Kikuta K, Kato T, Ohki K. Orthogonalization of spontaneous and stimulus-driven activity by hierarchical neocortical areal network in primates. Nat Commun 2024; 15:10055. [PMID: 39632809; PMCID: PMC11618767; DOI: 10.1038/s41467-024-54322-x]
Abstract
How biological neural networks reliably process information in the presence of spontaneous activity remains controversial. In mouse primary visual cortex (V1), stimulus-evoked and spontaneous activity show orthogonal (dissimilar) patterns, which is advantageous for separating sensory signals from internal noise. However, studies in carnivore and primate V1, which have functional columns, have reported high similarity between stimulus-evoked and spontaneous activity. Thus, the mechanism of signal-noise separation in the columnar visual cortex may be different from that in rodents. To address this issue, we compared spontaneous and stimulus-evoked activity in marmoset V1 and higher visual areas. In marmoset V1, spontaneous and stimulus-evoked activity showed similar patterns as expected. However, in marmoset higher visual areas, spontaneous and stimulus-evoked activity were progressively orthogonalized along the cortical hierarchy, eventually reaching levels comparable to those in mouse V1. These results suggest that orthogonalization of spontaneous and stimulus-evoked activity is a general principle of cortical computation.
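The orthogonality comparison behind this study can be illustrated with synthetic population patterns. In this sketch, the neuron count and mixing weights are invented for illustration; cosine similarity near zero means the spontaneous and evoked patterns are orthogonal, while similarity near one means the spontaneous activity reuses the evoked pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def pattern_similarity(a, b):
    """Cosine similarity between two activity patterns (0 = orthogonal)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

n_neurons = 200
evoked = rng.normal(size=n_neurons)

# V1-like case: spontaneous activity largely reuses the evoked pattern.
spont_v1 = 0.9 * evoked + 0.1 * rng.normal(size=n_neurons)

# Higher-area-like case: spontaneous activity is an independent pattern.
spont_high = rng.normal(size=n_neurons)

s_v1 = abs(pattern_similarity(evoked, spont_v1))
s_high = abs(pattern_similarity(evoked, spont_high))
print(s_v1 > s_high)
```

The progressive orthogonalization reported along the cortical hierarchy corresponds to the similarity score moving from the first regime toward the second.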
Affiliation(s)
- Teppei Matsui
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
- Graduate School of Brain, Doshisha University, Kyoto, Japan.
- Department of Molecular Physiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan.
- JST-PRESTO, Japan Science and Technology Agency, Tokyo, Japan.
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan.
- Takayuki Hashimoto
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
- Department of Molecular Physiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan.
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan.
- Tomonari Murakami
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Department of Molecular Physiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
- Masato Uemura
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Department of Molecular Physiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Department of Biology, Kansai Medical University, Osaka, Japan
- Kohei Kikuta
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
- Toshiki Kato
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
- Kenichi Ohki
- Department of Physiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
- Department of Molecular Physiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan.
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan.
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan.
5
Koren V, Emanuel AJ, Panzeri S. Spiking networks that efficiently process dynamic sensory features explain receptor information mixing in somatosensory cortex. bioRxiv 2024:2024.06.07.597979. [PMID: 38895477; PMCID: PMC11185787; DOI: 10.1101/2024.06.07.597979]
Abstract
How do biological neural systems efficiently encode, transform, and propagate information about sensory features evolving at different time scales between the sensory periphery and the sensory cortex? Are these computations efficient in normative information processing terms? While previous work has suggested that biologically plausible models of such neural information processing may be implemented efficiently within a single processing layer, how such computations extend across several processing layers is less clear. Here, we model the propagation of multiple time-varying sensory features across a sensory pathway, by extending the theory of efficient coding with spikes to the efficient encoding, transformation, and transmission of sensory signals. These computations are optimally realized by a multilayer spiking network with feedforward networks of spiking neurons (receptor layer) and recurrent excitatory-inhibitory networks of generalized leaky integrate-and-fire neurons (recurrent layers). Our model efficiently realizes a broad class of feature transformations, including positive and negative interactions across features, through specific and biologically plausible structures of feedforward connectivity. We find that mixing of sensory features in the activity of single neurons is beneficial because it lowers the metabolic cost at the network level. We apply the model to the somatosensory pathway by constraining it with empirically measured parameters and include in its last node, analogous to the primary somatosensory cortex (S1), two types of inhibitory neurons: parvalbumin-positive neurons realizing lateral inhibition, and somatostatin-positive neurons realizing winner-take-all inhibition. By implementing a negative interaction across stimulus features, this model captures several intriguing empirical observations from the somatosensory system of the mouse, including a decrease of sustained responses from subcortical networks to S1, a non-linear effect of the knock-out of receptor neuron types on the activity in S1, and amplification of weak signals from sensory neurons across the pathway.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Alan J Emanuel
- Department of Cell Biology, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Istituto Italiano di Tecnologia, Genova, Italy
6
Dezhina Z, Smallwood J, Xu T, Turkheimer FE, Moran RJ, Friston KJ, Leech R, Fagerholm ED. Establishing brain states in neuroimaging data. PLoS Comput Biol 2023; 19:e1011571. [PMID: 37844124; PMCID: PMC10602380; DOI: 10.1371/journal.pcbi.1011571]
Abstract
The definition of a brain state remains elusive, with varying interpretations across different subfields of neuroscience: from the level of wakefulness in anaesthesia, to the activity of individual neurons, voltage in EEG, and blood flow in fMRI. This lack of consensus presents a significant challenge to the development of accurate models of neural dynamics. However, at the foundation of dynamical systems theory lies a definition of what constitutes the 'state' of a system, i.e., a specification that determines the system's future. Here, we propose to adopt this definition to establish brain states in neuroimaging timeseries by applying Dynamic Causal Modelling (DCM) to low-dimensional embeddings of resting and task condition fMRI data. We find that ~90% of subjects in resting conditions are better described by first-order models, whereas ~55% of subjects in task conditions are better described by second-order models. Our work calls into question the status quo of using first-order equations almost exclusively within computational neuroscience and provides a new way of establishing brain states, as well as their associated phase space representations, in neuroimaging datasets.
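The first-order versus second-order distinction can be illustrated with a simple autoregressive model comparison. This is a numpy sketch, not the DCM machinery used in the paper, and the dynamics and coefficients below are invented: a timeseries generated by second-order, oscillatory dynamics is better described (lower BIC) by a model that also conditions on the state two steps back.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar_bic(x, p):
    """Least-squares AR(p) fit; returns BIC (lower = better model)."""
    y = x[p:]
    # Design matrix: lagged copies of the series, lags 1..p.
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)  # residual variance
    n = len(y)
    return n * np.log(sigma2) + p * np.log(n)

# Second-order ("task-like") dynamics: a noise-driven damped oscillation.
T = 2000
x = np.zeros(T)
for t in range(2, T):
    x[t] = 1.5 * x[t - 1] - 0.9 * x[t - 2] + rng.normal()

bic1, bic2 = ar_bic(x, 1), ar_bic(x, 2)
print(bic2 < bic1)
```

A first-order model cannot capture the oscillatory rebound that depends on the state two steps back, so its residual variance, and hence its BIC, is much larger.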
Affiliation(s)
- Zalina Dezhina
- Department of Neuroimaging, King’s College London, United Kingdom
- Ting Xu
- Child Mind Institute, New York, United States of America
- Rosalyn J. Moran
- Department of Neuroimaging, King’s College London, United Kingdom
- Robert Leech
- Department of Neuroimaging, King’s College London, United Kingdom
7
Pancholi R, Sun-Yan A, Peron S. Microstimulation of sensory cortex engages natural sensory representations. Curr Biol 2023; 33:1765-1777.e5. [PMID: 37130521; PMCID: PMC10246453; DOI: 10.1016/j.cub.2023.03.085]
Abstract
Cortical activity patterns occupy a small subset of possible network states. If this is due to intrinsic network properties, microstimulation of sensory cortex should evoke activity patterns resembling those observed during natural sensory input. Here, we use optical microstimulation of virally transfected layer 2/3 pyramidal neurons in the mouse primary vibrissal somatosensory cortex to compare artificially evoked activity with natural activity evoked by whisker touch and movement ("whisking"). We find that photostimulation engages touch- but not whisking-responsive neurons more than expected by chance. Neurons that respond to photostimulation and touch or to touch alone exhibit higher spontaneous pairwise correlations than purely photoresponsive neurons. Exposure to several days of simultaneous touch and optogenetic stimulation raises both overlap and spontaneous activity correlations among touch and photoresponsive neurons. We thus find that cortical microstimulation engages existing cortical representations and that repeated co-presentation of natural and artificial stimulation enhances this effect.
Affiliation(s)
- Ravi Pancholi
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA
- Andrew Sun-Yan
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA
- Simon Peron
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA.
8
de la Fuente LA, Zamberlan F, Bocaccio H, Kringelbach M, Deco G, Perl YS, Pallavicini C, Tagliazucchi E. Temporal irreversibility of neural dynamics as a signature of consciousness. Cereb Cortex 2023; 33:1856-1865. [PMID: 35512291; DOI: 10.1093/cercor/bhac177]
Abstract
Dissipative systems evolve in the preferred temporal direction indicated by the thermodynamic arrow of time. The fundamental nature of this temporal asymmetry led us to hypothesize its presence in the neural activity evoked by conscious perception of the physical world, and thus its covariance with the level of conscious awareness. We implemented a data-driven deep learning framework to decode the temporal inversion of electrocorticography signals acquired from non-human primates. Brain activity time series recorded during conscious wakefulness could be distinguished from their inverted counterparts with high accuracy, using both frequency and phase information. However, classification accuracy was reduced for data acquired during deep sleep and under ketamine-induced anesthesia; moreover, the predictions obtained from multiple independent neural networks were less consistent for sleep and anesthesia than for conscious wakefulness. Finally, the analysis of feature importance scores highlighted transitions between slow (≈20 Hz) and fast frequencies (>40 Hz) as the main contributors to the temporal asymmetry observed during conscious wakefulness. Our results show that a preferred temporal direction is manifest in the neural activity evoked by conscious mentation and in the phenomenology of the passage of time, establishing common ground to tackle the relationship between brain and subjective experience.
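A minimal illustration of temporal irreversibility, using a toy statistic far simpler than the deep-learning decoder in the study (the signal and statistic are invented for illustration): for a dissipative signal that rises slowly and drops quickly, the third moment of the increments changes sign under time reversal, so forward and time-inverted traces are distinguishable.

```python
import numpy as np

def irreversibility_score(x):
    """Third moment of the increments; zero for time-reversible dynamics."""
    d = np.diff(x)
    return float(np.mean(d ** 3))

# Dissipative toy signal: slow charge, fast discharge (a relaxation oscillator).
t = np.arange(1000)
x = (t % 50) / 50.0          # rises for 49 steps, then drops in one step

s_fwd = irreversibility_score(x)
s_rev = irreversibility_score(x[::-1])
print(s_fwd < 0 < s_rev)
```

The many small positive increments contribute little to the cubed sum, while the rare large drops dominate it, so the score is negative forward in time and flips sign exactly when the trace is reversed.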
Affiliation(s)
- Laura Alethia de la Fuente
- Department of Physics, University of Buenos Aires 1428, Argentina; Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires 1058, Argentina; National Scientific and Technical Research Council, Buenos Aires 1425, Argentina
- Federico Zamberlan
- Department of Physics, University of Buenos Aires 1428, Argentina; National Scientific and Technical Research Council, Buenos Aires 1425, Argentina; Cognitive Science and Artificial Intelligence Department, Tilburg University, Tilburg 5000, The Netherlands
- Hernán Bocaccio
- Department of Physics, University of Buenos Aires 1428, Argentina; National Scientific and Technical Research Council, Buenos Aires 1425, Argentina
- Morten Kringelbach
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford OX1, UK; Department of Psychiatry, University of Oxford, Oxford OX3, UK; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University 8000, Denmark
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Barcelona 08010, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; School of Psychological Sciences, Monash University, Melbourne, Clayton VIC 3800, Australia
- Yonatan Sanz Perl
- Department of Physics, University of Buenos Aires 1428, Argentina; Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona 08018, Spain
- Carla Pallavicini
- Department of Physics, University of Buenos Aires 1428, Argentina; National Scientific and Technical Research Council, Buenos Aires 1425, Argentina
- Enzo Tagliazucchi
- Department of Physics, University of Buenos Aires 1428, Argentina; National Scientific and Technical Research Council, Buenos Aires 1425, Argentina; Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibanez, Santiago 7910000, Chile
9
Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023; 21:910-922. [PMID: 36698970; PMCID: PMC9851868; DOI: 10.1016/j.csbj.2023.01.009]
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy
10
The habenula clock influences response to a stressor. Neurobiol Stress 2021; 15:100403. [PMID: 34632007; PMCID: PMC8488752; DOI: 10.1016/j.ynstr.2021.100403]
Abstract
The response of an animal to a sensory stimulus depends on the nature of the stimulus and on expectations, which are mediated by spontaneous activity. Here, we ask how circadian variation in the expectation of danger, and thus the response to a potential threat, is controlled. We focus on the habenula, a mediator of threat response that functions by regulating neuromodulator release, and use zebrafish as the experimental system. Single cell transcriptomics indicates that multiple clock genes are expressed throughout the habenula, while quantitative in situ hybridization confirms that the clock oscillates. Two-photon calcium imaging indicates a circadian change in spontaneous activity of habenula neurons. To assess the role of this clock, a truncated clocka gene was specifically expressed in the habenula. This partially inhibited the clock, as shown by changes in per3 expression as well as altered day-night variation in dopamine, serotonin and acetylcholine levels. Behaviourally, anxiety-like responses evoked by an alarm pheromone were reduced. Circadian effects of the pheromone were disrupted, such that responses in the day resembled those at night. Behaviours that are regulated by the pineal clock and not triggered by stressors were unaffected. We suggest that the habenula clock regulates the expectation of danger, thus providing one mechanism for circadian change in the response to a stressor.
11
Koren V. Uncovering structured responses of neural populations recorded from macaque monkeys with linear support vector machines. STAR Protoc 2021; 2:100746. [PMID: 34430919; PMCID: PMC8365527; DOI: 10.1016/j.xpro.2021.100746]
Abstract
When a mammal, such as a macaque monkey, sees a complex natural image, many neurons in its visual cortex respond simultaneously. Here, we provide a protocol for studying the structure of population responses in laminar recordings with a machine learning model, the linear support vector machine. To unravel the role of single neurons in population responses and the structure of noise correlations, we use a multivariate decoding technique on time-averaged responses. For complete details on the use and execution of this protocol, please refer to Koren et al. (2020a).
- Linear support vector machine (SVM) is an efficient model for decoding from neural data
- Permutation test is a rigorous method for testing the significance of results
- Neural responses along the cortical depth are heterogeneous
- Decoding weights and noise correlations share a similar structure
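The decoding-plus-permutation logic of the protocol can be sketched with synthetic data. In this sketch a nearest-class-mean linear readout stands in for the linear SVM, and the trial counts, neuron counts, and effect size are all invented; the permutation test compares the observed decoding accuracy against a null distribution obtained by shuffling the trial labels.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "population responses": 60 trials x 30 neurons, two stimulus classes.
n_trials, n_neurons = 60, 30
labels = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_neurons))
X[labels == 1] += 0.8          # class-dependent shift in mean response

def decode_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean linear decoder
    (a stand-in for the linear SVM used in the protocol)."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        w = m1 - m0               # linear readout weights
        b = (m1 + m0) / 2.0       # decision boundary midpoint
        pred = int((X[i] - b) @ w > 0)
        correct += int(pred == y[i])
    return correct / len(y)

acc = decode_accuracy(X, labels)

# Permutation test: re-decode with shuffled labels to build a null distribution.
null = np.array([decode_accuracy(X, rng.permutation(labels)) for _ in range(200)])
p_value = float(np.mean(null >= acc))
print(acc, p_value)
```

Accuracy well above the shuffled-label null (a small p-value) indicates that the population response carries significant stimulus information, which is the significance criterion the protocol's permutation test formalizes.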
Affiliation(s)
- Veronika Koren
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Corresponding author
12
Recanatesi S, Farrell M, Lajoie G, Deneve S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520; PMCID: PMC7930246; DOI: 10.1038/s41467-021-21696-1]
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity, i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.

Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
Affiliation(s)
- Stefano Recanatesi
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA.
- Matthew Farrell
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | - Guillaume Lajoie
- Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada.,Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
| | - Sophie Deneve
- Group for Neural Theory, Ecole Normal Superieur, Paris, France
| | | | - Eric Shea-Brown
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA.,Department of Applied Mathematics, University of Washington, Seattle, WA, USA.,Allen Institute for Brain Science, Seattle, WA, USA
| |
Collapse
|
13
|
Habich A, Fehér KD, Antonenko D, Boraxbekk CJ, Flöel A, Nissen C, Siebner HR, Thielscher A, Klöppel S. Stimulating aged brains with transcranial direct current stimulation: Opportunities and challenges. Psychiatry Res Neuroimaging 2020; 306:111179. [PMID: 32972813 DOI: 10.1016/j.pscychresns.2020.111179] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 06/30/2020] [Accepted: 09/03/2020] [Indexed: 02/06/2023]
Abstract
Ageing involves significant neurophysiological changes that are systematic yet follow divergent trajectories across individuals. These changes underlie cognitive impairments in the elderly and also affect how aged brains respond to interventions such as transcranial direct current stimulation (tDCS). The cognitive benefits of tDCS are more variable in the elderly, and older adults respond differently to stimulation protocols than young adults do. The age-related neurophysiological changes influencing responsiveness to tDCS remain to be addressed in depth. We review and discuss the premise that, in comparison with the better-calibrated brain networks of young adults, aged systems operate further from a homoeostatic set-point. We argue that this age-related deviation from the homoeostatic optimum extends the leeway for tDCS to modulate the aged brain, boosting the potency of immediate tDCS effects to induce directional plastic changes towards the homoeostatic equilibrium despite impaired plasticity induction in the elderly. We also consider how age-related neurophysiological changes pose specific challenges for tDCS that necessitate appropriate adaptation of stimulation protocols. Appreciating the distinctive properties of aged brains, and adjusting stimulation parameters accordingly, can increase the potency and reliability of tDCS as a treatment avenue in older adults.
Affiliation(s)
- Annegret Habich
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bolligenstrasse 111, 3000 Bern, Switzerland
- Faculty of Biology, University of Freiburg, Schänzlestrasse 1, 79104 Freiburg, Germany
- Kristoffer D Fehér
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bolligenstrasse 111, 3000 Bern, Switzerland
- Daria Antonenko
- Department of Neurology, University of Greifswald, Ferdinand-Sauerbruch-Straße, 17475 Greifswald, Germany
- Carl-Johan Boraxbekk
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Østvej, 2650 Hvidovre, Denmark
- Department of Radiation Sciences, Umeå University, 90187 Umeå, Sweden
- Institute of Sports Medicine Copenhagen (ISMC), Copenhagen University Hospital Bispebjerg, Bispebjerg Bakke 23, 2400 Copenhagen, Denmark
- Agnes Flöel
- Department of Neurology, University of Greifswald, Ferdinand-Sauerbruch-Straße, 17475 Greifswald, Germany
- German Center for Neurodegenerative Diseases, Ellernholzstraße 1-2, 17489 Greifswald, Germany
- Christoph Nissen
- University Hospital of Psychiatry and Psychotherapy, University of Bern, Bolligenstrasse 111, 3000 Bern, Switzerland
- Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Freiburg, Hauptstraße 5, 79104 Freiburg, Germany
- Hartwig Roman Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Østvej, 2650 Hvidovre, Denmark
- Department of Neurology, Copenhagen University Hospital Bispebjerg, Bispebjerg Bakke 23, 2400 Copenhagen, Denmark
- Institute for Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Nørre Allé 20, 2200 Copenhagen, Denmark
- Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Østvej, 2650 Hvidovre, Denmark
- Department of Electrical Engineering, Technical University of Denmark, Ørsteds Pl. 348, 2800 Kgs. Lyngby, Denmark
- Stefan Klöppel
- University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bolligenstrasse 111, 3000 Bern, Switzerland

14
Pairwise Synchrony and Correlations Depend on the Structure of the Population Code in Visual Cortex. Cell Rep 2020; 33:108367. [PMID: 33176154 DOI: 10.1016/j.celrep.2020.108367] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 12/28/2019] [Accepted: 10/19/2020] [Indexed: 11/22/2022] Open
Abstract
In visual areas of primates, many neurons activate in parallel while the animal is engaged in a behavioral task. In this study, we examine the structure of the population code while the animal performs delayed match-to-sample tasks on complex natural images. Macaque monkeys viewed two consecutive stimuli that were either the same or different while neural activity was recorded with laminar arrays across the cortical depth of areas V1 and V4. We decode correct choice behavior from populations of simultaneously recorded units. Using the decoding weights, we divide neurons into more informative and less informative groups and show that the more informative neurons in V4, but not in V1, are more strongly synchronized, coupled, and correlated than the less informative neurons. When neurons are divided into two coding pools according to their coding preference, spiking synchrony, coupling, and correlations within a coding pool are stronger than across coding pools in V4, but not in V1.
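The pool-based correlation analysis described above can be sketched in a few lines: split neurons by the sign of a decoding weight and compare mean pairwise correlations within versus across the two pools. This is a generic illustration, not the authors' analysis code; the array shapes and the simulated data in the test are assumptions.

```python
import numpy as np

def pool_correlations(counts, weights):
    """Split neurons into two coding pools by the sign of their decoding
    weight, then compare mean pairwise correlations within pools vs
    across pools. counts: (n_neurons, n_trials) spike counts."""
    c = np.corrcoef(counts)
    pos = weights > 0
    within, across = [], []
    n = len(weights)
    for i in range(n):
        for j in range(i + 1, n):
            (within if pos[i] == pos[j] else across).append(c[i, j])
    return np.mean(within), np.mean(across)
```

With synthetic data in which each pool shares a common signal, within-pool correlations come out larger than across-pool ones, the pattern the study reports for V4.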
15
Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. [PMID: 33216741 PMCID: PMC7717583 DOI: 10.1371/journal.pcbi.1008261] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2019] [Revised: 12/04/2020] [Accepted: 08/14/2020] [Indexed: 11/18/2022] Open
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternative formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
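The "local" formulation can be illustrated with a minimal sketch: firing probability grows as a smooth sigmoidal function of membrane potential, and spikes are drawn as a conditionally Poisson process. The parameter values (`v_thresh`, `beta`, `r_max`) are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def soft_threshold_rate(v, v_thresh=1.0, beta=8.0, r_max=100.0):
    """Smooth firing intensity (Hz) as a sigmoidal function of membrane
    potential, replacing the hard integrate-and-fire threshold."""
    return r_max / (1.0 + np.exp(-beta * (v - v_thresh)))

def conditionally_poisson_spikes(v, dt=1e-3):
    """Emit a spike in each time bin with probability rate * dt,
    i.e. a conditionally Poisson spiking rule given the potential."""
    p = np.clip(soft_threshold_rate(v) * dt, 0.0, 1.0)
    return rng.random(np.shape(v)) < p
```

Because the spiking rule is stochastic rather than deterministic, two neurons receiving the same potential no longer fire in lockstep, which is what breaks the ping-pong alternation under synaptic delays.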
Affiliation(s)
- Jonathan W. Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA

16
Lepreux G, Haupt SS, Dürr V. Bimodal modulation of background activity in an identified descending interneuron. J Neurophysiol 2019; 122:2316-2330. [DOI: 10.1152/jn.00864.2018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In the absence of any obvious input, sensory neurons and interneurons can display resting or spontaneous activity. This is often regarded as noise and removed through trial averaging, although it may reflect history-dependent modulation of tuning or fidelity and, thus, be of functional relevance to downstream interneurons. We investigated the history dependence of spontaneous activity in a pair of identified, bimodal descending interneurons of the stick insect, called contralateral ON-type velocity-sensitive interneurons (cONv). The bilateral pair of cONv conveys antennal mechanosensory information to the thoracic ganglia, where it arborizes in regions containing locomotor networks. Each cONv encodes the movement velocity of the contralateral antenna, but also substrate vibration as induced by discrete tapping events. Moreover, cONv display highly fluctuating spontaneous activity that can reach rates similar to those during antennal movement at moderate velocities. Hence, cONv offer a unique opportunity to study history-dependent effects on spontaneous activity and, thus, encoding fidelity in two modalities. In this work, we studied unimodal and cross-modal effects as well as unilateral and bilateral effects, using bilateral recordings of both cONv neurons, while moving one antenna and/or delivering taps to induce substrate vibration. Tapping could reduce spontaneous activity of both neurons, whereas antennal movement reduced spontaneous activity of the contralateral cONv neuron only. Combination of both modalities showed a cooperative effect for some parameter constellations, suggesting bimodal enhancement. Since both stimulus modalities could cause a reduction of spontaneous activity at stimulus intensities occurring during natural locomotion, we conclude that this should enhance neuronal response fidelity during locomotion. NEW & NOTEWORTHY The spontaneous activity in a pair of identified, descending insect interneurons is reduced depending on stimulus history. At rest, spontaneous activity levels are correlated in both interneurons, indicating a common drive from background activity. Whereas taps on the substrate affect both interneurons, antennal movement affects the contralateral interneuron only. Cross-modal interaction occurs, too. Since spontaneous activity is reduced at stimulus intensities encountered during natural locomotion, the mechanism could enhance neuronal response fidelity during locomotion.
Affiliation(s)
- Gaëtan Lepreux
- Department of Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany
- Stephan Shuichi Haupt
- Department of Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Volker Dürr
- Department of Biological Cybernetics, Faculty of Biology, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology – Center of Excellence, Bielefeld University, Bielefeld, Germany

17
Zhang J, Abiose O, Katsumi Y, Touroutoglou A, Dickerson BC, Barrett LF. Intrinsic Functional Connectivity is Organized as Three Interdependent Gradients. Sci Rep 2019; 9:15976. [PMID: 31685830 PMCID: PMC6828953 DOI: 10.1038/s41598-019-51793-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 10/07/2019] [Indexed: 02/06/2023] Open
Abstract
The intrinsic functional architecture of the brain supports moment-to-moment maintenance of an internal model of the world. We hypothesized and found three interdependent architectural gradients underlying the organization of intrinsic functional connectivity within the human cerebral cortex. We used resting state fMRI data from two samples of healthy young adults (N's = 280 and 270) to generate functional connectivity maps of 109 seeds culled from published research, estimated their pairwise similarities, and multidimensionally scaled the resulting similarity matrix. We discovered an optimal three-dimensional solution, accounting for 98% of the variance within the similarity matrix. The three dimensions corresponded to three gradients, which spatially correlate with two functional features (external vs. internal sources of information; content representation vs. attentional modulation) and one structural feature (anatomically central vs. peripheral) of the brain. Remapping the three dimensions into coordinate space revealed that the connectivity maps were organized in a circumplex structure, indicating that the organization of intrinsic connectivity is jointly guided by graded changes along all three dimensions. Our findings emphasize coordination between multiple, continuous functional and anatomical gradients, and are consistent with the emerging predictive coding perspective.
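The embedding step (multidimensional scaling of a similarity-derived distance matrix, with variance explained per retained dimension) can be sketched with classical (Torgerson) MDS. This is a generic illustration rather than the authors' pipeline; how similarities are converted to distances is an assumption.

```python
import numpy as np

def classical_mds(dist, n_dims=3):
    """Classical (Torgerson) MDS: double-center the squared distance
    matrix and embed using the top eigenvectors. Returns coordinates
    and the fraction of variance carried by each retained dimension."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j                # Gram matrix of the configuration
    evals, evecs = np.linalg.eigh(b)
    order = np.argsort(evals)[::-1]               # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    coords = evecs[:, :n_dims] * np.sqrt(np.maximum(evals[:n_dims], 0.0))
    explained = evals[:n_dims] / evals[evals > 0].sum()
    return coords, explained
```

If the distances truly come from a low-dimensional configuration, the retained eigenvalues account for nearly all the positive spectrum, analogous to the 98% figure reported for the three-dimensional solution.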
Affiliation(s)
- Jiahe Zhang
- Department of Psychology, Northeastern University, Boston, MA, 02115, USA
- Olamide Abiose
- Center for Law, Brain and Behavior, Massachusetts General Hospital, Boston, MA, 02114, USA
- Yuta Katsumi
- Department of Psychology, Northeastern University, Boston, MA, 02115, USA
- Alexandra Touroutoglou
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA
- Bradford C Dickerson
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, 02115, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, 149 13th St., Charlestown, MA, 02129, USA

18
Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ, Hafner D, Kepecs A, Kriegeskorte N, Latham P, Lindsay GW, Miller KD, Naud R, Pack CC, Poirazi P, Roelfsema P, Sacramento J, Saxe A, Scellier B, Schapiro AC, Senn W, Wayne G, Yamins D, Zenke F, Zylberberg J, Therien D, Kording KP. A deep learning framework for neuroscience. Nat Neurosci 2019; 22:1761-1770. [PMID: 31659335 PMCID: PMC7115933 DOI: 10.1038/s41593-019-0520-2] [Citation(s) in RCA: 450] [Impact Index Per Article: 75.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Accepted: 09/23/2019] [Indexed: 11/08/2022]
Abstract
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
Affiliation(s)
- Blake A Richards
- Mila, Montréal, Quebec, Canada
- School of Computer Science, McGill University, Montréal, Quebec, Canada
- Department of Neurology & Neurosurgery, McGill University, Montréal, Quebec, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Timothy P Lillicrap
- DeepMind, Inc., London, UK
- Centre for Computation, Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London, UK
- Yoshua Bengio
- Mila, Montréal, Quebec, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Université de Montréal, Montréal, Quebec, Canada
- Rafal Bogacz
- MRC Brain Network Dynamics Unit, University of Oxford, Oxford, UK
- Amelia Christensen
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
- Rui Ponte Costa
- Computational Neuroscience Unit, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, UK
- Department of Physiology, Universität Bern, Bern, Switzerland
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Google Brain, Mountain View, CA, USA
- Colleen J Gillon
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, Ontario, Canada
- Department of Cell & Systems Biology, University of Toronto, Toronto, Ontario, Canada
- Danijar Hafner
- Google Brain, Mountain View, CA, USA
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Adam Kepecs
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Nikolaus Kriegeskorte
- Department of Psychology and Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Peter Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Grace W Lindsay
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kenneth D Miller
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Richard Naud
- University of Ottawa Brain and Mind Institute, Ottawa, Ontario, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Christopher C Pack
- Department of Neurology & Neurosurgery, McGill University, Montréal, Quebec, Canada
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
- Pieter Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- João Sacramento
- Institute of Neuroinformatics, ETH Zürich and University of Zürich, Zürich, Switzerland
- Andrew Saxe
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Benjamin Scellier
- Mila, Montréal, Quebec, Canada
- Université de Montréal, Montréal, Quebec, Canada
- Anna C Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Walter Senn
- Department of Physiology, Universität Bern, Bern, Switzerland
- Daniel Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Joel Zylberberg
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Department of Physics and Astronomy, York University, Toronto, Ontario, Canada
- Center for Vision Research, York University, Toronto, Ontario, Canada
- Konrad P Kording
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA

19
Koren V, Andrei AR, Hu M, Dragoi V, Obermayer K. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials. PLoS One 2019; 14:e0222649. [PMID: 31622346 PMCID: PMC6797168 DOI: 10.1371/journal.pone.0222649] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Accepted: 09/03/2019] [Indexed: 11/18/2022] Open
Abstract
We propose a new model of the read-out of spike trains that exploits the multivariate structure of responses of neural ensembles. Taking the point of view of a read-out neuron that receives synaptic inputs from a population of projecting neurons, synaptic inputs are weighted with a heterogeneous set of weights. We propose that synaptic weights reflect the role of each neuron within the population for the computational task that the network has to solve. In our case, the computational task is discrimination of binary classes of stimuli, and the weights are chosen to maximize the discrimination capacity of the network. We compute synaptic weights as the feature weights of an optimal linear classifier. Once the weights have been learned, they weight the spike trains and allow computation of the postsynaptic current that modulates the spiking probability of the read-out unit in real time. We apply the model to parallel spike trains from areas V1 and V4 of the behaving macaque (Macaca mulatta) while the animal is engaged in a visual discrimination task with binary classes of stimuli. Reading out spike trains with our model allows discrimination of the two classes of stimuli, whereas the population PSTH entirely fails to do so. Splitting neurons into two subpopulations according to the sign of the weight, we show that the population signals of the two functional subnetworks are negatively correlated. Distinguishing the superficial, middle, and deep layers of the cortex, we show that in both V1 and V4, the superficial layers are the most important for discriminating binary classes of stimuli.
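The read-out stage of such a model can be sketched as exponential filtering of each spike train followed by a weighted population sum. This is a minimal illustration; the synaptic time constant and array shapes are assumptions rather than the paper's exact settings.

```python
import numpy as np

def readout_current(spikes, weights, tau=0.02, dt=0.001):
    """Filter each spike train with an exponential synaptic kernel
    (time constant tau), weight it, and sum over neurons to obtain
    the postsynaptic current of the read-out unit at every time bin.
    spikes: (n_neurons, n_bins) binary array; weights: (n_neurons,)."""
    decay = np.exp(-dt / tau)
    traces = np.zeros(spikes.shape[0])
    current = np.empty(spikes.shape[1])
    for t in range(spikes.shape[1]):
        traces = decay * traces + spikes[:, t]   # synaptic low-pass filter
        current[t] = weights @ traces            # weighted population sum
    return current
```

A spike from a positively weighted neuron transiently pushes the current up, and one from a negatively weighted neuron pushes it down, which is how the sign of the classifier weight translates into the two functional subnetworks.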
Affiliation(s)
- Veronika Koren
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Ariana R. Andrei
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Ming Hu
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Valentin Dragoi
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Klaus Obermayer
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany

20
Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. [PMID: 31569062 PMCID: PMC7385530 DOI: 10.1016/j.conb.2019.08.005] [Citation(s) in RCA: 116] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Revised: 08/13/2019] [Accepted: 08/27/2019] [Indexed: 01/27/2023]
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom

21
Nobukawa S, Nishimura H, Yamanishi T. Temporal-specific complexity of spiking patterns in spontaneous activity induced by a dual complex network structure. Sci Rep 2019; 9:12749. [PMID: 31484990 PMCID: PMC6726653 DOI: 10.1038/s41598-019-49286-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Accepted: 08/22/2019] [Indexed: 11/08/2022] Open
Abstract
Temporal fluctuation of neural activity in the brain has an important function in optimal information processing. Spontaneous activity is a source of such fluctuation. The distribution of excitatory postsynaptic potentials (EPSPs) between cortical pyramidal neurons can follow a log-normal distribution. Recent studies have shown that networks connected by weak synapses exhibit characteristics of a random network, whereas networks connected by strong synapses have small-world characteristics of small path lengths and large cluster coefficients. To investigate the relationship between the temporal complexity of spontaneous activity and the structural duality of synaptic connections, we performed a simulation study using a leaky integrate-and-fire spiking neural network with a log-normal distribution of EPSP weights and weight-dependent dual connectivity. We conducted multiscale entropy analysis of the temporal spiking activity. Our simulation demonstrated that, when the strong synaptic connections approach a small-world network, specific spiking patterns arise during irregular spatio-temporal spiking activity, and the complexity at large temporal scales (i.e., slow frequencies) is enhanced. Moreover, we confirmed through a surrogate data analysis that the slow temporal dynamics reflect a deterministic process in the spiking neural networks. This modelling approach may improve the understanding of complex spatio-temporal neural activity in the brain.
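The multiscale entropy analysis mentioned above can be sketched as coarse-graining the activity series at several temporal scales and computing sample entropy at each scale. This is a simplified, generic implementation; the parameters `m` and `r_factor` are conventional defaults, not necessarily those used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn: negative log of the conditional probability that runs of
    m matching points (within tolerance r) also match at point m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)   # Chebyshev distance to template i
            c += int(np.sum(d <= r)) - 1           # exclude the self-match
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4)):
    """Coarse-grain the series by non-overlapping averaging at each
    scale, then compute SampEn of the coarse-grained series."""
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n], dtype=float).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```

Enhanced complexity at large temporal scales, as reported in the abstract, would appear here as larger entropy values at the coarser scales.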
Affiliation(s)
- Sou Nobukawa
- Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Haruhiko Nishimura
- Graduate School of Applied Informatics, University of Hyogo, 7-1-28 Chuo-ku, Kobe, Hyogo, 650-8588, Japan
- Teruya Yamanishi
- AI & IoT Center, Department of Management and Information Sciences, Fukui University of Technology, 3-6-1 Gakuen, Fukui, 910-8505, Japan

22
Hutchinson JB, Barrett LF. The power of predictions: An emerging paradigm for psychological research. Curr Dir Psychol Sci 2019; 28:280-291. [PMID: 31749520 DOI: 10.1177/0963721419831992] [Citation(s) in RCA: 85] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
The last two decades of neuroscience research have produced a growing number of studies suggesting that various psychological phenomena are produced by predictive processes in the brain. When considered together, these studies form a coherent, neurobiologically inspired research program for guiding psychological research about the mind and behavior. In this paper, we briefly consider the common assumptions and hypotheses that unify an emerging framework and discuss its ramifications, both for improving the replicability and robustness of psychological research and for innovating psychological theory by suggesting an alternative ontology of the human mind.
23
Avitan L, Goodhill GJ. Code Under Construction: Neural Coding Over Development. Trends Neurosci 2018; 41:599-609. [DOI: 10.1016/j.tins.2018.05.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Revised: 05/17/2018] [Accepted: 05/25/2018] [Indexed: 01/11/2023]
24
Verzi SJ, Rothganger F, Parekh OD, Quach TT, Miner NE, Vineyard CM, James CD, Aimone JB. Computing with Spikes: The Advantage of Fine-Grained Timing. Neural Comput 2018; 30:2660-2690. [PMID: 30021083 DOI: 10.1162/neco_a_01113] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum or minimum or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
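As an illustration of sorting with spike timing (a latency-code sketch, not the paper's algorithm), non-negative integers can be sorted by letting each "neuron" fire at a time step equal to its value and reading the spikes out in temporal order.

```python
def spiking_sort(values):
    """Sort non-negative integers with a latency code: neuron i emits a
    spike at a time step equal to its value, and sweeping simulated time
    reads the spikes out in ascending order. Runtime is O(n + max(values)),
    a spiking analogue of counting sort."""
    if not values:
        return []
    spike_time = {}
    for i, v in enumerate(values):
        spike_time.setdefault(v, []).append(i)   # schedule each neuron's spike
    out = []
    for t in range(max(values) + 1):             # advance simulated time
        for i in spike_time.get(t, []):          # neurons firing at time t
            out.append(values[i])
    return out
```

The fine-grained timing is doing the comparisons for free: no pair of values is ever compared explicitly, because temporal order of the spikes already encodes numerical order.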
Affiliation(s)
- Stephen J Verzi
- Energy, Earth and Complex Systems Center, Sandia National Laboratories, NM 87185-1138, U.S.A.
- Fredrick Rothganger
- Center for Computing Research, Sandia National Laboratories, NM 87185-1326, U.S.A.
- Ojas D Parekh
- Center for Computing Research, Sandia National Laboratories, NM 87185-1326, U.S.A.
- Tu-Thach Quach
- Threat Intelligence Center, Sandia National Laboratories, NM 87185-1248, U.S.A.
- Nadine E Miner
- System Mission Engineering Center, Sandia National Laboratories, NM 87185-9405, U.S.A.
- Craig M Vineyard
- Center for Computing Research, Sandia National Laboratories, NM 87185-1327, U.S.A.
- Conrad D James
- Microsystems Science, Technology and Components Center, Sandia National Laboratories, NM 87185-1425, U.S.A.
- James B Aimone
- Center for Computing Research, Sandia National Laboratories, NM 87185-1327, U.S.A.
25
Zhou S, Yu Y. Synaptic E-I Balance Underlies Efficient Neural Coding. Front Neurosci 2018; 12:46. [PMID: 29456491 PMCID: PMC5801300 DOI: 10.3389/fnins.2018.00046] [Citation(s) in RCA: 102] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Accepted: 01/19/2018] [Indexed: 12/19/2022] Open
Abstract
Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.
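The core of the balanced regime can be illustrated with a toy simulation (an assumed minimal model, not taken from the review): when inhibitory drive is scaled to track excitatory drive, the mean synaptic input to a neuron cancels and only fluctuations remain.

```python
import random

def mean_net_input(g_i, steps=20000, seed=0):
    """Average net synaptic input to a model neuron when inhibitory
    drive is scaled by g_i to track the excitatory drive."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        e = rng.expovariate(1.0)           # fluctuating excitatory input
        i = g_i * e + rng.gauss(0.0, 0.1)  # inhibition tracks excitation
        total += e - i                     # net drive this time step
    return total / steps

balanced = mean_net_input(g_i=1.0)    # tight balance: mean input near zero
unbalanced = mean_net_input(g_i=0.0)  # no inhibition: mean input near one
```

In the balanced case the mean drive cancels, so any spiking would be driven by fluctuations rather than by a large mean current, which is the irregular-firing regime the review associates with efficient coding.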
Affiliation(s)
- Shanglin Zhou
- State Key Laboratory of Medical Neurobiology, School of Life Science and the Collaborative Innovation Center for Brain Science, Institutes of Brain Science, Center for Computational Systems Biology, Fudan University, Shanghai, China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology, School of Life Science and the Collaborative Innovation Center for Brain Science, Institutes of Brain Science, Center for Computational Systems Biology, Fudan University, Shanghai, China
26
Denève S, Alemi A, Bourdoukan R. The Brain as an Efficient and Robust Adaptive Learner. Neuron 2017; 94:969-977. [PMID: 28595053 DOI: 10.1016/j.neuron.2017.05.016] [Citation(s) in RCA: 57] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Revised: 05/08/2017] [Accepted: 05/09/2017] [Indexed: 12/20/2022]
Abstract
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined from quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, provided they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level.
Affiliation(s)
- Sophie Denève
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Alireza Alemi
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Ralph Bourdoukan
- Group for Neural Theory, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France