1
Quach TT, Duchemin AM. Intelligence, brain structure, dendrites, and genes: Genetic, epigenetic and the underlying of the quadruple helix complexity. Neurosci Biobehav Rev 2025; 175:106212. PMID: 40389043. DOI: 10.1016/j.neubiorev.2025.106212.
Abstract
Intelligence refers to the mental ability to learn, comprehend abstract concepts, and solve complex problems. Twin and adoption studies have provided insights into the influence of the familial environment and highlighted the importance of heritability in the development of cognition. Detecting the relative contribution of brain areas, neuronal structures, and connectomes has brought some understanding of how various brain areas, white/gray matter structures and neuronal connectivity process information and contribute to intelligence. Using histological, anatomical, electrophysiological, neuropsychological, neuroimaging and molecular biology methods, several key concepts have emerged: 1) parietofrontal-hippocampal integration probably constitutes a substrate for smart behavior, 2) neuronal activity results in structural plasticity of dendritic branches responsible for information transfer, critical for learning and memory, 3) intelligent people process information efficiently, 4) the environment triggers mnemonic epigenomic programs (via dynamic regulation of chromatin accessibility, DNA methylation, loop interruption/formation and histone modification) conferring cognitive phenotypes throughout life, and 5) single- and double-strand DNA breaks are prominent in human brain disorders associated with cognitive impairment, including Alzheimer's disease and schizophrenia. Along with these observations, molecular/cellular/biological studies have identified sets of specific genes associated with higher scores on intelligence tests. Interestingly, many of these genes are associated with dendritogenesis. Because dendrite structure/function is involved in cognition, the control of dendrite genesis/maintenance may be critical for understanding the landscape of general/specific cognitive ability and may open new pathways for therapeutic approaches.
Affiliation(s)
- Tam T Quach
- Department of Neuroscience, The Ohio State University, Columbus, OH 43210, USA.
- Anne-Marie Duchemin
- Department of Psychiatry and Behavioral Health, The Ohio State University, Columbus, OH 43210, USA.
2
Meissner-Bernard C, Jenkins B, Rupprecht P, Bouldoires EA, Zenke F, Friedrich RW, Frank T. Computational functions of precisely balanced neuronal microcircuits in an olfactory memory network. Cell Rep 2025; 44:115330. PMID: 39985769. DOI: 10.1016/j.celrep.2025.115330.
Abstract
Models of balanced autoassociative memory networks predict that specific inhibition is critical to store information in connectivity. To explore these predictions, we characterized and manipulated different subtypes of fast-spiking interneurons in the posterior telencephalic area Dp (pDp) of adult zebrafish, the homolog of the piriform cortex. Modeling of recurrent networks with assemblies showed that a precise balance of excitation and inhibition is important to prevent not only excessive firing rates ("runaway activity") but also the stochastic occurrence of high pattern correlations ("runaway correlations"). Consistent with model predictions, runaway correlations emerged in pDp when synaptic balance was perturbed by optogenetic manipulations of feedback inhibition but not feedforward inhibition. Runaway correlations were driven by sparse subsets of strongly active neurons rather than by a general broadening of tuning curves. These results are consistent with balanced neuronal assemblies in pDp and reveal novel computational functions of inhibitory microcircuits in an autoassociative network.
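The "runaway correlations" predicted when feedback inhibition is weakened can be illustrated with a deliberately simplified rate-network toy (an illustrative sketch, not the paper's spiking model of pDp; the clipped-rate dynamics, the mean-rate inhibition term, and all parameter values are assumptions of this sketch):

```python
import numpy as np

# Toy: with heterogeneous recurrent excitation, removing feedback inhibition
# lets a sparse set of strongly coupled neurons dominate the response to
# *any* input, so patterns evoked by unrelated stimuli become abnormally
# correlated, without requiring a general broadening of tuning.
rng = np.random.default_rng(0)
N = 60
gain = rng.uniform(0.5, 1.5, N)                      # heterogeneous recurrent gain
W = gain[:, None] * rng.uniform(0, 2.0 / N, (N, N))  # excitatory recurrence

def steady_response(h, g_inh, steps=400):
    """Leaky rate dynamics, rates clipped to [0, 1], feedback inhibition
    proportional to the population mean rate."""
    r = np.zeros(N)
    for _ in range(steps):
        drive = W @ r + h - g_inh * r.mean()
        r = np.clip(0.5 * r + 0.5 * drive, 0.0, 1.0)
    return r

h1 = rng.uniform(0.0, 0.5, N)   # two unrelated input patterns
h2 = rng.uniform(0.0, 0.5, N)

def pattern_corr(g_inh):
    r1 = steady_response(h1, g_inh)
    r2 = steady_response(h2, g_inh)
    return np.corrcoef(r1, r2)[0, 1]

c_weak = pattern_corr(g_inh=0.0)   # feedback inhibition removed
c_bal = pattern_corr(g_inh=2.0)    # inhibition balances recurrent excitation
print(c_weak, c_bal)
```

With inhibition removed, the same strongly coupled subset saturates for either input and the two response patterns correlate, echoing the paper's observation that runaway correlations are driven by sparse, strongly active neurons.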
Affiliation(s)
- Claire Meissner-Bernard
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Bethan Jenkins
- University of Göttingen, Faculty of Biology and Psychology, 37073 Göttingen, Germany; Olfactory Memory and Behavior Group, European Neuroscience Institute Göttingen - A Joint Initiative of the University Medical Center Göttingen and the Max Planck Institute for Multidisciplinary Sciences, Grisebachstraße 5, 37077 Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany; Göttingen Campus Institute for Dynamics of Biological Networks, 37073 Göttingen, Germany; Max Planck Institute for Biological Intelligence, Am Klopferspitz 18, 82152 Martinsried, Germany
- Peter Rupprecht
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland; Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland; Neuroscience Center Zurich, University of Zurich, 8006 Zürich, Switzerland
- Estelle Arn Bouldoires
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland; University of Basel, 4003 Basel, Switzerland
- Rainer W Friedrich
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland; University of Basel, 4003 Basel, Switzerland.
- Thomas Frank
- University of Göttingen, Faculty of Biology and Psychology, 37073 Göttingen, Germany; Olfactory Memory and Behavior Group, European Neuroscience Institute Göttingen - A Joint Initiative of the University Medical Center Göttingen and the Max Planck Institute for Multidisciplinary Sciences, Grisebachstraße 5, 37077 Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany; Göttingen Campus Institute for Dynamics of Biological Networks, 37073 Göttingen, Germany; Max Planck Institute for Biological Intelligence, Am Klopferspitz 18, 82152 Martinsried, Germany.
3
Clark DG, Beiran M. Structure of activity in multiregion recurrent neural networks. Proc Natl Acad Sci U S A 2025; 122:e2404039122. PMID: 40053363. PMCID: PMC11912375. DOI: 10.1073/pnas.2404039122.
Abstract
Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity, roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis of this disordered system offers insights into multiregion neural data and trained neural networks.
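The communication-subspace construction (low-rank connectivity between regions) can be sketched in a few lines (an illustrative toy, not the authors' mean-field analysis; the network size, gain, and rank-1 choice are arbitrary assumptions of this sketch):

```python
import numpy as np

# Region A is a randomly connected rate network; it projects to region B
# through a rank-1 matrix u v^T.  Whatever region A does internally, the
# current it delivers to B is confined to the single direction u, i.e. a
# one-dimensional "communication subspace".
rng = np.random.default_rng(1)
N = 100                      # neurons per region
g = 1.5                      # recurrent gain of region A
W_A = g / np.sqrt(N) * rng.standard_normal((N, N))
u = rng.standard_normal(N)   # output direction in region B
v = rng.standard_normal(N)   # readout direction from region A
W_AB = np.outer(u, v) / N    # rank-1 inter-region connectivity

dt = 0.1
x = rng.standard_normal(N)   # region A state
currents = []                # cross-region current at each time step
for _ in range(500):
    r = np.tanh(x)
    x = x + dt * (-x + W_A @ r)
    currents.append(W_AB @ r)

C = np.array(currents)                     # (time, N_B) cross-region input
s = np.linalg.svd(C, compute_uv=False)
print(s[1] / s[0])                         # ratio ~0: transmission is 1-D
```

The SVD confirms that the transmitted current matrix has numerical rank 1 even though the within-region fluctuations that generate it are high-dimensional, which is the tension between generating and transmitting activity that the paper analyzes.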
Affiliation(s)
- David G. Clark
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
- Manuel Beiran
- Zuckerman Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, Columbia University, New York, NY 10027
4
Koren V, Blanco Malerba S, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. eLife 2025; 13:RP99545. PMID: 40053385. PMCID: PMC11888603. DOI: 10.7554/elife.99545.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
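The greedy "spike only if it reduces the loss" rule at the heart of efficient-coding spiking networks can be sketched in one dimension (a simplified caricature, not the paper's derived E-I network; the single positive readout, the weights, and the time constants are assumptions of this sketch):

```python
import numpy as np

# Each spike of neuron i adds its decoding weight d[i] to a leaky readout
# x_hat.  A neuron fires only when doing so reduces the squared readout
# error (x - x_hat)^2 by more than a built-in cost d[i]^2 / 2, so the
# network tracks the signal with as few spikes as the cost allows.
rng = np.random.default_rng(2)
N, dt, lam = 20, 1e-3, 10.0
d = rng.uniform(0.03, 0.07, N)          # decoding weight of each neuron
T = 2000
t = np.arange(T) * dt
x = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # positive target signal
x_hat = 0.0
spikes = np.zeros(N)
errs = np.empty(T)
for k in range(T):
    x_hat *= (1.0 - lam * dt)                 # readout decays between spikes
    gain = d * (x[k] - x_hat) - d**2 / 2.0    # loss reduction if i fired now
    i = np.argmax(gain)
    if gain[i] > 0:                           # greedy rule: spike only if it helps
        x_hat += d[i]
        spikes[i] += 1
    errs[k] = x[k] - x_hat
print(np.mean(np.abs(errs)), int(spikes.sum()))
```

The readout stays within roughly half a decoding weight of the signal, illustrating the instantaneous-loss-minimization logic; the paper's contribution is deriving the full biophysical E-I architecture from this kind of objective.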
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
5
Feulner B, Perich MG, Miller LE, Clopath C, Gallego JA. A neural implementation model of feedback-based motor learning. Nat Commun 2025; 16:1805. PMID: 39979257. PMCID: PMC11842561. DOI: 10.1038/s41467-024-54738-5.
Abstract
Animals use feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that compensates for its effects. Here, we tested the hypothesis that all the processes necessary for motor adaptation may emerge as properties of a controller that adaptively updates its policy. We trained a recurrent neural network to control its own output through an error-based feedback signal, which allowed it to rapidly counteract external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal enabled the network to learn to compensate for persistent perturbations through a trial-by-trial process. The network activity changes during learning matched those of neuronal populations in monkey primary motor cortex, known to mediate both movement correction and motor adaptation, during the same task. Furthermore, our model natively reproduced several key aspects of behavioural studies in humans and monkeys. Thus, key features of trial-by-trial motor adaptation can arise from the internal properties of a recurrent neural circuit that adaptively controls its output based on ongoing feedback.
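The central idea, one error signal serving both fast within-movement correction and slow trial-by-trial adaptation, can be stripped to scalars (an illustrative sketch, not the trained RNN; the gain perturbation, feedback gain, and learning rate are arbitrary assumptions of this sketch):

```python
# One sensed error e drives two processes: an online feedback correction
# within the movement, and a slow update of the feedforward command across
# trials that gradually cancels a persistent gain perturbation.
target = 1.0
perturb = 0.7        # unknown multiplicative perturbation of the plant
k_fb = 0.5           # online feedback gain
eta = 0.3            # learning rate of the trial-by-trial update
b = 0.0              # adapted component of the feedforward command
initial_errors, corrected_errors = [], []
for trial in range(40):
    u = target + b                     # feedforward motor command
    y = perturb * u                    # perturbed outcome
    e = target - y                     # error sensed by feedback
    initial_errors.append(e)
    y_corr = perturb * (u + k_fb * e)  # rapid within-movement correction
    corrected_errors.append(target - y_corr)
    b += eta * e                       # slow adaptation from the same signal
print(initial_errors[0], initial_errors[-1])
```

Within a trial, feedback shrinks the error immediately; across trials, the pre-correction error decays toward zero, which is the signature of adaptation the paper reproduces with a recurrent network and a biologically plausible plasticity rule.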
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, UK
- Matthew G Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, QC, Canada
- Mila (Quebec Artificial Intelligence Institute), Montréal, QC, Canada
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK.
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London, UK.
6
Chen Y, Xiao Z, Du Y, Zhao L, Zhang L, Wu Z, Zhu D, Zhang T, Yao D, Hu X, Liu T, Jiang X. A Unified and Biologically Plausible Relational Graph Representation of Vision Transformers. IEEE Trans Neural Netw Learn Syst 2025; 36:3231-3243. PMID: 38163310. DOI: 10.1109/tnnls.2023.3342810.
Abstract
Vision transformer (ViT) and its variants have achieved remarkable success in various tasks. The key characteristic of these ViT models is their adoption of different strategies for aggregating spatial patch information within artificial neural networks (ANNs). However, a unified representation of different ViT architectures is still lacking for the systematic understanding and assessment of model representation performance. Moreover, how these well-performing ViT ANNs resemble real biological neural networks (BNNs) is largely unexplored. To answer these fundamental questions, we, for the first time, propose a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key subgraphs: an aggregation graph and an affine graph. The former considers ViT tokens as nodes and describes their spatial interaction, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we found that: 1) model performance was closely related to graph measures; 2) the proposed relational graph representation of ViT has high similarity with real BNNs; and 3) there was a further improvement in model performance when training with a superior model to constrain the aggregation graph.
7
Zhang MF, Fan BY, Zhang CY, Chen K, Tian WD, Zhang TH. Activity waves in condensed excitable phases of Quincke rollers. Soft Matter 2025; 21:927-934. PMID: 39803758. DOI: 10.1039/d4sm01168f.
Abstract
Traveling waves are universal in excitable systems; yet, the microscopic dynamics of wave propagation is inaccessible in conventional excitable systems. Here, we show that active colloids of Quincke rollers driven by a periodic electric field can form condensed excitable phases. Distinct from existing excitable media, condensed excitable colloids can be tuned reversibly between active liquids and active crystals in which two distinct waves can be excited, respectively. In active liquids, waves propagate by splitting and cross over each other, like sound waves, in collision. In active crystals, waves annihilate or converge, like shock waves, in collision. We show that the microscopic dynamics of sound waves is dominated by electrostatic repulsions while the dynamics of shock waves is encoded with a local density-dependent memory of propulsion. The condensed excitable colloids with tunable and controllable dynamics offer unexplored opportunities for the study of nonlinear phenomena.
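The annihilating collisions described for waves in the active crystal can be illustrated with a classic excitable-medium toy, a Greenberg-Hastings cellular automaton on a ring (illustrative of generic excitable media only, not a model of Quincke rollers):

```python
import numpy as np

# A single excitation on a ring launches two counter-propagating fronts.
# When they collide, each front runs into the refractory wake of the other
# and both die, i.e. shock-wave-like annihilation rather than crossing.
N, REST, EXC, REF = 101, 0, 1, 2
state = np.full(N, REST)
state[0] = EXC
visited = (state == EXC)                  # cells the wave has passed through
for step in range(60):
    excited_nbr = (np.roll(state, 1) == EXC) | (np.roll(state, -1) == EXC)
    state = np.where(state == EXC, REF,              # excited -> refractory
            np.where(state == REF, REST,             # refractory -> rest
            np.where(excited_nbr, EXC, REST)))       # rest + excited nbr -> excited
    visited |= (state == EXC)
print(np.count_nonzero(state == EXC), int(visited.sum()))
```

Every cell is excited exactly once before the two fronts meet and annihilate, leaving the whole ring at rest; a medium supporting sound-like waves would instead let the fronts pass through each other.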
Affiliation(s)
- Meng Fei Zhang
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
- Bao Ying Fan
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
- Chuan Yu Zhang
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
- Kang Chen
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
- Wen-de Tian
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
- Tian Hui Zhang
- Center for Soft Condensed Matter Physics and Interdisciplinary Research & School of Physical Science and Technology, Soochow University, Suzhou 215006, P. R. China.
8
Koren V, Malerba SB, Schwalger T, Panzeri S. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks. bioRxiv [Preprint] 2025:2024.04.24.590955. PMID: 38712237. PMCID: PMC11071478. DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically-plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
9
Meissner-Bernard C, Zenke F, Friedrich RW. Geometry and dynamics of representations in a precisely balanced memory network related to olfactory cortex. eLife 2025; 13:RP96303. PMID: 39804831. PMCID: PMC11733691. DOI: 10.7554/elife.96303.
Abstract
Biological memory networks are thought to store information by experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand computational consequences of E/I assemblies under biologically realistic constraints we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that 'focused' activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both relatedness of inputs and an individual's experience. Such continuous representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
Affiliation(s)
| | - Friedemann Zenke
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- University of BaselBaselSwitzerland
| | - Rainer W Friedrich
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- University of BaselBaselSwitzerland
10
Wei M, Amann A, Burylko O, Han X, Yanchuk S, Kurths J. Synchronization cluster bursting in adaptive oscillator networks. Chaos 2024; 34:123167. PMID: 39718812. DOI: 10.1063/5.0226257.
Abstract
Adaptive dynamical networks are ubiquitous in real-world systems. This paper aims to explore the synchronization dynamics in networks of adaptive oscillators based on a paradigmatic system of adaptively coupled phase oscillators. Our numerical observations reveal the emergence of synchronization cluster bursting, characterized by periodic transitions between cluster synchronization and global synchronization. By investigating a reduced model, the mechanisms underlying synchronization cluster bursting are clarified. We show that a minimal model exhibiting this phenomenon can be reduced to a phase oscillator with complex-valued adaptation. Furthermore, the adaptivity of the system leads to the appearance of additional symmetries, and thus, to the coexistence of stable bursting solutions with very different Kuramoto order parameters.
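The paradigmatic system the paper builds on, phase oscillators with adaptively evolving coupling weights, can be sketched as follows (identical natural frequencies are absorbed into a rotating frame; eps, alpha, beta, and the network size are illustrative choices, not the paper's parameters):

```python
import numpy as np

# Adaptively coupled phase oscillators:
#   dtheta_i/dt = -(1/N) * sum_j kappa_ij * sin(theta_i - theta_j + alpha)
#   dkappa_ij/dt = -eps * (kappa_ij + sin(theta_i - theta_j + beta))
# The slow adaptation (eps << 1) keeps every kappa_ij inside [-1, 1] and
# reshapes the network while the phases evolve; the Kuramoto order
# parameter R measures the instantaneous degree of synchronization.
rng = np.random.default_rng(3)
N, eps, alpha, beta, dt = 20, 0.01, 0.3, -0.6, 0.01
theta = rng.uniform(0, 2 * np.pi, N)
kappa = rng.uniform(-1, 1, (N, N))

R_trace = []
for _ in range(20000):
    dphi = theta[:, None] - theta[None, :]          # phase differences
    dtheta = -(kappa * np.sin(dphi + alpha)).mean(axis=1)
    dkappa = -eps * (kappa + np.sin(dphi + beta))
    theta = theta + dt * dtheta
    kappa = kappa + dt * dkappa
    R_trace.append(np.abs(np.exp(1j * theta).mean()))  # order parameter R
print(R_trace[-1], kappa.min(), kappa.max())
```

Tracking R over time is how one detects the bursting reported in the paper: periodic excursions between cluster synchronization (intermediate R) and global synchronization (R near 1) as the coupling matrix slowly reorganizes.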
Affiliation(s)
- Mengke Wei
- School of Mathematical Science, Yangzhou University, Yangzhou 225002, China
- Potsdam Institute for Climate Impact Research, Telegrafenberg, Potsdam 14473, Germany
- Faculty of Civil Engineering and Mechanics, Jiangsu University, Zhenjiang 212013, China
- Andreas Amann
- Potsdam Institute for Climate Impact Research, Telegrafenberg, Potsdam 14473, Germany
- School of Mathematical Sciences, University College Cork, Cork T12 XF62, Ireland
- Oleksandr Burylko
- Potsdam Institute for Climate Impact Research, Telegrafenberg, Potsdam 14473, Germany
- Institute of Mathematics, National Academy of Sciences of Ukraine, Kyiv 01024, Ukraine
- Institute of Mathematics, Humboldt University Berlin, Berlin 12489, Germany
- Xiujing Han
- Faculty of Civil Engineering and Mechanics, Jiangsu University, Zhenjiang 212013, China
- Serhiy Yanchuk
- Potsdam Institute for Climate Impact Research, Telegrafenberg, Potsdam 14473, Germany
- School of Mathematical Sciences, University College Cork, Cork T12 XF62, Ireland
- Jürgen Kurths
- Potsdam Institute for Climate Impact Research, Telegrafenberg, Potsdam 14473, Germany
- Research Institute of Intelligent Complex Systems, Fudan University, Shanghai 200433, China
11
Hu B, Temiz NZ, Chou CN, Rupprecht P, Meissner-Bernard C, Titze B, Chung S, Friedrich RW. Representational learning by optimization of neural manifolds in an olfactory memory network. bioRxiv [Preprint] 2024:2024.11.17.623906. PMID: 39605658. PMCID: PMC11601331. DOI: 10.1101/2024.11.17.623906.
Abstract
Higher brain functions depend on experience-dependent representations of relevant information that may be organized by attractor dynamics or by geometrical modifications of continuous "neural manifolds". To explore these scenarios we analyzed odor-evoked activity in telencephalic area pDp of juvenile and adult zebrafish, the homolog of piriform cortex. No obvious signatures of attractor dynamics were detected. Rather, olfactory discrimination training selectively enhanced the separation of neural manifolds representing task-relevant odors from other representations, consistent with predictions of autoassociative network models endowed with precise synaptic balance. Analytical approaches using the framework of manifold capacity revealed multiple geometrical modifications of representational manifolds that supported the classification of task-relevant sensory information. Manifold capacity predicted odor discrimination across individuals, indicating a close link between manifold geometry and behavior. Hence, pDp and possibly related recurrent networks store information in the geometry of representational manifolds, resulting in joint sensory and semantic maps that may support distributed learning processes.
Affiliation(s)
- Bo Hu
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Nesibe Z. Temiz
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
- Chi-Ning Chou
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Peter Rupprecht
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Neuroscience Center Zurich, 8057 Zurich, Switzerland
- Brain Research Institute, University of Zurich, 8057 Zurich, Switzerland
- Claire Meissner-Bernard
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- Benjamin Titze
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- SueYeon Chung
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Rainer W. Friedrich
- Friedrich Miescher Institute for Biomedical Research, Fabrikstrasse 24, 4056 Basel, Switzerland
- University of Basel, 4003 Basel, Switzerland
12
Yamane Y. Adaptation of the inferior temporal neurons and efficient visual processing. Front Behav Neurosci 2024; 18:1398874. PMID: 39132448. PMCID: PMC11310006. DOI: 10.3389/fnbeh.2024.1398874.
Abstract
Numerous studies examining the responses of individual neurons in the inferior temporal (IT) cortex have revealed their characteristics such as two-dimensional or three-dimensional shape tuning, objects, or category selectivity. While these basic selectivities have been studied assuming that their response to stimuli is relatively stable, physiological experiments have revealed that the responsiveness of IT neurons also depends on visual experience. The activity changes of IT neurons occur over various time ranges; among these, repetition suppression (RS), in particular, is robustly observed in IT neurons without any behavioral or task constraints. I observed a similar phenomenon in the ventral visual neurons in macaque monkeys while they engaged in free viewing and actively fixated on one consistent object multiple times. This observation indicates that the phenomenon also occurs in natural situations during which the subject actively views stimuli without forced fixation, suggesting that this phenomenon is an everyday occurrence and widespread across regions of the visual system, making it a default process for visual neurons. Such short-term activity modulation may be a key to understanding the visual system; however, the circuit mechanism and the biological significance of RS remain unclear. Thus, in this review, I summarize the observed modulation types in IT neurons and the known properties of RS. Subsequently, I discuss adaptation in vision, including concepts such as efficient and predictive coding, as well as the relationship between adaptation and psychophysical aftereffects. Finally, I discuss some conceptual implications of this phenomenon as well as the circuit mechanisms and the models that may explain adaptation as a fundamental aspect of visual processing.
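One textbook circuit-level account of repetition suppression, response fatigue through short-term synaptic depression, can be sketched as follows (illustrative only; the review weighs several candidate mechanisms, and the release fraction and recovery time constant here are arbitrary):

```python
import numpy as np

# Each presentation releases a fraction U of the available synaptic
# resource x, so the response (~ U * x) shrinks over rapid repeats; the
# resource, and hence the response, recovers exponentially during a long
# stimulus-free gap.
U, tau_rec = 0.4, 5.0            # release fraction, recovery time constant (s)

def recover(x, gap):
    """Exponential recovery of the resource toward 1 over `gap` seconds."""
    return 1.0 - (1.0 - x) * np.exp(-gap / tau_rec)

r = []                           # responses to five rapid repeats, 0.5 s apart
x = 1.0
for _ in range(5):
    r.append(U * x)              # response tracks the released resource
    x = recover(x * (1.0 - U), 0.5)
x = recover(x, 30.0)             # long stimulus-free interval
r_recovered = U * x              # response after the rest period
print(r, r_recovered)
```

The response declines monotonically across repeats and largely recovers after the gap, the qualitative RS signature described in the review; the same scaffold can be extended with normalization or prediction-error terms to compare the competing accounts of adaptation.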
Affiliation(s)
- Yukako Yamane
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
13
van Andel DM, Sprengers JJ, Königs M, de Jonge MV, Bruining H. Effects of Bumetanide on Neurocognitive Functioning in Children with Autism Spectrum Disorder: Secondary Analysis of a Randomized Placebo-Controlled Trial. J Autism Dev Disord 2024; 54:894-904. [PMID: 36626004 PMCID: PMC10907457 DOI: 10.1007/s10803-022-05841-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/18/2022] [Indexed: 01/11/2023]
Abstract
We present the secondary analysis of neurocognitive tests in the 'Bumetanide in Autism Medication and Biomarker' (BAMBI; EUDRA-CT-2014-001560-35) study, a randomized double-blind placebo-controlled (1:1) trial testing 3-month bumetanide treatment (≤1 mg twice daily) in unmedicated children aged 7-15 years with ASD. Children with IQ ≥ 70 were analyzed for baseline deficits and treatment effects on the intention-to-treat population with generalized linear models, principal component analysis, and network analysis. Ninety-two children were allocated to treatment and 83 were eligible for analyses. Heterogeneous neurocognitive impairments were found that were unaffected by bumetanide treatment. Network analysis showed higher modularity after treatment (mean difference: -0.165, 95% CI: -0.317 to -0.013, p = .034) and changes in the relative importance of response inhibition in the neurocognitive network (mean difference: -0.037, 95% CI: -0.073 to -0.001, p = .042). This study offers perspectives to include neurocognitive tests in ASD trials.
Affiliation(s)
- Dorinde M van Andel
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Jan J Sprengers
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Marsh Königs
- Department of Paediatrics, Emma Neuroscience Group, Amsterdam UMC Emma Children's Hospital, Amsterdam, The Netherlands
- Maretha V de Jonge
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Department Education and Child Studies, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, The Netherlands
- Hilgo Bruining
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Child and Adolescent Psychiatry and Psychosocial Care, Emma Children's Hospital, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
- N=You Neurodevelopmental Precision Center, Amsterdam Neuroscience, Amsterdam Reproduction and Development, Amsterdam UMC, Amsterdam, The Netherlands
- Levvel, Center for Child and Adolescent Psychiatry, Amsterdam, The Netherlands
- Department of Child and Adolescent Psychiatry, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
14
Chapman GW, Hasselmo ME. Predictive learning by a burst-dependent learning rule. Neurobiol Learn Mem 2023; 205:107826. [PMID: 37696414 DOI: 10.1016/j.nlm.2023.107826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 08/05/2023] [Accepted: 09/03/2023] [Indexed: 09/13/2023]
Abstract
Humans and other animals are able to quickly generalize the latent dynamics of spatiotemporal sequences, often from a minimal number of previous experiences. Additionally, internal representations of external stimuli must remain stable, even in the presence of sensory noise, in order to be useful for informing behavior. In contrast, typical machine learning approaches require many thousands of samples, generalize poorly to previously unseen examples, or fail completely to predict at long timescales. Here, we propose a novel neural network module that incorporates hierarchy and recurrent feedback terms, constituting a simplified model of neocortical microcircuits. This microcircuit predicts spatiotemporal trajectories at the input layer using a temporal error minimization algorithm. We show that this module is able to predict further into the future, and with higher accuracy, than traditional models. Investigating this model, we find that successive predictive models learn representations that are increasingly removed from the raw sensory space, namely successive temporal derivatives of the positional information. Next, we introduce a spiking neural network model that implements the rate model through a recently proposed biological learning rule utilizing dual-compartment neurons. We show that this network performs well on the same tasks as the mean-field models, by developing intrinsic dynamics that follow the dynamics of the external stimulus while coordinating transmission of higher-order dynamics. Taken as a whole, these findings suggest that hierarchical temporal abstraction of sequences, rather than feed-forward reconstruction, may be responsible for the ability of neural systems to quickly adapt to novel situations.
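The claim that successive predictive stages represent successive temporal derivatives can be sketched with a plain finite-difference extrapolator: predict the next sample from the current value plus estimated velocity, acceleration, and so on. This is a conceptual stand-in under that interpretation, not the authors' network; the function name and trajectory are invented for illustration:

```python
def taylor_predict(history, order):
    """Predict the next sample of a 1-D trajectory by summing the last
    value and the last finite differences up to `order` (a stand-in for
    stages that encode successive temporal derivatives)."""
    levels = [list(history)]
    for _ in range(order):
        prev = levels[-1]
        levels.append([b - a for a, b in zip(prev, prev[1:])])
    return sum(level[-1] for level in levels)

traj = [t * t for t in range(6)]       # 0, 1, 4, 9, 16, 25; true next is 36
pred2 = taylor_predict(traj, order=2)  # value + velocity + acceleration
pred0 = taylor_predict(traj, order=0)  # naive "repeat the last value"
```

For this quadratic trajectory the second-order predictor is exact while the zeroth-order one lags, mirroring why higher-order temporal abstraction helps long-horizon prediction.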
Affiliation(s)
- G William Chapman
- Center for Systems Neuroscience, Boston University, Boston, MA, USA.
15
Han MJ, Tsukruk VV. Trainable Bilingual Synaptic Functions in Bio-enabled Synaptic Transistors. ACS NANO 2023; 17:18883-18892. [PMID: 37721448 PMCID: PMC10569090 DOI: 10.1021/acsnano.3c04113] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Accepted: 09/14/2023] [Indexed: 09/19/2023]
Abstract
The signal transmission of the nervous system is regulated by neurotransmitters. Depending on the type of neurotransmitter released by presynaptic neurons, neurons can be either excited or inhibited. Maintaining a balance between excitatory and inhibitory synaptic responses is crucial for the nervous system's versatility, elasticity, and ability to perform parallel computing. Mimicking the brain's versatility and plasticity therefore requires creating a preprogrammed balance between excitatory and inhibitory responses. Despite substantial efforts to investigate this balancing, simulating the interaction between excitatory and inhibitory synapses has so far required complex circuit configurations. As a meaningful alternative, an optoelectronic synapse that balances excitatory and inhibitory responses with the assistance of light mediation is proposed here, deploying the humidity-sensitive chiral nematic phases of polysaccharide cellulose nanocrystals. The environment-induced pitch tuning changes the polarization of the helicoidal organization, affording different hysteresis effects with subsequent excitatory and inhibitory nonvolatile behavior in the bio-electrolyte-gated transistors. By applying voltage pulses combined with stimulation by chiral light, the artificial optoelectronic synapse tunes not only synaptic functions but also learning pathways and color recognition. These multifunctional bio-based synaptic field-effect transistors show potential for enhanced parallel neuromorphic computing and robot vision technology.
Affiliation(s)
- Moon Jong Han
- Department of Electronic Engineering, Gachon University, Seongnam 13120, Republic of Korea
- Vladimir V. Tsukruk
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, United States
16
Yamada T, Watanabe T, Sasaki Y. Are sleep disturbances a cause or consequence of autism spectrum disorder? Psychiatry Clin Neurosci 2023; 77:377-385. [PMID: 36949621 PMCID: PMC10871071 DOI: 10.1111/pcn.13550] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 03/10/2023] [Accepted: 03/17/2023] [Indexed: 03/24/2023]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by core symptoms such as atypical social communication, stereotyped behaviors, and restricted interests. One of the comorbid symptoms of individuals with ASD is sleep disturbance. There are two major hypotheses regarding the neural mechanism underlying ASD, i.e., the excitation/inhibition (E/I) imbalance and the altered neuroplasticity hypotheses. However, the pathology of ASD remains unclear due to inconsistent research results. This paper argues that sleep is a confounding factor and thus must be considered when examining the pathology of ASD, because sleep plays an important role in modulating the E/I balance and neuroplasticity in the human brain. Investigation of the E/I balance and neuroplasticity during sleep might enhance our understanding of the neural mechanisms of ASD. It may also lead to the development of neurobiologically informed interventions to supplement existing psychosocial therapies.
Affiliation(s)
- Takashi Yamada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
- Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
- Yuka Sasaki
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
17
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 06/12/2023] [Indexed: 07/18/2023] Open
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcome of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map to help navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
18
Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. [PMID: 37055616 PMCID: PMC11058347 DOI: 10.1038/s41583-023-00693-x] [Citation(s) in RCA: 59] [Impact Index Per Article: 29.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2023] [Indexed: 04/15/2023]
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
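A minimal sketch of the low-rank idea mentioned above: with rank-1 connectivity J = m nᵀ / N, a single linear update projects arbitrary population activity onto the one-dimensional "manifold" direction m, so heterogeneous single-neuron activity nevertheless lives on a low-dimensional structure set by connectivity. The network size, patterns, and seed below are invented for illustration:

```python
import math
import random

random.seed(0)
N = 40
m = [random.gauss(0, 1) for _ in range(N)]   # output pattern (manifold direction)
n = [random.gauss(0, 1) for _ in range(N)]   # input-selection pattern

def step(x):
    """One linear update x <- J x with rank-1 connectivity J_ij = m_i * n_j / N."""
    k = sum(nj * xj for nj, xj in zip(n, x)) / N
    return [mi * k for mi in m]

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

x = [random.gauss(0, 1) for _ in range(N)]   # arbitrary initial activity
x = step(x)
alignment = abs(cosine(x, m))                # activity now lies along m
```

After one step the population state is (anti)parallel to m regardless of the initial condition, a toy version of how low-rank interactions confine dynamics to a manifold.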
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
19
Garnier Artiñano T, Andalibi V, Atula I, Maestri M, Vanni S. Biophysical parameters control signal transfer in spiking network. Front Comput Neurosci 2023; 17:1011814. [PMID: 36761840 PMCID: PMC9905747 DOI: 10.3389/fncom.2023.1011814] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Introduction Information transmission and representation in both natural and artificial networks is dependent on connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and post-synaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on their model to explore how basic membrane properties and synaptic delays affect information transfer. Methods The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three different parameter regimes which comprised either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime whose output units were optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error. Results Biophysical parameters showed a major impact on information transfer metrics. The classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall and particularly low reconstruction error were more dependent on higher firing rates in LIF units. In AdEx units, the firing rates were lower and less information was transferred, but interestingly the highest information transmission rates were no longer overlapping with the highest firing rates. Discussion Our findings can be reflected on the predictive coding theory of the cerebral cortex and may suggest information transfer qualities as a phenomenological quality of biological cells.
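A leaky integrate-and-fire unit of the kind used in the hidden layer above can be sketched in a few lines; the membrane parameters below are generic textbook-style values, not the study's optimized ones:

```python
def simulate_lif(i_ext, dt=1e-4, tau=0.02, v_rest=-0.07, v_th=-0.05,
                 v_reset=-0.07, r_m=1e8):
    """Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler:
    dv/dt = (-(v - v_rest) + r_m * i_ext) / tau, with spike-and-reset at
    v_th. Returns spike times in seconds; parameter values illustrative."""
    v, spikes = v_rest, []
    for k, i in enumerate(i_ext):
        v += dt * (-(v - v_rest) + r_m * i) / tau
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# 300 ms of constant 0.3 nA drive is suprathreshold -> regular firing
spikes = simulate_lif([0.3e-9] * 3000)
```

A suprathreshold constant current yields regular firing, while halving it keeps the neuron silent, the kind of firing-rate dependence the study's information-transfer metrics are sensitive to.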
Affiliation(s)
- Tomás Garnier Artiñano
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Vafa Andalibi
- Department of Computer Science, Indiana University Bloomington, Bloomington, IN, United States
- Iiris Atula
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Matteo Maestri
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Simo Vanni (corresponding author)
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
20
Ali A, Ahmad N, de Groot E, Johannes van Gerven MA, Kietzmann TC. Predictive coding is a consequence of energy efficiency in recurrent neural networks. Patterns (N Y) 2022; 3:100639. [PMID: 36569556 PMCID: PMC9768680 DOI: 10.1016/j.patter.2022.100639] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 12/24/2021] [Accepted: 10/27/2022] [Indexed: 11/24/2022]
Abstract
Predictive coding is a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modeling to demonstrate that such architectural hardwiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimize their energy consumption while operating in predictive environments, the networks self-organize into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down-driven predictions, we demonstrate, via virtual lesioning experiments, that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time.
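The core intuition, that inhibiting predictable input saves energy, can be illustrated without any training: on a slowly varying signal, transmitting prediction errors under even a trivial last-sample predictor costs far less total activity than transmitting raw values, while losing no information. Both the predictor and the cost proxy are assumptions of this sketch, not the paper's trained networks:

```python
import math

def energy(signal):
    """Proxy for metabolic cost: total absolute activity transmitted."""
    return sum(abs(s) for s in signal)

def predictive_code(signal):
    """Transmit only the prediction error under a trivial internal model
    that predicts the previous sample (losslessly invertible by a
    running sum)."""
    prev, errors = 0.0, []
    for s in signal:
        errors.append(s - prev)    # the predicted part is inhibited away
        prev = s
    return errors

predictable = [math.sin(0.1 * t) for t in range(200)]
raw_cost = energy(predictable)
pc_cost = energy(predictive_code(predictable))
```

An energy-minimizing network therefore has an incentive to develop exactly this split into prediction and error signals, which is what the authors observe emerging.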
Affiliation(s)
- Abdullahi Ali (corresponding author)
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Nasir Ahmad
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Elgar de Groot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands
- Tim Christian Kietzmann (corresponding author)
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
21
Lee J, Jo J, Lee B, Lee JH, Yoon S. Brain-inspired Predictive Coding Improves the Performance of Machine Challenging Tasks. Front Comput Neurosci 2022; 16:1062678. [PMID: 36465966 PMCID: PMC9709416 DOI: 10.3389/fncom.2022.1062678] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 10/28/2022] [Indexed: 09/19/2023] Open
Abstract
Backpropagation has been regarded as the most favorable algorithm for training artificial neural networks. However, it has been criticized for its biological implausibility because its learning mechanism contradicts that of the human brain. Although backpropagation has achieved super-human performance in various machine learning applications, it often shows limited performance in specific tasks. We collectively refer to such tasks as machine-challenging tasks (MCTs) and aim to investigate methods to enhance machine learning for MCTs. Specifically, we start with a natural question: can a learning mechanism that mimics the human brain improve MCT performance? We hypothesized that a learning mechanism replicating the human brain is effective for tasks where machine intelligence struggles. Multiple experiments corresponding to specific types of MCTs where machine intelligence has room to improve were performed using predictive coding, a more biologically plausible learning algorithm than backpropagation. This study regarded incremental learning, long-tailed recognition, and few-shot recognition as representative MCTs. With extensive experiments, we examined the effectiveness of predictive coding, which robustly outperformed backpropagation-trained networks on the MCTs. We demonstrated that predictive-coding-based incremental learning alleviates the effect of catastrophic forgetting. Next, predictive-coding-based learning mitigates the classification bias in long-tailed recognition. Finally, we verified that a network trained with predictive coding can correctly predict corresponding targets from few samples. We analyzed the experimental results by drawing analogies between the properties of predictive coding networks and those of the human brain, and we discuss the potential of predictive coding networks in general machine learning.
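As a one-weight illustration of the kind of local, error-driven update that makes predictive coding more biologically plausible than backpropagation: the weight change uses only the locally available presynaptic activity and prediction error, not a globally backpropagated gradient. This reduces to the delta rule and is a conceptual stand-in, not the paper's full hierarchical algorithm:

```python
def train_local(pairs, w=0.0, lr=0.05, epochs=50):
    """One-weight sketch of error-driven learning: prediction p = w*u,
    local prediction error e = y - p, and an update that needs only
    quantities available at the synapse."""
    for _ in range(epochs):
        for u, y in pairs:
            e = y - w * u      # error unit activity
            w += lr * e * u    # local, Hebbian-like update
    return w

pairs = [(1.0, 2.0), (-0.5, -1.0), (0.8, 1.6)]   # generated by y = 2*u
w = train_local(pairs)
```

The weight converges near the generative value 2.0, showing that purely local error correction can do the credit assignment that backpropagation performs globally in this simple case.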
Affiliation(s)
- Jangho Lee
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Jeonghee Jo
- Institute of New Media and Communications, Seoul National University, Seoul, South Korea
- Byounghwa Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Jung-Hoon Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Sungroh Yoon
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, South Korea
22
Hu B, Guan ZH, Chen G, Chen CLP. Neuroscience and Network Dynamics Toward Brain-Inspired Intelligence. IEEE Trans Cybern 2022; 52:10214-10227. [PMID: 33909581 DOI: 10.1109/tcyb.2021.3071110] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This article surveys the interdisciplinary research of neuroscience, network science, and dynamic systems, with emphasis on the emergence of brain-inspired intelligence. To replicate brain intelligence, a practical way is to reconstruct cortical networks with dynamic activities that nourish the brain functions, instead of using only artificial computing networks. The survey provides a complex network and spatiotemporal dynamics (abbr. network dynamics) perspective for understanding the brain and cortical networks and, furthermore, develops integrated approaches of neuroscience and network dynamics toward building brain-inspired intelligence with learning and resilience functions. Presented are fundamental concepts and principles of complex networks, neuroscience, and hybrid dynamic systems, as well as relevant studies about the brain and intelligence. Other promising research directions, such as brain science, data science, quantum information science, and machine behavior are also briefly discussed toward future applications.
23
L'esprit prédictif : introduction à la théorie du cerveau bayésien [The predictive mind: an introduction to the Bayesian brain theory]. Encephale 2022; 48:436-444. [DOI: 10.1016/j.encep.2021.09.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 09/28/2021] [Accepted: 09/30/2021] [Indexed: 01/13/2023]
24
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not be merely a form of noise for which the brain must compensate; rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
25
Shaffer C, Westlin C, Quigley KS, Whitfield-Gabrieli S, Barrett LF. Allostasis, Action, and Affect in Depression: Insights from the Theory of Constructed Emotion. Annu Rev Clin Psychol 2022; 18:553-580. [PMID: 35534123 PMCID: PMC9247744 DOI: 10.1146/annurev-clinpsy-081219-115627] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The theory of constructed emotion is a systems neuroscience approach to understanding the nature of emotion. It is also a general theoretical framework to guide hypothesis generation for how actions and experiences are constructed as the brain continually anticipates metabolic needs and attempts to meet those needs before they arise (termed allostasis). In this review, we introduce this framework and hypothesize that allostatic dysregulation is a trans-disorder vulnerability for mental and physical illness. We then review published findings consistent with the hypothesis that several symptoms in major depressive disorder (MDD), such as fatigue, distress, context insensitivity, reward insensitivity, and motor retardation, are associated with persistent problems in energy regulation. Our approach transforms the current understanding of MDD as resulting from enhanced emotional reactivity combined with reduced cognitive control and, in doing so, offers novel hypotheses regarding the development, progression, treatment, and prevention of MDD.
Affiliation(s)
- Clare Shaffer
- Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Christiana Westlin
- Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Karen S Quigley
- Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- VA Bedford Healthcare System, Bedford, Massachusetts, USA
- Susan Whitfield-Gabrieli
- Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts, USA
26
Merging pruning and neuroevolution: towards robust and efficient controllers for modular soft robots. Knowl Eng Rev 2022. [DOI: 10.1017/s0269888921000151] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Artificial neural networks (ANNs) can be employed as controllers for robotic agents. Their structure is often complex, with many neurons and connections, especially when the robots have many sensors and actuators distributed across their bodies and/or when high expressive power is desirable. Pruning (removing neurons or connections) reduces the complexity of the ANN, thus increasing its energy efficiency, and has been reported to improve the generalization capability, in some cases. In addition, it is well-known that pruning in biological neural networks plays a fundamental role in the development of brains and their ability to learn. In this study, we consider the evolutionary optimization of neural controllers for the case study of Voxel-based soft robots, a kind of modular, bio-inspired soft robots, applying pruning during fitness evaluation. For a locomotion task, and for centralized as well as distributed controllers, we experimentally characterize the effect of different forms of pruning on after-pruning effectiveness, life-long effectiveness, adaptability to new terrains, and behavior. We find that incorporating some forms of pruning in neuroevolution leads to almost equally effective controllers as those evolved without pruning, with the benefit of higher robustness to pruning. We also observe occasional improvements in generalization ability.
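Unstructured magnitude pruning, one common form of the pruning such studies apply during evaluation, can be sketched in a few lines; the function name and its tie-breaking convention (weights tied with the cutoff are also removed) are illustrative assumptions:

```python
def prune_by_magnitude(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of connections
    (unstructured magnitude pruning over a weight matrix given as a
    list of rows)."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * fraction)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

w = [[0.9, -0.05, 0.4],
     [0.02, -0.7, 0.1]]
pruned = prune_by_magnitude(w, 0.5)   # removes the 3 weakest of 6 weights
```

A controller evolved with pruning applied during fitness evaluation, as in the study, is selected to remain effective when connections like these are zeroed.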
27
Zhou D, Lynn CW, Cui Z, Ciric R, Baum GL, Moore TM, Roalf DR, Detre JA, Gur RC, Gur RE, Satterthwaite TD, Bassett DS. Efficient coding in the economics of human brain connectomics. Netw Neurosci 2022; 6:234-274. [PMID: 36605887 PMCID: PMC9810280 DOI: 10.1162/netn_a_00223] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 12/08/2021] [Indexed: 01/07/2023] Open
Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior (beyond the conventional network efficiency metric) for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
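The random-walk communication dynamics underlying the theory can be illustrated on a toy graph: making the target region absorbing and propagating the walker's distribution gives the probability that a message arrives within a step budget, and adding a shortcut edge raises it. The graphs and function below are invented for illustration, not taken from the study:

```python
def hitting_probability(adj, source, target, steps):
    """Probability that an unbiased random walk on an undirected graph
    (0/1 adjacency matrix as a list of rows) reaches `target` from
    `source` within `steps` hops, treating `target` as absorbing."""
    n = len(adj)
    p = [0.0] * n
    p[source] = 1.0
    absorbed = 0.0
    for _ in range(steps):
        q = [0.0] * n
        for i in range(n):
            if p[i] == 0.0:
                continue
            deg = sum(adj[i])
            for j in range(n):
                if adj[i][j]:
                    q[j] += p[i] / deg   # spread mass uniformly over neighbors
        absorbed += q[target]            # mass arriving at the target is absorbed
        q[target] = 0.0
        p = q
    return absorbed

# path graph 0-1-2-3 versus the same nodes with a shortcut edge 0-3
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
short = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
p_path = hitting_probability(path, 0, 3, 6)
p_short = hitting_probability(short, 0, 3, 6)
```

Under a fixed step budget the shortcut graph delivers the walker far more reliably, a toy version of how connectivity shapes the rate-fidelity trade-offs the paper formalizes.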
Affiliation(s)
- Dale Zhou
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christopher W. Lynn
- Initiative for the Theoretical Sciences, Graduate Center, City University of New York, New York, NY, USA
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ, USA
- Zaixu Cui
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Rastko Ciric
- Department of Bioengineering, Schools of Engineering and Medicine, Stanford University, Stanford, CA, USA
- Graham L. Baum
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- David R. Roalf
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John A. Detre
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Raquel E. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Theodore D. Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Dani S. Bassett
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Santa Fe Institute, Santa Fe, NM, USA
28
Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021; 11:23376. [PMID: 34862429 PMCID: PMC8642544 DOI: 10.1038/s41598-021-02779-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 11/22/2021] [Indexed: 11/14/2022] Open
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
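A minimal sketch of what device mismatch does to a deployed network, using an illustrative lognormal-perturbation model with an assumed 20% coefficient of variation (not SynSense's measured mismatch statistics): each "chip" multiplies every synaptic weight by an independent random factor.

```python
import numpy as np

rng = np.random.default_rng(4)

def apply_mismatch(weights, cv=0.2):
    """Simulate device mismatch: scale each synaptic weight by an independent
    lognormal factor (coefficient of variation ~cv), as if the same network
    were deployed on a different analog chip."""
    return weights * rng.lognormal(mean=0.0, sigma=cv, size=weights.shape)

w = np.ones((50, 50))          # nominal trained weights
w_chip = apply_mismatch(w)     # what one particular chip actually implements

assert 0.9 < w_chip.mean() < 1.15   # mean shifted only slightly (~exp(cv**2 / 2))
assert not np.allclose(w_chip, w)   # but every chip differs weight-by-weight
```

A mismatch-robust training method, as in this paper, aims to make network output insensitive to exactly this kind of per-parameter perturbation.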
Affiliation(s)
- Julian Büchel
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dmitrii Zendrikov
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Sergio Solinas
- Department of Biomedical Science, University of Sassari, Piazza Università, 21, 07100, Sassari, Sardegna, Italy
- Giacomo Indiveri
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dylan R Muir
- SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
29
A general principle of dendritic constancy: A neuron's size- and shape-invariant excitability. Neuron 2021; 109:3647-3662.e7. [PMID: 34555313 DOI: 10.1016/j.neuron.2021.08.028] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Revised: 06/29/2021] [Accepted: 08/20/2021] [Indexed: 11/20/2022]
Abstract
Reducing neuronal size results in less membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their responses to somatic current injections. However, the impact of a neuron's size and shape on its voltage responses to dendritic synaptic activation is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs in unbranched cables, showing that these are entirely independent of dendritic length. For a given synaptic density, neuronal responses depend only on the average dendritic diameter and intrinsic conductivity. This remains valid for a wide range of morphologies irrespective of their arborization complexity. Spiking models indicate that morphology-invariant numbers of spikes approximate the percentage of active synapses. In contrast to spike rate, spike times do depend on dendrite morphology. In summary, neuronal excitability in response to distributed synaptic inputs is largely unaffected by dendrite length or complexity.
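The length-invariance claim can be checked numerically with a minimal finite-difference model of a sealed-end passive cable receiving uniformly distributed synaptic current (nominal, made-up parameter values; a sketch, not the paper's analytical derivation): the steady-state depolarization settles at the ratio of input density to membrane conductivity, whatever the cable length.

```python
import numpy as np

def cable_steady_state(length_um, n=200, diam_um=1.0,
                       g_m=1e-4, r_a=100.0, i_syn=1e-5):
    """Steady-state voltage of a sealed-end passive cable with uniformly
    distributed synaptic current density, solved by finite differences.
    Units are nominal; only the relative behavior matters here."""
    dx = length_um / n
    g_ax = (np.pi * diam_um ** 2 / 4.0) / (r_a * dx)  # axial coupling conductance
    g_leak = g_m * np.pi * diam_um * dx               # membrane leak per node
    i_node = i_syn * np.pi * diam_um * dx             # distributed input per node
    G = np.zeros((n, n))
    for k in range(n):
        G[k, k] = g_leak
        if k > 0:                                     # couple to left neighbor
            G[k, k] += g_ax
            G[k, k - 1] -= g_ax
        if k < n - 1:                                 # couple to right neighbor
            G[k, k] += g_ax
            G[k, k + 1] -= g_ax
    return np.linalg.solve(G, np.full(n, i_node))

v_short = cable_steady_state(100.0)
v_long = cable_steady_state(400.0)
# Mean depolarization is length-invariant: i_syn / g_m regardless of cable size.
assert np.allclose(v_short.mean(), v_long.mean(), rtol=1e-6)
assert abs(v_short.mean() - 1e-5 / 1e-4) < 1e-6
```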
30
Zeldenrust F, Gutkin B, Denève S. Efficient and robust coding in heterogeneous recurrent networks. PLoS Comput Biol 2021; 17:e1008673. [PMID: 33930016 PMCID: PMC8115785 DOI: 10.1371/journal.pcbi.1008673] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 05/12/2021] [Accepted: 04/07/2021] [Indexed: 11/19/2022] Open
Abstract
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online, close to optimally, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally, and networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) integrators show strong correlations with other integrators and resonators with other resonators, whereas correlations are much weaker between neurons with different coding properties, and 2) 'type 2' neurons are more coherent with the overall network activity than 'type 1' neurons.
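A one-neuron caricature of the two assumptions (linear decoding, greedy reduction of the mean-squared error) is easy to write down; parameter values below are illustrative, not from the paper: the neuron fires exactly when a spike would lower the squared error between the signal and its leaky linear readout.

```python
import numpy as np

def track_signal(x, w=0.1, lam=0.1):
    """Greedy spike rule: fire whenever a spike (which adds decoder weight w
    to the leaky readout x_hat) would reduce the squared tracking error."""
    x_hat, spikes, readout = 0.0, [], []
    for xt in x:
        x_hat *= (1.0 - lam)            # leaky linear decoder
        if xt - x_hat > w / 2.0:        # spiking lowers (xt - x_hat)**2
            x_hat += w
            spikes.append(1)
        else:
            spikes.append(0)
        readout.append(x_hat)
    return np.array(readout), np.array(spikes)

t = np.linspace(0, 2 * np.pi, 500)
x = 0.5 * (1 + np.sin(t))               # non-negative target signal
x_hat, spikes = track_signal(x)
assert np.mean((x - x_hat) ** 2) < 0.02  # readout tracks the signal
assert spikes.sum() > 0
```

In the paper's full networks, many such neurons with heterogeneous decoder weights and dynamics share the tracking job; this sketch only shows why every individual spike is error-reducing.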
Affiliation(s)
- Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Boris Gutkin
- Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France
- Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France
31
Recanatesi S, Farrell M, Lajoie G, Deneve S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520 PMCID: PMC7930246 DOI: 10.1038/s41467-021-21696-1] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Accepted: 01/22/2021] [Indexed: 01/02/2023] Open
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity, i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data. Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
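On the linear side, the dimensionality of network activity is often summarized by the participation ratio of the covariance eigenvalues; this sketch (with a made-up 3-dimensional latent variable driving 100 units) shows it roughly recovering the latent dimensionality. The paper additionally uses nonlinear intrinsic-dimensionality measures not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

def participation_ratio(activity):
    """Linear dimensionality of activity: PR = (sum lam)^2 / sum(lam^2)
    over eigenvalues lam of the covariance matrix (a common linear proxy)."""
    lam = np.linalg.eigvalsh(np.cov(activity.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

# 100-unit activity driven by a 3D latent variable plus small noise.
latent = rng.standard_normal((5000, 3))
mixing = rng.standard_normal((3, 100))
activity = latent @ mixing + 0.01 * rng.standard_normal((5000, 100))

pr = participation_ratio(activity)
assert 2.0 < pr < 4.0   # close to the true latent dimensionality of 3
```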
Affiliation(s)
- Stefano Recanatesi
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA
- Matthew Farrell
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Guillaume Lajoie
- Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada
- Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Sophie Deneve
- Group for Neural Theory, École Normale Supérieure, Paris, France
- Eric Shea-Brown
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Allen Institute for Brain Science, Seattle, WA, USA
32
Sohn H, Meirhaeghe N, Rajalingham R, Jazayeri M. A Network Perspective on Sensorimotor Learning. Trends Neurosci 2021; 44:170-181. [PMID: 33349476 PMCID: PMC9744184 DOI: 10.1016/j.tins.2020.11.007] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 09/11/2020] [Accepted: 11/20/2020] [Indexed: 12/15/2022]
Abstract
What happens in the brain when we learn? Ever since the foundational work of Cajal, the field has made numerous discoveries as to how experience could change the structure and function of individual synapses. However, more recent advances have highlighted the need for understanding learning in terms of complex interactions between populations of neurons and synapses. How should one think about learning at such a macroscopic level? Here, we develop a conceptual framework to bridge the gap between the different scales at which learning operates, from synapses to neurons to behavior. Using this framework, we explore the principles that guide sensorimotor learning across these scales, and set the stage for future experimental and theoretical work in the field.
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
33
Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. [PMID: 33216741 PMCID: PMC7717583 DOI: 10.1371/journal.pcbi.1008261] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2019] [Revised: 12/04/2020] [Accepted: 08/14/2020] [Indexed: 11/18/2022] Open
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
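The "local" formulation can be caricatured with a logistic rate function (arbitrary steepness and threshold; the paper's exact soft-threshold form is not reproduced here): firing becomes a smooth, monotone probability of membrane potential that approaches the hard rule at the extremes.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_spike_prob(v, thresh=1.0, beta=8.0):
    """'Soft' threshold: instantaneous firing probability grows smoothly
    (here logistically) with membrane potential v, instead of a hard
    fire/no-fire rule at v = thresh."""
    return 1.0 / (1.0 + np.exp(-beta * (v - thresh)))

v = np.linspace(0.0, 2.0, 5)
p = soft_spike_prob(v)
assert np.all(np.diff(p) > 0)           # monotone in membrane potential
assert p[0] < 0.01 and p[-1] > 0.99     # recovers the hard rule at the extremes

# Empirical spike rate from Bernoulli draws matches the firing probability.
spikes = rng.random(100_000) < soft_spike_prob(np.full(100_000, 1.1))
assert abs(spikes.mean() - soft_spike_prob(1.1)) < 0.01
```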
Affiliation(s)
- Jonathan W. Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
34
Wang Q, Banerjee S, So C, Qiu C, Lam HIC, Tse D, Völgyi B, Pan F. Unmasking inhibition prolongs neuronal function in retinal degeneration mouse model. FASEB J 2020; 34:15282-15299. [PMID: 32985731 DOI: 10.1096/fj.202001315rr] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 08/25/2020] [Accepted: 09/08/2020] [Indexed: 11/11/2022]
Abstract
All neurodegenerative diseases involve a relatively long timeframe from the onset of the disease to the complete loss of function. Extending this timeframe, even at a reduced level of function, would improve the quality of life of patients with these devastating diseases. The retina, as part of the central nervous system and a frequent site of distressing neurodegenerative diseases, provides an ideal model for investigating the feasibility of extending the functional timeframe through pharmacologic intervention. Retinitis Pigmentosa (RP) is a group of blinding diseases. Although the rate of progression and degree of visual loss vary, there is usually a prolonged period before patients totally lose their photoreceptors and vision. It is believed that inhibitory mechanisms remain intact and may become relatively strong after the gradual loss of photoreceptors in RP patients. Therefore, the light-evoked responses of retinal ganglion cells and visual information processing in retinal circuits could be "unmasked" by blocking these inhibitory mechanisms, restoring some level of visual function. Our results indicate that when inhibition in the inner retina was unmasked in the retina of the rd10 mouse (the well-characterized, clinically relevant mouse model of RP), light-evoked responses could be induced in many retinal ganglion cells, restoring their normal light sensitivity. The GABAA receptor plays a major role in this masking inhibition. The ERG b-wave and behavioral tests of spatial vision partly recovered after application of picrotoxin (PTX). Hence, removing retinal inhibition unmasks signalling mediated by surviving cones, thereby restoring some degree of visual function. These results may offer a novel strategy for restoring visual function via the surviving cones in RP patients and in other gradual, progressive neurodegenerative diseases.
Affiliation(s)
- Qin Wang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Seema Banerjee
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chunghim So
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chunting Qiu
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Hang-I Christie Lam
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Dennis Tse
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Béla Völgyi
- Department of Experimental Zoology and Neurobiology, Szentágothai Research Centre, MTA NAP Retinal Electrical Synapses Research Group, University of Pécs, Pécs, Hungary
- Feng Pan
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- The Centre for Eye and Vision Research, Hong Kong
35
Li Q, Gao J, Zhang Z, Huang Q, Wu Y, Xu B. Distinguishing Epileptiform Discharges From Normal Electroencephalograms Using Adaptive Fractal and Network Analysis: A Clinical Perspective. Front Physiol 2020; 11:828. [PMID: 32903770 PMCID: PMC7438848 DOI: 10.3389/fphys.2020.00828] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 06/22/2020] [Indexed: 01/03/2023] Open
Abstract
Epilepsy is one of the most common disorders of the brain. Clinically, to corroborate an epileptic seizure-like symptom and to localize the seizure, electroencephalogram (EEG) data are often visually examined by a clinical doctor to detect the presence of epileptiform discharges. Epileptiform discharges are transient waveforms lasting from several tens to hundreds of milliseconds and are mainly divided into seven types. It is important to develop systematic approaches to accurately distinguish these waveforms from normal control ones. This is a difficult task if one wishes to develop first-principle rather than black-box approaches, since clinically used scalp EEGs usually contain substantial noise and artifacts. To solve this problem, we analyzed 640 multi-channel EEG segments, each 4 s long. Among these segments, 540 are short epileptiform discharges, and 100 are from healthy controls. We have proposed two approaches for distinguishing epileptiform discharges from normal EEGs. The first is based on the Signal Range and the EEG's long-range correlation properties, characterized by the Hurst parameter H extracted with adaptive fractal analysis (AFA), which can also maximally suppress the effects of noise and various kinds of artifacts. The second is based on networks constructed from three aspects of the scalp EEG signals: the Signal Range, the energy of the alpha-wave component, and the EEG's long-range correlation properties. The networks are further analyzed using singular value decomposition (SVD), and the square of the first singular value is used to construct features that distinguish epileptiform discharges from normal controls. Using a Random Forest (RF) classifier, our approaches achieve very high accuracy in distinguishing epileptiform discharges from normal control ones, and are thus very promising for clinical use. The network-based approach is also used to infer the localization of each type of epileptiform discharge; the sub-networks representing the most likely location differ among the seven types.
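As a stand-in for the Hurst-parameter step, the sketch below uses standard detrended fluctuation analysis (DFA) rather than the paper's adaptive fractal analysis, which detrends with adaptively determined global trends instead of the local polynomial fits used here; for white noise the estimated exponent should be near 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)

def dfa_hurst(x, scales=(16, 32, 64, 128, 256)):
    """Estimate the Hurst parameter H by detrended fluctuation analysis:
    integrate the signal, detrend it linearly within windows of size s,
    and fit the scaling F(s) ~ s**H of the residual fluctuations."""
    y = np.cumsum(x - x.mean())                  # integrated profile
    fluct = []
    for s in scales:
        f2 = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)         # local linear detrend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # Slope of the log-log fit estimates H.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

h = dfa_hurst(rng.standard_normal(8192))
assert 0.4 < h < 0.6   # white noise has H ≈ 0.5
```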
Affiliation(s)
- Qiong Li
- School of Computer, Electronics and Information, Guangxi University, Nanning, China
- Jianbo Gao
- Center for Geodata and Analysis, Faculty of Geographical Science, Beijing Normal University, Beijing, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- International College, Guangxi University, Nanning, Guangxi, China
- Ziwen Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning, China
- Qi Huang
- The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Yuan Wu
- The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
36
Lu Z, Bassett DS. Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems. Chaos 2020; 30:063133. [PMID: 32611103 DOI: 10.1063/5.0004344] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Accepted: 05/25/2020] [Indexed: 06/11/2023]
Abstract
Regardless of the marked differences between biological and artificial neural systems, one fundamental similarity is that they are essentially dynamical systems that can learn to imitate other dynamical systems whose governing equations are unknown. The brain is able to learn the dynamic nature of the physical world via experience; analogously, artificial neural systems such as reservoir computing networks (RCNs) can learn the long-term behavior of complex dynamical systems from data. Recent work has shown that the mechanism of such learning in RCNs is invertible generalized synchronization (IGS). Yet, whether IGS is also the mechanism of learning in biological systems remains unclear. To shed light on this question, we draw inspiration from features of the human brain to propose a general and biologically feasible learning framework that utilizes IGS. To evaluate the framework's relevance, we construct several distinct neural network models as instantiations of the proposed framework. Regardless of their particularities, these neural network models can consistently learn to imitate other dynamical processes with a biologically feasible adaptation rule that modulates the strength of synapses. Further, we observe and theoretically explain the spontaneous emergence of four distinct phenomena reminiscent of cognitive functions: (i) learning multiple dynamics; (ii) switching among the imitations of multiple dynamical systems, either spontaneously or driven by external cues; (iii) filling-in missing variables from incomplete observations; and (iv) deciphering superimposed input from different dynamical systems. Collectively, our findings support the notion that biological neural networks can learn the dynamic nature of their environment through the mechanism of IGS.
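A small reservoir computing network makes the "learning to imitate unknown dynamics" setup concrete; the sizes, spectral radius, and sinusoidal stand-in signal below are all illustrative assumptions, not the paper's models. A random recurrent network is driven by the signal, and only a linear readout is trained (by ridge regression) to predict the signal's next value.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_reservoir(n, spectral_radius=0.9):
    """Random recurrent weight matrix, rescaled to a target spectral radius."""
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    return W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))

n = 200
W = make_reservoir(n)
W_in = rng.standard_normal((n, 1))

# Driving signal: a simple oscillation standing in for an unknown dynamical system.
u = np.sin(0.1 * np.arange(3000))[:, None]

# Run the reservoir (tanh units) driven by u, recording its states.
x = np.zeros(n)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t])
    states.append(x.copy())
states = np.array(states)

# Train a ridge-regression readout to predict the NEXT input from the state.
X, Y = states[500:-1], u[501:]          # discard the initial transient
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ Y)
err = np.mean((X @ W_out - Y) ** 2)     # next-step prediction error
assert err < 1e-4
```

Closing the loop (feeding the readout's prediction back as input) is what lets such a network autonomously imitate the learned dynamics, which is where the invertible generalized synchronization analysis applies.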
Affiliation(s)
- Zhixin Lu
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Danielle S Bassett
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
37
Ashhad S, Feldman JL. Emergent Elements of Inspiratory Rhythmogenesis: Network Synchronization and Synchrony Propagation. Neuron 2020; 106:482-497.e4. [PMID: 32130872 PMCID: PMC11221628 DOI: 10.1016/j.neuron.2020.02.005] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 01/15/2020] [Accepted: 02/07/2020] [Indexed: 12/22/2022]
Abstract
We assessed the mechanism of mammalian breathing rhythmogenesis in the preBötzinger complex (preBötC) in vitro, where experimental tests remain inconsistent with hypotheses of canonical rhythmogenic cellular or synaptic mechanisms, i.e., pacemaker neurons or inhibition. Under rhythmic conditions, in each cycle, an inspiratory burst emerges as (presumptive) preBötC rhythmogenic neurons transition from aperiodic uncorrelated population spike activity to become increasingly synchronized during preinspiration (for ∼50-500 ms), which can trigger inspiratory bursts that propagate to motoneurons. In nonrhythmic conditions, antagonizing GABAA receptors can initiate this synchronization while inducing a higher conductance state in nonrhythmogenic preBötC output neurons. Our analyses uncover salient features of preBötC network dynamics where inspiratory bursts arise when and only when the preBötC rhythmogenic subpopulation strongly synchronizes to drive output neurons. Furthermore, downstream propagation of preBötC network activity, ultimately to motoneurons, is dependent on the strength of input synchrony onto preBötC output neurons exemplifying synchronous propagation of network activity.
Affiliation(s)
- Sufyan Ashhad
- Department of Neurobiology, University of California, Los Angeles, Box 951763, Los Angeles, CA 90095-1763, USA
- Jack L Feldman
- Department of Neurobiology, University of California, Los Angeles, Box 951763, Los Angeles, CA 90095-1763, USA
38
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. [PMID: 31247574 DOI: 10.1109/tnnls.2019.2919662] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
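Generic norm-based gradient clipping illustrates the flavor of such a threshold rule; the paper's spike gradient threshold rule applies specifically to SpikeProp's spike-time gradients, and its exact form is not reproduced here.

```python
import numpy as np

def clip_gradient(grad, threshold=5.0):
    """Threshold rule on gradients: rescale whenever the norm exceeds a bound,
    a generic guard against the gradient-exploding problem (a stand-in for the
    paper's spike-gradient threshold rule)."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        return grad * (threshold / norm)
    return grad

g = np.array([30.0, 40.0])                 # norm 50, exceeds the bound
clipped = clip_gradient(g)
assert np.isclose(np.linalg.norm(clipped), 5.0)            # rescaled to the bound
assert np.allclose(clip_gradient(np.array([0.3, 0.4])),    # small gradients
                   [0.3, 0.4])                             # pass unchanged
```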
39
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. [PMID: 32176682 PMCID: PMC7135338 DOI: 10.1371/journal.pcbi.1007692] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 04/06/2020] [Accepted: 01/27/2020] [Indexed: 12/31/2022] Open
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks’ convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations. Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. 
After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the error between the desired output signal and the actual output signal.
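The voltage-as-coding-error idea summarized in this abstract can be illustrated with a minimal simulation. This is a simplified sketch of this family of efficient spike-coding models (greedy one-spike-per-step update, illustrative weights and time constants), not the paper's exact derivation:

```python
import numpy as np

def run_efficient_coder(signal, weights, lam=0.1, dt=0.1):
    """Minimal efficient spike-coding sketch: each neuron's voltage is its
    weighted share of the instantaneous coding error (signal minus decoded
    estimate), and a spike fires greedily whenever that error exceeds the
    threshold w_i**2 / 2. Parameters are illustrative assumptions."""
    weights = np.asarray(weights, float)
    thresholds = weights ** 2 / 2.0
    x_hat = 0.0                                  # decoded estimate
    estimate = np.zeros(len(signal))
    spikes = np.zeros((len(signal), len(weights)))
    for t, x in enumerate(signal):
        v = weights * (x - x_hat)                # voltage = projected coding error
        i = int(np.argmax(v - thresholds))       # most supra-threshold unit
        if v[i] > thresholds[i]:
            spikes[t, i] = 1.0
            x_hat += weights[i]                  # each spike corrects the estimate
        x_hat -= dt * lam * x_hat                # leaky decay of the readout
        estimate[t] = x_hat
    return estimate, spikes
```

With a constant input, the decoded estimate settles into a narrow band around the signal, and spikes fire only when the coding error crosses threshold.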
Affiliation(s)
- Wieland Brendel
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Tübingen AI Center, University of Tübingen, Germany
- Ralph Bourdoukan
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Pietro Vertechi
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- * E-mail: (CKM); (SD)
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- * E-mail: (CKM); (SD)
40
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356 DOI: 10.1016/j.neunet.2020.02.011] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 12/15/2019] [Accepted: 02/20/2020] [Indexed: 01/08/2023]
Abstract
As a new brain-inspired computational model of artificial neural networks, spiking neural networks encode and process neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intrinsically discontinuous and implicitly nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
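One widely reviewed family of supervised rules for spike trains combines a supervisory error signal with a presynaptic eligibility trace. The sketch below is ReSuMe-flavoured (potentiate at desired output spike times, depress at actual ones), with assumed learning rate and trace constant; it is an illustration of the idea, not any one algorithm from the review:

```python
import numpy as np

def resume_like_update(w, pre_spikes, desired, actual, lr=0.01, tau=5.0):
    """One supervised weight update on binary spike trains (ReSuMe-style
    sketch): the error (desired - actual) at each time step scales an
    exponentially decaying presynaptic eligibility trace. lr and tau are
    illustrative assumptions."""
    T, n = pre_spikes.shape
    trace = np.zeros(n)          # per-synapse presynaptic trace
    dw = np.zeros(n)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + pre_spikes[t]
        dw += lr * (desired[t] - actual[t]) * trace
    return w + dw
```

A synapse whose presynaptic neuron fired shortly before a desired-but-missing output spike is potentiated; one that preceded a spurious output spike is depressed.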
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China.
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
41
Multi-level anomalous Hall resistance in a single Hall cross for the applications of neuromorphic device. Sci Rep 2020; 10:1285. [PMID: 31992806 PMCID: PMC6987114 DOI: 10.1038/s41598-020-58223-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Accepted: 12/13/2019] [Indexed: 12/13/2022] Open
Abstract
We demonstrate the process of obtaining memristive multi-state Hall resistance (RH) changes in a single Hall cross (SHC) structure. Moreover, the working mechanism successfully mimics the behavior of biological neural systems. The motion of a domain wall (DW) in the SHC was used to control the increase (or decrease) of the RH amplitude. The primary synaptic functions, such as long-term potentiation (LTP), long-term depression (LTD), and spike-timing-dependent plasticity (STDP), could then be emulated by regulating RH. Programmable magnetic field pulses of varying intensity and duration were applied to adjust RH. These results show that analog readings of DW movement closely resemble changes in synaptic weight and hold great potential for bioinspired neuromorphic computing.
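The synaptic behaviours named in the abstract can be sketched in software: pulse-by-pulse programming of a bounded, multi-level resistance (LTP/LTD) and a conventional exponential STDP window. Step size, bounds, and window parameters are illustrative assumptions, not the measured device values:

```python
import numpy as np

R_MIN, R_MAX = 0.0, 1.0  # normalised Hall-resistance bounds (illustrative)

def apply_pulses(r, n_pulses, polarity, step=0.05):
    """Each field pulse nudges the normalised resistance up (LTP,
    polarity=+1) or down (LTD, polarity=-1), saturating at the bounds,
    which yields discrete multi-level states."""
    for _ in range(n_pulses):
        r = min(R_MAX, max(R_MIN, r + polarity * step))
    return r

def stdp_delta(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Textbook exponential STDP window, assumed here: pre-before-post
    (dt_ms > 0) potentiates, post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)
```

Five potentiating pulses from the mid-state, for example, step the resistance through five distinct levels before saturation.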
42
Kastanenka KV, Moreno-Bote R, De Pittà M, Perea G, Eraso-Pichot A, Masgrau R, Poskanzer KE, Galea E. A roadmap to integrate astrocytes into Systems Neuroscience. Glia 2020; 68:5-26. [PMID: 31058383 PMCID: PMC6832773 DOI: 10.1002/glia.23632] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 04/08/2019] [Accepted: 04/09/2019] [Indexed: 12/14/2022]
Abstract
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations in a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
Affiliation(s)
- Ksenia V. Kastanenka
- Department of Neurology, MassGeneral Institute for Neurodegenerative Diseases, Massachusetts General Hospital and Harvard Medical School, Massachusetts 02129, USA
- Rubén Moreno-Bote
- Department of Information and Communications Technologies, Center for Brain and Cognition and Universitat Pompeu Fabra, 08018 Barcelona, Spain
- ICREA, 08010 Barcelona, Spain
- Abel Eraso-Pichot
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Roser Masgrau
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Kira E. Poskanzer
- Department of Biochemistry & Biophysics, Neuroscience Graduate Program, and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, California 94143, USA
- Equally contributing authors
- Elena Galea
- ICREA, 08010 Barcelona, Spain
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Equally contributing authors
43
Bridi MCD, Zong FJ, Min X, Luo N, Tran T, Qiu J, Severin D, Zhang XT, Wang G, Zhu ZJ, He KW, Kirkwood A. Daily Oscillation of the Excitation-Inhibition Balance in Visual Cortical Circuits. Neuron 2019; 105:621-629.e4. [PMID: 31831331 DOI: 10.1016/j.neuron.2019.11.011] [Citation(s) in RCA: 89] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Revised: 09/16/2019] [Accepted: 11/07/2019] [Indexed: 12/16/2022]
Abstract
A balance between synaptic excitation and inhibition (E/I balance) maintained within a narrow window is widely regarded to be crucial for cortical processing. In line with this idea, the E/I balance is reportedly comparable across neighboring neurons, behavioral states, and developmental stages and altered in many neurological disorders. Motivated by these ideas, we examined whether synaptic inhibition changes over the 24-h day to compensate for the well-documented sleep-dependent changes in synaptic excitation. We found that, in pyramidal cells of visual and prefrontal cortices and hippocampal CA1, synaptic inhibition also changes over the 24-h light/dark cycle but, surprisingly, in the opposite direction of synaptic excitation. Inhibition is upregulated in the visual cortex during the light phase in a sleep-dependent manner. In the visual cortex, these changes in the E/I balance occurred in feedback, but not feedforward, circuits. These observations open new and interesting questions on the function and regulation of the E/I balance.
Affiliation(s)
- Michelle C D Bridi
- Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Fang-Jiao Zong
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xia Min
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Nancy Luo
- Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Trinh Tran
- Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Jiaqian Qiu
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Daniel Severin
- Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Xue-Ting Zhang
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Guanglin Wang
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China
- Zheng-Jiang Zhu
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Kai-Wen He
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China.
- Alfredo Kirkwood
- Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA.
44
Koren V, Andrei AR, Hu M, Dragoi V, Obermayer K. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials. PLoS One 2019; 14:e0222649. [PMID: 31622346 PMCID: PMC6797168 DOI: 10.1371/journal.pone.0222649] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Accepted: 09/03/2019] [Indexed: 11/18/2022] Open
Abstract
We propose a new model of the read-out of spike trains that exploits the multivariate structure of responses of neural ensembles. Adopting the point of view of a read-out neuron that receives synaptic inputs from a population of projecting neurons, synaptic inputs are weighted with a heterogeneous set of weights. We propose that the synaptic weights reflect the role of each neuron within the population for the computational task that the network has to solve. In our case, the computational task is discrimination of binary classes of stimuli, and the weights are chosen to maximize the discrimination capacity of the network. We compute the synaptic weights as the feature weights of an optimal linear classifier. Once the weights have been learned, they weight the spike trains and allow the post-synaptic current that modulates the spiking probability of the read-out unit to be computed in real time. We apply the model to parallel spike trains from areas V1 and V4 of the behaving macaque monkey (Macaca mulatta) while the animal is engaged in a visual discrimination task with binary classes of stimuli. The read-out of spike trains with our model discriminates the two classes of stimuli, while the population PSTH entirely fails to do so. Splitting the neurons into two subpopulations according to the sign of the weight, we show that the population signals of the two functional subnetworks are negatively correlated. Distinguishing the superficial, middle, and deep layers of the cortex, we show that in both V1 and V4 the superficial layers are the most important in discriminating the binary classes of stimuli.
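The two-stage read-out described here can be sketched in a few lines: learn linear weights from population spike counts, then apply them to exponentially filtered spike trains to obtain a read-out current. Least squares stands in for the optimal linear classifier (the paper's regularised classifier is not reproduced), and the kernel time constant is an assumption:

```python
import numpy as np

def readout_weights(counts, labels):
    """Least-squares stand-in for an optimal linear classifier: map
    population spike counts (trials x neurons) to +/-1 class labels.
    Returns per-neuron weights and a bias."""
    X = np.column_stack([counts, np.ones(len(counts))])  # append bias column
    sol, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return sol[:-1], sol[-1]

def postsynaptic_current(spikes, w, tau=10.0):
    """Convolve each neuron's spike train (time x neurons) with an
    exponential kernel and sum with the learned weights, giving the
    read-out current over time."""
    T, n = spikes.shape
    filtered = np.zeros((T, n))
    trace = np.zeros(n)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + spikes[t]
        filtered[t] = trace
    return filtered @ w
```

The sign of the learned weight plays the role of the functional subpopulation label: neurons with positive weights push the current toward one class, negative weights toward the other.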
Affiliation(s)
- Veronika Koren
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- * E-mail:
- Ariana R. Andrei
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Ming Hu
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Valentin Dragoi
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Klaus Obermayer
- Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
45
Weissenberger F, Gauy MM, Zou X, Steger A. Mutual Inhibition with Few Inhibitory Cells via Nonlinear Inhibitory Synaptic Interaction. Neural Comput 2019; 31:2252-2265. [PMID: 31525311 DOI: 10.1162/neco_a_01230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In computational neural network models, neurons are usually allowed to excite some and inhibit other neurons, depending on the weight of their synaptic connections. The traditional way to transform such networks into networks that obey Dale's law (i.e., a neuron can either excite or inhibit) is to accompany each excitatory neuron with an inhibitory one through which inhibitory signals are mediated. However, this requires an equal number of excitatory and inhibitory neurons, whereas a realistic number of inhibitory neurons is much smaller. In this letter, we propose a model of nonlinear interaction of inhibitory synapses on dendritic compartments of excitatory neurons that allows the excitatory neurons to mediate inhibitory signals through a subset of the inhibitory population. With this construction, the number of required inhibitory neurons can be reduced tremendously.
Affiliation(s)
- Felix Weissenberger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Marcelo Matheus Gauy
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Xun Zou
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
46
Nobukawa S, Nishimura H, Yamanishi T. Temporal-specific complexity of spiking patterns in spontaneous activity induced by a dual complex network structure. Sci Rep 2019; 9:12749. [PMID: 31484990 PMCID: PMC6726653 DOI: 10.1038/s41598-019-49286-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Accepted: 08/22/2019] [Indexed: 11/08/2022] Open
Abstract
Temporal fluctuation of neural activity in the brain has an important function in optimal information processing. Spontaneous activity is a source of such fluctuation. The distribution of excitatory postsynaptic potentials (EPSPs) between cortical pyramidal neurons can follow a log-normal distribution. Recent studies have shown that networks connected by weak synapses exhibit characteristics of a random network, whereas networks connected by strong synapses have small-world characteristics of small path lengths and large cluster coefficients. To investigate the relationship between the temporal complexity of spontaneous activity and the structural duality of synaptic connections, we executed a simulation study using a leaky integrate-and-fire spiking neural network with a log-normal synaptic weight distribution for the EPSPs and a duality of synaptic connectivity depending on synaptic weight. We conducted multiscale entropy analysis of the temporal spiking activity. Our simulation demonstrated that, when strong synaptic connections approach a small-world network, specific spiking patterns arise during irregular spatio-temporal spiking activity, and the complexity at the large temporal scale (i.e., slow frequency) is enhanced. Moreover, we confirmed through a surrogate data analysis that slow temporal dynamics reflect a deterministic process in the spiking neural networks. This modelling approach may improve the understanding of the spatio-temporal complex neural activity in the brain.
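Multiscale entropy, the analysis used in this study, is sample entropy computed on progressively coarse-grained copies of a signal. Below is a plain textbook implementation (standard parameters m = 2, r = 0.2·std), not the authors' exact analysis pipeline:

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-graining step of multiscale entropy: average non-overlapping
    windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x, float)[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the ratio of (m+1)-length to m-length
    template matches within tolerance r * std(x) (Chebyshev distance)."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def match_count(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Evaluating sample entropy on coarse-grained copies at increasing scales gives the multiscale curve; a regular signal scores lower than noise at fine scales.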
Affiliation(s)
- Sou Nobukawa
- Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan.
- Haruhiko Nishimura
- Graduate School of Applied Informatics, University of Hyogo, 7-1-28 Chuo-ku, Kobe, Hyogo, 650-8588, Japan
- Teruya Yamanishi
- AI & IoT Center, Department of Management and Information Sciences, Fukui University of Technology, 3-6-1 Gakuen, Fukui, 910-8505, Japan
47
Training dynamically balanced excitatory-inhibitory networks. PLoS One 2019; 14:e0220547. [PMID: 31393909 PMCID: PMC6687153 DOI: 10.1371/journal.pone.0220547] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 07/19/2019] [Indexed: 12/02/2022] Open
Abstract
The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale’s law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale’s law and response variability.
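One common way to keep excitatory and inhibitory populations separate during training is to project the weight matrix back onto Dale's-law constraints after each optimisation step. This is a generic sketch of such a projection (the paper's specific constrained-optimisation method is not reproduced here):

```python
import numpy as np

def dale_project(W, n_exc):
    """Project a recurrent weight matrix onto Dale's-law constraints:
    presynaptic columns 0..n_exc-1 are excitatory and are clipped to be
    non-negative; the remaining columns are inhibitory and are clipped
    to be non-positive. Returns a new matrix; the input is untouched."""
    W = W.copy()
    W[:, :n_exc] = np.maximum(W[:, :n_exc], 0.0)
    W[:, n_exc:] = np.minimum(W[:, n_exc:], 0.0)
    return W
```

Applied after every gradient or target-based update, the projection guarantees that each neuron's outgoing weights keep a single sign throughout training.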
48
Lam M, Hill WD, Trampush JW, Yu J, Knowles E, Davies G, Stahl E, Huckins L, Liewald DC, Djurovic S, Melle I, Sundet K, Christoforou A, Reinvang I, DeRosse P, Lundervold AJ, Steen VM, Espeseth T, Räikkönen K, Widen E, Palotie A, Eriksson JG, Giegling I, Konte B, Hartmann AM, Roussos P, Giakoumaki S, Burdick KE, Payton A, Ollier W, Chiba-Falek O, Attix DK, Need AC, Cirulli ET, Voineskos AN, Stefanis NC, Avramopoulos D, Hatzimanolis A, Arking DE, Smyrnis N, Bilder RM, Freimer NA, Cannon TD, London E, Poldrack RA, Sabb FW, Congdon E, Conley ED, Scult MA, Dickinson D, Straub RE, Donohoe G, Morris D, Corvin A, Gill M, Hariri AR, Weinberger DR, Pendleton N, Bitsios P, Rujescu D, Lahti J, Le Hellard S, Keller MC, Andreassen OA, Deary IJ, Glahn DC, Malhotra AK, Lencz T. Pleiotropic Meta-Analysis of Cognition, Education, and Schizophrenia Differentiates Roles of Early Neurodevelopmental and Adult Synaptic Pathways. Am J Hum Genet 2019; 105:334-350. [PMID: 31374203 PMCID: PMC6699140 DOI: 10.1016/j.ajhg.2019.06.012] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Accepted: 06/12/2019] [Indexed: 12/12/2022] Open
Abstract
Susceptibility to schizophrenia is inversely correlated with general cognitive ability at both the phenotypic and the genetic level. Paradoxically, a modest but consistent positive genetic correlation has been reported between schizophrenia and educational attainment, despite the strong positive genetic correlation between cognitive ability and educational attainment. Here we leverage published genome-wide association studies (GWASs) in cognitive ability, education, and schizophrenia to parse biological mechanisms underlying these results. Association analysis based on subsets (ASSET), a pleiotropic meta-analytic technique, allowed jointly associated loci to be identified and characterized. Specifically, we identified subsets of variants associated in the expected ("concordant") direction across all three phenotypes (i.e., greater risk for schizophrenia, lower cognitive ability, and lower educational attainment); these were contrasted with variants that demonstrated the counterintuitive ("discordant") relationship between education and schizophrenia (i.e., greater risk for schizophrenia and higher educational attainment). ASSET analysis revealed 235 independent loci associated with cognitive ability, education, and/or schizophrenia at p < 5 × 10-8. Pleiotropic analysis successfully identified more than 100 loci that were not significant in the input GWASs. Many of these have been validated by larger, more recent single-phenotype GWASs. Leveraging the joint genetic correlations of cognitive ability, education, and schizophrenia, we were able to dissociate two distinct biological mechanisms-early neurodevelopmental pathways that characterize concordant allelic variation and adulthood synaptic pruning pathways-that were linked to the paradoxical positive genetic association between education and schizophrenia. 
Furthermore, genetic correlation analyses revealed that these mechanisms contribute not only to the etiopathogenesis of schizophrenia but also to the broader biological dimensions implicated in both general health outcomes and psychiatric illness.
Affiliation(s)
- Max Lam
- Institute of Mental Health, Singapore, 539747, Singapore; Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Stanley Center for Psychiatric Research, Broad Institute of Harvard and MIT, Cambridge, MA 02142, USA
- W David Hill
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Joey W Trampush
- Department of Psychiatry and the Behavioral Sciences, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Jin Yu
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA
- Emma Knowles
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06511, USA
- Gail Davies
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Eli Stahl
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Laura Huckins
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- David C Liewald
- Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Srdjan Djurovic
- Department of Medical Genetics, Oslo University Hospital, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway
- Ingrid Melle
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway
- Kjetil Sundet
- Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway; Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
- Andrea Christoforou
- Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, Bergen 7804, N-5020 Bergen, Norway
- Ivar Reinvang
- Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
- Pamela DeRosse
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA
- Astri J Lundervold
- Department of Biological and Medical Psychology, University of Bergen, 7807, N-5020, Norway
- Vidar M Steen
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, Bergen 7804, N-5020 Bergen, Norway
- Thomas Espeseth
- Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway; Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
- Katri Räikkönen
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, 00014, Finland
- Elisabeth Widen
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014, Finland
- Aarno Palotie
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014, Finland; Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Cambridge CB10 1SA, United Kingdom; Department of Medical Genetics, University of Helsinki and University Central Hospital, Helsinki, 00014, Finland
- Johan G Eriksson
- Department of General Practice, University of Helsinki and Helsinki University Hospital, Helsinki, 00014, Finland; National Institute for Health and Welfare, Helsinki FI-00271, Finland; Folkhälsan Research Center, Helsinki 00290, Finland
- Ina Giegling
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
- Bettina Konte
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
- Annette M Hartmann
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
- Panos Roussos
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Mental Illness Research, Education, and Clinical Center (VISN 2), James J. Peters VA Medical Center, Bronx, NY 10468, USA
- Katherine E Burdick
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Mental Illness Research, Education, and Clinical Center (VISN 2), James J. Peters VA Medical Center, Bronx, NY 10468, USA; Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115
- Antony Payton
- Division of Informatics, Imaging, and Data Sciences, School of Health Sciences, University of Manchester, Manchester M139NT, United Kingdom
- William Ollier
- Centre for Epidemiology, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester M139PL, United Kingdom; School of Healthcare Sciences, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
- Ornit Chiba-Falek
- Department of Neurology, Bryan Alzheimer Disease Research Center, Duke University Medical Center, Durham, NC 27705, USA; Center for Genomic and Computational Biology, Duke University Medical Center, Durham, NC 27705, USA
- Deborah K Attix
- Department of Neurology, Bryan Alzheimer Disease Research Center, Duke University Medical Center, Durham, NC 27705, USA; Center for Genomic and Computational Biology, Duke University Medical Center, Durham, NC 27705, USA; Psychiatry and Behavioral Sciences, Division of Medical Psychology, Duke University Medical Center, Durham, NC 27708, USA; Department of Neurology, Duke University Medical Center, Durham, NC 27708, USA
- Anna C Need
- Division of Brain Sciences, Department of Medicine, Imperial College, London W12 0NN, UK
- Aristotle N Voineskos
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada
- Nikos C Stefanis
- Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece; University Mental Health Research Institute, Athens 115 27, Greece; Neurobiology Research Institute, Theodor-Theohari Cozzika Foundation, Athens, Greece
- Dimitrios Avramopoulos
- Department of Psychiatry, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Alex Hatzimanolis
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada; Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece; University Mental Health Research Institute, Athens 115 27, Greece
- Dan E Arking
- Department of Psychiatry, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Nikolaos Smyrnis
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada; Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece
- Robert M Bilder
- McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Nelson A Freimer
- McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Tyrone D Cannon
- Department of Psychology, Yale University, New Haven, CT 06511, USA
- Edythe London
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA 90024, USA
- Fred W Sabb
- Robert and Beverly Lewis Center for Neuroimaging, University of Oregon, Eugene, OR, 97401, USA
- Eliza Congdon
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA 90024, USA
- Matthew A Scult
- Laboratory of NeuroGenetics, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Dwight Dickinson
- Clinical and Translational Neuroscience Branch, Intramural Research Program, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20814, USA
- Richard E Straub
- Lieber Institute for Brain Development, Johns Hopkins University Medical Campus, Baltimore, MD 21205, USA
- Gary Donohoe
- Neuroimaging, Cognition, and Genomics Centre, School of Psychology and Discipline of Biochemistry, National University of Ireland, Galway, Ireland
- Derek Morris
- Neuroimaging, Cognition, and Genomics Centre, School of Psychology and Discipline of Biochemistry, National University of Ireland, Galway, Ireland
- Aiden Corvin
- Neuropsychiatric Genetics Research Group, Department of Psychiatry, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Michael Gill
- Neuropsychiatric Genetics Research Group, Department of Psychiatry, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Ahmad R Hariri
- Laboratory of NeuroGenetics, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Daniel R Weinberger
- Lieber Institute for Brain Development, Johns Hopkins University Medical Campus, Baltimore, MD 21205, USA
- Neil Pendleton
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester Academic Health Science Centre, Salford Royal NHS Foundation Trust, Manchester M13 9PL, United Kingdom
- Panos Bitsios
- Department of Psychiatry and Behavioral Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete GR-71003, Greece
- Dan Rujescu
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
- Jari Lahti
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, 00014, Finland; Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki 00014, Finland
- Stephanie Le Hellard
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen, Norway; Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, N-5020 Bergen, Norway
- Matthew C Keller
- Institute for Behavioral Genetics, University of Colorado, Boulder, CO 80303, USA
- Ole A Andreassen
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway
- Ian J Deary
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- David C Glahn
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06511, USA
- Anil K Malhotra
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY 11549, USA; Center for Psychiatric Neuroscience, Feinstein Institute for Medical Research, Manhasset, NY 11030, USA
- Todd Lencz
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY 11549, USA; Center for Psychiatric Neuroscience, Feinstein Institute for Medical Research, Manhasset, NY 11030, USA.
49
Ott T, Masset P, Kepecs A. The Neurobiology of Confidence: From Beliefs to Neurons. Cold Spring Harb Symp Quant Biol 2019; 83:9-16. [PMID: 31270145 DOI: 10.1101/sqb.2018.83.038794] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
How confident are you? As humans, aware of our subjective sense of confidence, we can readily answer. Knowing your level of confidence helps to optimize both routine decisions such as whether to go back and check if the front door was locked and momentous ones like finding a partner for life. Yet the inherently subjective nature of confidence has limited investigations by neurobiologists. Here, we provide an overview of recent advances in this field and lay out a conceptual framework that lets us translate psychological questions about subjective confidence into the language of neuroscience. We show how statistical notions of confidence provide a bridge between our subjective sense of confidence and confidence-guided behaviors in nonhuman animals, thus enabling the study of the underlying neurobiology. We discuss confidence as a core cognitive process that enables organisms to optimize behavior such as learning or resource allocation and that serves as the basis of metacognitive reasoning. These approaches place confidence on a solid footing and pave the way for a mechanistic understanding of how the brain implements confidence-based algorithms to guide behavior.
Affiliation(s)
- Torben Ott
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724, USA
- Paul Masset
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724, USA; Watson School of Biological Sciences, Cold Spring Harbor, New York 11724, USA; Department of Molecular and Cellular Biology & Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
- Adam Kepecs
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724, USA
50
Shine JM. Neuromodulatory Influences on Integration and Segregation in the Brain. Trends Cogn Sci 2019; 23:572-583. [PMID: 31076192 DOI: 10.1016/j.tics.2019.04.002] [Citation(s) in RCA: 150] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2018] [Revised: 04/01/2019] [Accepted: 04/04/2019] [Indexed: 12/20/2022]
Abstract
Cognitive function relies on the dynamic cooperation of specialized regions of the brain; however, the elements of the system responsible for coordinating this interaction remain poorly understood. In this Opinion article I argue that this capacity is mediated in part by competitive and cooperative dynamic interactions between two prominent metabotropic neuromodulatory systems - the cholinergic basal forebrain and the noradrenergic locus coeruleus (LC). I assert that activity in these projection nuclei regulates the amount of segregation and integration within the whole brain network by modulating the activity of a diverse set of specialized regions of the brain on a timescale relevant for cognition and attention.
Affiliation(s)
- James M Shine
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia.