1. Tian LY, Garzón KU, Rouse AG, Eldridge MAG, Schieber MH, Wang XJ, Tenenbaum JB, Freiwald WA. Neural representation of action symbols in primate frontal cortex. bioRxiv 2025:2025.03.03.641276. [PMID: 40093053; PMCID: PMC11908170; DOI: 10.1101/2025.03.03.641276]
Abstract
At the core of intelligence is proficiency in solving new problems, including those that differ dramatically from problems seen before. Problem-solving, in turn, depends on goal-directed generation of novel thoughts and behaviors [1], which has been proposed to rely on internal representations of discrete units, or symbols, and processes that can recombine them into a large set of possible composite representations [1-11]. Although this view has been influential in formulating cognitive-level explanations of behavior, definitive evidence for a neuronal substrate of symbols has remained elusive. Here, we identify a neural population encoding action symbols (internal, recombinable representations of discrete units of motor behavior) localized to a specific area of frontal cortex. In macaque monkeys performing a drawing-like task designed to assess recombination of learned action symbols into novel sequences, we found behavioral evidence for three critical features indicating that actions have an underlying symbolic representation: (i) invariance over low-level motor parameters; (ii) categorical structure, reflecting discrete classes of action; and (iii) recombination into novel sequences. In simultaneous neural recordings across motor, premotor, and prefrontal cortex, we found that planning-related population activity in ventral premotor cortex (PMv) encodes actions in a manner that, like behavior, reflects motor invariance, categorical structure, and recombination. Activity in no other recorded area exhibited this combination of properties. These findings reveal a neural representation of action symbols localized to PMv and thereby identify a putative neural substrate for symbolic cognitive operations.
Affiliation(s)
- Lucas Y Tian
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- Center for Brains, Minds and Machines, MIT & Rockefeller University
- Kedar U Garzón
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- Adam G Rouse
- Department of Neurosurgery, Department of Cell Biology & Physiology, University of Kansas Medical Center, Kansas City, KS, USA
- Mark A G Eldridge
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Marc H Schieber
- Department of Neurology, University of Rochester, Rochester, NY, USA
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY, USA
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Center for Brains, Minds and Machines, MIT & Rockefeller University
- Winrich A Freiwald
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- Center for Brains, Minds and Machines, MIT & Rockefeller University
2. Saranirad V, Dora S, McGinnity TM, Coyle D. CDNA-SNN: A New Spiking Neural Network for Pattern Classification Using Neuronal Assemblies. IEEE Trans Neural Netw Learn Syst 2025;36:2274-2287. [PMID: 38329858; DOI: 10.1109/tnnls.2024.3353571]
Abstract
Spiking neural networks (SNNs) mimic their biological counterparts more closely than their predecessors and are considered the third generation of artificial neural networks. Networks of spiking neurons have been shown to have higher computational capacity and lower power requirements than sigmoidal neural networks. This article introduces a new type of SNN that draws inspiration from, and incorporates concepts of, neuronal assemblies in the human brain. The proposed network, termed class-dependent neuronal activation-based SNN (CDNA-SNN), assigns each neuron learnable values known as CDNAs, which indicate the neuron's average relative spiking activity in response to samples from different classes. A new learning algorithm that categorizes the neurons into different class assemblies based on their CDNAs is also presented. These neuronal assemblies are trained via a novel method based on spike-timing-dependent plasticity (STDP) to have high activity for their associated class and a low firing rate for other classes. In addition, using CDNAs, a new type of STDP that controls the amount of plasticity based on the assemblies of the pre- and postsynaptic neurons is proposed. The performance of CDNA-SNN is evaluated on five datasets from the University of California, Irvine (UCI) machine learning repository, as well as MNIST and Fashion MNIST, using nested cross-validation for hyperparameter optimization. Our results show that CDNA-SNN significantly outperforms synaptic weight association training (SWAT) and SpikeProp on 3/5, and the self-regulating evolving spiking neural (SRESN) network on 2/5, UCI datasets, while using a significantly lower number of trainable parameters. Furthermore, compared with other supervised, fully connected SNNs, the proposed network reaches the best performance on Fashion MNIST and comparable performance on MNIST and neuromorphic-MNIST (N-MNIST), while using far fewer (1%-35%) parameters.
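The abstract's central quantity lends itself to a compact illustration. Below is a minimal sketch, assuming spike counts per neuron for each training sample are available; the function name and normalization scheme are ours, illustrative rather than the paper's exact definition.

```python
import numpy as np

def compute_cdnas(spike_counts, labels, n_classes):
    """spike_counts: (n_samples, n_neurons) spike counts; labels: (n_samples,) ints.
    Returns (n_neurons, n_classes): each neuron's mean activity per class,
    normalized across classes to give relative spiking activity."""
    n_neurons = spike_counts.shape[1]
    cdna = np.zeros((n_neurons, n_classes))
    for c in range(n_classes):
        cdna[:, c] = spike_counts[labels == c].mean(axis=0)
    cdna /= cdna.sum(axis=1, keepdims=True) + 1e-12   # relative activity per class
    return cdna

# Assign each neuron to the assembly of the class it fires most for:
# assemblies = compute_cdnas(counts, y, n_classes=10).argmax(axis=1)
```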
3. Dabagia M, Papadimitriou CH, Vempala SS. Computation With Sequences of Assemblies in a Model of the Brain. Neural Comput 2024;37:193-233. [PMID: 39383019; DOI: 10.1162/neco_a_01720]
Abstract
Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain's learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has subsequently been shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, and navigation, to name a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and that, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain's remarkable abilities to compute and learn, with sequences playing a vital role.
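To make the memorization-and-replay idea concrete, here is a toy sketch in the spirit of the model described above (not the authors' code): assemblies are fixed index sets, Hebbian-style potentiation links each assembly to its successor, and a k-winners-take-all readout replays the chain from any cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50                                   # neurons, assembly size
assemblies = [rng.choice(n, k, replace=False) for _ in range(5)]

W = np.zeros((n, n))
for a, b in zip(assemblies, assemblies[1:]):      # "repeated presentation":
    W[np.ix_(b, a)] += 1.0                        # potentiate a -> b synapses

def recall(cue_index):
    """Cue one assembly; k-winners-take-all replays the rest of the chain."""
    active = assemblies[cue_index]
    replay = [np.sort(active)]
    for _ in range(len(assemblies) - cue_index - 1):
        drive = W[:, active].sum(axis=1)          # recurrent input to each neuron
        active = np.argsort(drive)[-k:]           # the k most driven neurons fire
        replay.append(np.sort(active))
    return replay

# recall(0) reproduces the whole stored sequence; recall(2) replays from the middle.
```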
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Tech, Atlanta, GA 30332, U.S.A.
4. Pulvermüller F. Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks. Prog Neurobiol 2023;230:102511. [PMID: 37482195; PMCID: PMC10518464; DOI: 10.1016/j.pneurobio.2023.102511]
Abstract
Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany; Cluster of Excellence 'Matters of Activity', Humboldt Universität zu Berlin, 10099 Berlin, Germany.
5. Barabási DL, Bianconi G, Bullmore E, Burgess M, Chung S, Eliassi-Rad T, George D, Kovács IA, Makse H, Nichols TE, Papadimitriou C, Sporns O, Stachenfeld K, Toroczkai Z, Towlson EK, Zador AM, Zeng H, Barabási AL, Bernard A, Buzsáki G. Neuroscience Needs Network Science. J Neurosci 2023;43:5989-5995. [PMID: 37612141; PMCID: PMC10451115; DOI: 10.1523/jneurosci.1014-23.2023]
Abstract
The brain is a complex system comprising a myriad of interacting neurons, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such interconnected systems, offering a framework for integrating multiscale data and complexity. To date, network methods have significantly advanced functional imaging studies of the human brain and have facilitated the development of control theory-based applications for directing brain activity. Here, we discuss emerging frontiers for network neuroscience in the brain atlas era, addressing the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease. We underscore the importance of fostering interdisciplinary opportunities through workshops, conferences, and funding initiatives, such as supporting students and postdoctoral fellows with interests in both disciplines. By bringing together the network science and neuroscience communities, we can develop novel network-based methods tailored to neural circuits, paving the way toward a deeper understanding of the brain and its functions, as well as offering new challenges for network science.
Affiliation(s)
- Dániel L Barabási
- Biophysics Program, Harvard University, Cambridge, Massachusetts 02138
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Ginestra Bianconi
- School of Mathematical Sciences, Queen Mary University of London, London, E1 4NS, United Kingdom
- Alan Turing Institute, The British Library, London, NW1 2DB, United Kingdom
- Ed Bullmore
- Department of Psychiatry and Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom
- SueYeon Chung
- Center for Neural Science, New York University, New York, New York 10003
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, New York 10010
- Tina Eliassi-Rad
- Network Science Institute, Northeastern University, Boston, Massachusetts 02115
- Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts 02115
- Santa Fe Institute, Santa Fe, New Mexico 87501
- István A Kovács
- Department of Physics and Astronomy, Northwestern University, Evanston, Illinois 60208
- Northwestern Institute on Complex Systems, Northwestern University, Evanston, Illinois 60208
- Hernán Makse
- Levich Institute and Physics Department, City College of New York, New York, New York 10031
- Thomas E Nichols
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, Nuffield Department of Population Health, University of Oxford, Oxford, OX3 7LF, United Kingdom
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
- Olaf Sporns
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana 47405
- Kim Stachenfeld
- DeepMind, London, EC4A 3TW, United Kingdom
- Columbia University, New York, New York 10027
- Zoltán Toroczkai
- Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556
- Emma K Towlson
- Department of Computer Science, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Anthony M Zador
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, Washington 98109
- Albert-László Barabási
- Network Science Institute, Northeastern University, Boston, Massachusetts 02115
- Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115
- Department of Network and Data Science, Central European University, Budapest, H-1051, Hungary
- Amy Bernard
- The Kavli Foundation, Los Angeles, California 90230
- György Buzsáki
- Center for Neural Science, New York University, New York, New York 10003
- Neuroscience Institute and Department of Neurology, NYU Grossman School of Medicine, New York University, New York, New York 10016
6. Aimone JB, Parekh O.
Abstract
The current gap between computing algorithms and neuromorphic hardware to emulate brains is an outstanding bottleneck in developing neural computing technologies. Aimone and Parekh discuss the possibility of bridging this gap using theoretical computing frameworks from a neuroscience perspective.
Affiliation(s)
- James B Aimone
- Neural Exploration and Research Laboratory, Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA.
- Ojas Parekh
- Neural Exploration and Research Laboratory, Center for Computing Research, Sandia National Laboratories, Albuquerque, NM, USA.
7. Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023;13:12939. [PMID: 37558704; PMCID: PMC10412617; DOI: 10.1038/s41598-023-39108-3]
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions; in other words, the network spends more time in states that encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
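The computational principle at the heart of the model, sampling via the inverse of the cumulative distribution function, can be illustrated without any spiking machinery. Below is a minimal numerical sketch assuming a one-dimensional stimulus history; the function names are ours, not the paper's.

```python
import numpy as np

def inverse_cdf_sampler(observed, n_grid=512):
    """Tabulate the empirical inverse CDF of `observed` and return a sampler."""
    xs = np.sort(np.asarray(observed, dtype=float))
    ps = (np.arange(xs.size) + 0.5) / xs.size     # empirical CDF levels
    grid = np.linspace(0.0, 1.0, n_grid)
    inv_cdf = np.interp(grid, ps, xs)             # tabulated F^{-1}
    def sample(size, rng=None):
        rng = rng or np.random.default_rng()
        u = rng.uniform(size=size)                # uniform noise in [0, 1)
        return np.interp(u, grid, inv_cdf)        # push noise through F^{-1}
    return sample

# sampler = inverse_cdf_sampler(stimulus_history); draws = sampler(1000)
# A histogram of `draws` matches the distribution of the stimulus history.
```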
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
- Department of Bioengineering, Imperial College London, London, UK.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
8. Barabási DL, Bianconi G, Bullmore E, Burgess M, Chung S, Eliassi-Rad T, George D, Kovács IA, Makse H, Papadimitriou C, Nichols TE, Sporns O, Stachenfeld K, Toroczkai Z, Towlson EK, Zador AM, Zeng H, Barabási AL, Bernard A, Buzsáki G. Neuroscience needs Network Science. arXiv 2023:arXiv:2305.06160v2. [PMID: 37214134; PMCID: PMC10197734]
Abstract
The brain is a complex system comprising a myriad of interacting elements, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such intricate systems, offering a framework for integrating multiscale data and complexity. Here, we discuss the application of network science in the study of the brain, addressing topics such as network models and metrics, the connectome, and the role of dynamics in neural networks. We explore the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease, and discuss the potential for collaboration between network science and neuroscience communities. We underscore the importance of fostering interdisciplinary opportunities through funding initiatives, workshops, and conferences, as well as supporting students and postdoctoral fellows with interests in both disciplines. By uniting the network science and neuroscience communities, we can develop novel network-based methods tailored to neural circuits, paving the way towards a deeper understanding of the brain and its functions.
Affiliation(s)
- Dániel L Barabási
- Biophysics Program, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Ginestra Bianconi
- School of Mathematical Sciences, Queen Mary University of London, London, E1 4NS, UK
- The Alan Turing Institute, The British Library, London, NW1 2DB, UK
- Ed Bullmore
- Department of Psychiatry and Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- SueYeon Chung
- Center for Neural Science, New York University, New York, NY, USA
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- Tina Eliassi-Rad
- Network Science Institute, Northeastern University, Boston, MA, USA
- Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
- Santa Fe Institute, Santa Fe, NM, USA
- István A. Kovács
- Department of Physics and Astronomy, Northwestern University, 633 Clark Street, Evanston, IL 60208, USA
- Northwestern Institute on Complex Systems, Chambers Hall, 600 Foster St, Northwestern University, Evanston, IL 60208
- Hernán Makse
- Levich Institute and Physics Department, City College of New York, New York, NY 10031, USA
- Thomas E. Nichols
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, Nuffield Department of Population Health, University of Oxford, Oxford, OX3 7LF, UK
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, OX3 9DU, UK
- Olaf Sporns
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405
- Zoltán Toroczkai
- Department of Physics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556, USA
- Emma K. Towlson
- Department of Computer Science, Department of Physics and Astronomy, Hotchkiss Brain Institute, Children's Research Hospital, University of Calgary, Calgary, Alberta, Canada
- Anthony M Zador
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Albert-László Barabási
- Network Science Institute, Northeastern University, Boston, MA, USA
- Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Department of Network and Data Science, Central European University, Budapest, H-1051, Hungary
- György Buzsáki
- Neuroscience Institute and Department of Neurology, NYU Grossman School of Medicine, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
9. Herreras O, Torres D, Makarov VA, Makarova J. Theoretical considerations and supporting evidence for the primary role of source geometry on field potential amplitude and spatial extent. Front Cell Neurosci 2023;17:1129097. [PMID: 37066073; PMCID: PMC10097999; DOI: 10.3389/fncel.2023.1129097]
Abstract
Field potential (FP) recording is an accessible means to capture the shifts in the activity of neuron populations. However, the spatial and composite nature of these signals has largely been ignored, at least until it became technically possible to separate activities from co-activated sources in different structures or those that overlap in a volume. The pathway-specificity of mesoscopic sources has provided an anatomical reference that facilitates transcending from theoretical analysis to the exploration of real brain structures. We review computational and experimental findings that indicate how prioritizing the spatial geometry and density of sources, as opposed to the distance to the recording site, better defines the amplitudes and spatial reach of FPs. The role of geometry is enhanced by considering that zones of the active populations that act as sources or sinks of current may arrange differently with respect to each other, and have different geometry and densities. Thus, observations that seem counterintuitive in the scheme of distance-based logic alone can now be explained. For example, geometric factors explain why some structures produce FPs and others do not, why different FP motifs generated in the same structure extend far while others remain local, why factors like the size of an active population or the strong synchronicity of its neurons may fail to affect FPs, or why the rate of FP decay varies in different directions. These considerations are exemplified in large structures like the cortex and hippocampus, in which the role of geometrical elements and regional activation in shaping well-known FP oscillations generally go unnoticed. Discovering the geometry of the sources in play will decrease the risk of population or pathway misassignments based solely on the FP amplitude or temporal pattern.
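The distance-versus-geometry argument rests on the standard volume-conductor relation for a homogeneous medium, phi(r) = (1/(4*pi*sigma)) * sum_n I_n / |r - r_n|. Below is a minimal sketch of how source geometry, not distance alone, sets FP amplitude; the two source configurations and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def field_potential(r, src_pos, src_amp, sigma=0.3):
    """phi(r) = (1/(4*pi*sigma)) * sum_n I_n / |r - r_n| (sigma in S/m)."""
    d = np.linalg.norm(src_pos - r, axis=1)
    return (src_amp / d).sum() / (4.0 * np.pi * sigma)

rng = np.random.default_rng(1)
n_src = 1000
amps = np.ones(n_src) / n_src                     # identical total current
ball = rng.normal(0.0, 0.05, (n_src, 3))          # compact cluster of sources (mm)
sheet = np.column_stack([rng.uniform(-2, 2, (n_src, 2)), np.zeros(n_src)])  # open planar layer

electrode = np.array([0.0, 0.0, 1.0])             # 1 mm above both source sets
print(field_potential(electrode, ball, amps),     # same total current, different
      field_potential(electrode, sheet, amps))    # amplitude purely from geometry
```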
Affiliation(s)
- Oscar Herreras
- Laboratory of Experimental and Computational Neurophysiology, Department of Translational Neuroscience, Cajal Institute, Spanish National Research Council, Madrid, Spain
- Daniel Torres
- Laboratory of Experimental and Computational Neurophysiology, Department of Translational Neuroscience, Cajal Institute, Spanish National Research Council, Madrid, Spain
- Valeriy A. Makarov
- Institute for Interdisciplinary Mathematics, School of Mathematics, Universidad Complutense de Madrid, Madrid, Spain
- Julia Makarova
- Laboratory of Experimental and Computational Neurophysiology, Department of Translational Neuroscience, Cajal Institute, Spanish National Research Council, Madrid, Spain
10. Distributed algorithms from arboreal ants for the shortest path problem. Proc Natl Acad Sci U S A 2023;120:e2207959120. [PMID: 36716366; PMCID: PMC9963535; DOI: 10.1073/pnas.2207959120]
Abstract
Colonies of the arboreal turtle ant create networks of trails that link nests and food sources on the graph formed by branches and vines in the canopy of the tropical forest. Ants put down a volatile pheromone on the edges as they traverse them. At each vertex, the next edge to traverse is chosen using a decision rule based on the current pheromone level. There is a bidirectional flow of ants around the network. In a previous field study, it was observed that the trail networks approximately minimize the number of vertices, thus solving a variant of the popular shortest path problem without any central control and with minimal computational resources. We propose a biologically plausible model, based on a variant of the reinforced random walk on a graph, which explains this observation and suggests surprising algorithms for the shortest path problem and its variants. Through simulations and analysis, we show that when the rate of flow of ants does not change, the dynamics converges to the path with the minimum number of vertices, as observed in the field. The dynamics converges to the shortest path when the rate of flow increases with time, so the colony can solve the shortest path problem merely by increasing the flow rate. We also show that to guarantee convergence to the shortest path, bidirectional flow and a decision rule dividing the flow in proportion to the pheromone level are necessary, but convergence to approximately short paths is possible with other decision rules.
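A minimal simulation sketch of the model class described here: a pheromone-reinforced random walk with a decision rule dividing flow in proportion to pheromone level and a volatile (decaying) pheromone. The graph, parameters, update order, and the crude stand-in for the colony's circulating flow are our assumptions, not the authors' exact dynamics.

```python
import random
from collections import defaultdict

def simulate(graph, nest, food, n_ant_steps=20000, decay=0.999, deposit=1.0):
    """Pheromone-reinforced random walk on an undirected graph (adjacency lists)."""
    pher = defaultdict(lambda: 1.0)               # pheromone level per undirected edge
    pos = nest
    for _ in range(n_ant_steps):
        nbrs = graph[pos]
        weights = [pher[frozenset((pos, v))] for v in nbrs]
        nxt = random.choices(nbrs, weights=weights)[0]   # proportional decision rule
        pher[frozenset((pos, nxt))] += deposit    # lay pheromone while traversing
        for e in pher:
            pher[e] *= decay                      # volatile pheromone decays
        pos = nest if nxt == food else nxt        # reaching food, head back out
    return pher

graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
trails = simulate(graph, nest=0, food=5)
# Edges along the reinforced route end up carrying far more pheromone than the rest.
```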
11. Zhao Z, Wang Y, Zou Q, Xu T, Tao F, Zhang J, Wang X, Shi CJR, Luo J, Xie Y. The spike gating flow: A hierarchical structure-based spiking neural network for online gesture recognition. Front Neurosci 2022;16:923587. [PMID: 36408382; PMCID: PMC9667043; DOI: 10.3389/fnins.2022.923587]
Abstract
Action recognition is an exciting research avenue for artificial intelligence, since it may be a game changer in emerging industrial fields such as robot vision and automobiles. However, current deep learning (DL) faces major challenges in such applications because of its huge computational cost and inefficient learning. Hence, we developed a novel brain-inspired spiking neural network (SNN) based system, titled spiking gating flow (SGF), for online action learning. The developed system consists of multiple SGF units assembled in a hierarchical manner. A single SGF unit contains three layers: a feature extraction layer, an event-driven layer, and a histogram-based training layer. To demonstrate the capability of the developed system, we employed a standard dynamic vision sensor (DVS) gesture classification task as a benchmark. The results indicate that we can achieve 87.5% accuracy, comparable with DL, but at a smaller training/inference data ratio of 1.5:1. Only a single training epoch is required during the learning process. Meanwhile, to the best of our knowledge, this is the highest accuracy among non-backpropagation-based SNNs. Finally, we summarize the few-shot learning (FSL) paradigm of the developed network: (1) a hierarchical structure-based network design that incorporates prior human knowledge; (2) SNNs for content-based global dynamic feature detection.
Affiliation(s)
- Zihao Zhao
- School of Microelectronics, Fudan University, Shanghai, China
- Alibaba DAMO Academy, Shanghai, China
- Yanhong Wang
- School of Microelectronics, Fudan University, Shanghai, China
- Alibaba DAMO Academy, Shanghai, China
- Qiaosha Zou
- School of Microelectronics, Fudan University, Shanghai, China
- Tie Xu
- Alibaba Group, Hangzhou, China
- Xiaoan Wang
- BrainUp Research Laboratory, Shanghai, China
- C.-J. Richard Shi
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Junwen Luo
- Alibaba DAMO Academy, Shanghai, China
- BrainUp Research Laboratory, Shanghai, China
- Yuan Xie
- Alibaba DAMO Academy, Shanghai, China
12. Rabuffo G, Sorrentino P, Bernard C, Jirsa V. Spontaneous neuronal avalanches as a correlate of access consciousness. Front Psychol 2022;13:1008407. [PMID: 36337573; PMCID: PMC9634647; DOI: 10.3389/fpsyg.2022.1008407]
Abstract
Decades of research have advanced our understanding of the biophysical mechanisms underlying consciousness. However, an overarching framework bridging models of consciousness and the large-scale organization of spontaneous brain activity is still missing. Based on the observation that spontaneous brain activity dynamically switches between epochs of segregation and large-scale integration of information, we hypothesize a brain-state dependence of conscious access, whereby the presence of either segregated or integrated states marks distinct modes of information processing. We first review influential works on the neuronal correlates of consciousness, spontaneous resting-state brain activity, and dynamical systems theory. Then, we propose a test experiment to validate our hypothesis that conscious access occurs in aperiodic cycles, alternating between windows where new incoming information is collected but not experienced and punctuated, short-lived integration events, where conscious access to previously collected content occurs. In particular, we suggest that the integration events correspond to neuronal avalanches, which are collective bursts of neuronal activity ubiquitously observed in electrophysiological recordings. If confirmed, the proposed framework would link the physics of spontaneous cortical dynamics to the concept of ignition within the global neuronal workspace theory, whereby conscious access manifests itself as a burst of neuronal activity.
Affiliation(s)
- Giovanni Rabuffo
- Institut de Neurosciences des Systemes, Aix-Marseille University, Marseille, France
13. Kleyko D, Davies M, Frady EP, Kanerva P, Kent SJ, Olshausen BA, Osipov E, Rabaey JM, Rachkovskij DA, Rahimi A, Sommer FT. Vector Symbolic Architectures as a Computing Framework for Emerging Hardware. Proc IEEE 2022;110:1538-1571. [PMID: 37868615; PMCID: PMC10588678]
Abstract
This article reviews recent progress in the development of the computing framework of Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets them apart from conventional computing and opens the door to efficient solutions of the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
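The algebra is easy to demonstrate concretely. Here is a minimal sketch with bipolar vectors: binding by elementwise multiplication (self-inverse), bundling by summation with a sign cleanup, and unbinding by rebinding with the same key. This is textbook VSA usage kept deliberately simple, not any specific system from the article.

```python
import numpy as np

d = 10_000
rng = np.random.default_rng(0)
def rand_vec():
    return rng.choice(np.array([-1, 1]), size=d)

country, capital = rand_vec(), rand_vec()         # role vectors
usa, dc = rand_vec(), rand_vec()                  # filler vectors

# Bind role*filler pairs, bundle them, and clean up with the sign function:
record = np.sign(country * usa + capital * dc)

# Unbind: multiplying by `capital` recovers a noisy copy of `dc`,
# which nearest-neighbor search over stored vectors would clean up:
query = record * capital
print(query @ dc / d, query @ usa / d)            # ~0.5 vs ~0.0 similarity
```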
Affiliation(s)
- Denis Kleyko
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, CA 94720, USA, and the Intelligent Systems Lab, Research Institutes of Sweden, 16440 Kista, Sweden
- Mike Davies
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- E Paxon Frady
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, CA 94720, USA
- Spencer J Kent
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, CA 94720, USA
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, University of California at Berkeley, CA 94720, USA
- Evgeny Osipov
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Jan M Rabaey
- Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, CA 94720, USA
- Dmitri A Rachkovskij
- International Research and Training Center for Information Technologies and Systems, 03680 Kyiv, Ukraine, and the Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Abbas Rahimi
- IBM Research - Zurich, 8803 Rüschlikon, Switzerland
- Friedrich T Sommer
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA, and the Redwood Center for Theoretical Neuroscience, University of California at Berkeley, CA 94720, USA
14. Volzhenin K, Changeux JP, Dumas G. Multilevel development of cognitive abilities in an artificial neural network. Proc Natl Acad Sci U S A 2022;119:e2201304119. [PMID: 36122214; PMCID: PMC9522351; DOI: 10.1073/pnas.2201304119]
Abstract
Several neuronal mechanisms have been proposed to account for the formation of cognitive abilities through postnatal interactions with the physical and sociocultural environment. Here, we introduce a three-level computational model of information processing and the acquisition of cognitive abilities. We propose minimal architectural requirements for building these levels and examine how the model's parameters affect their performance and relationships. The first, sensorimotor level handles local nonconscious processing, here during a visual classification task. The second, cognitive level globally integrates the information from multiple local processors via long-ranged connections and synthesizes it in a global, but still nonconscious, manner. The third and cognitively highest level handles the information globally and consciously. It is based on the global neuronal workspace (GNW) theory and is referred to as the conscious level. We use the trace and delay conditioning tasks to challenge the second and third levels, respectively. Results first highlight the necessity of epigenesis, through the selection and stabilization of synapses at both local and global scales, for the network to solve the first two tasks. At the global scale, dopamine appears necessary to provide proper credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input. Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at both local and global scales, a balanced excitatory/inhibitory ratio increases performance. We discuss the plausibility of the model in both neurodevelopmental and artificial intelligence terms.
Affiliation(s)
- Konstantin Volzhenin
- Neuroscience Department, Institut Pasteur, 75015 Paris, France
- Laboratory of Computational and Quantitative Biology, Sorbonne Université, 75005 Paris, France
- Guillaume Dumas
- Neuroscience Department, Institut Pasteur, 75015 Paris, France
- Mila - Quebec Artificial Intelligence Institute, Centre Hospitalier Universitaire Sainte-Justine Research Center, Department of Psychiatry, Université de Montréal, Montréal, QC H3T 1C5, Canada
15. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723; DOI: 10.1113/jp282750]
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
16. Fayyaz Z, Altamimi A, Zoellner C, Klein N, Wolf OT, Cheng S, Wiskott L. A Model of Semantic Completion in Generative Episodic Memory. Neural Comput 2022;34:1841-1870. [PMID: 35896150; DOI: 10.1162/neco_a_01520]
Abstract
Many studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this article, we present a model of the generative aspects of episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex reasonably fills in the missing parts based on general semantic information, in a process we call semantic completion. The model combines two neural network architectures known from machine learning, the vector-quantized variational autoencoder (VQ-VAE) and the pixel convolutional neural network (PixelCNN). As episodes, we use images of digits (MNIST) and fashion items (Fashion MNIST) augmented by different backgrounds representing context. The model is able to complete missing parts of a memory trace in a semantically plausible way, up to the point where it can generate plausible images from scratch, and it generalizes well to images it was not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and robustness to noise. Finally, we also model an episodic memory experiment and can reproduce that semantically congruent contexts are always recalled better than incongruent ones, that high attention levels improve memory accuracy in both cases, and that contexts that are not remembered correctly are more often remembered semantically congruently than completely wrong. This model contributes to a deeper understanding of the interplay between episodic memory and semantic information in the generative process of recalling the past.
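The discrete "memory trace" in such a model comes from the vector-quantization step of the VQ-VAE: each encoder output is replaced by the index of its nearest codebook entry. A minimal sketch of that step follows; the array shapes and names are ours, not the paper's code.

```python
import numpy as np

def vector_quantize(z, codebook):
    """z: (n, d) encoder outputs; codebook: (K, d) learned entries.
    Returns, for each row of z, the index of its nearest codebook entry
    and the quantized vector that replaces it downstream."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, K) distances
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

# Storing `idx` (small integers) instead of `z` is the compression; a learned
# prior over index grids (the PixelCNN) later fills in missing entries,
# which is the semantic-completion step at recall.
```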
Affiliation(s)
- Zahra Fayyaz
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, 44801 Bochum, Germany
- Aya Altamimi
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, 44801 Bochum, Germany
- Carina Zoellner
- Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801 Bochum, Germany
- Nicole Klein
- Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801 Bochum, Germany
- Oliver T Wolf
- Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801 Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, 44801 Bochum, Germany
- Laurenz Wiskott
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, 44801 Bochum, Germany
17. Kudithipudi D, Aguilar-Simon M, Babb J, Bazhenov M, Blackiston D, Bongard J, Brna AP, Chakravarthi Raja S, Cheney N, Clune J, Daram A, Fusi S, Helfer P, Kay L, Ketz N, Kira Z, Kolouri S, Krichmar JL, Kriegman S, Levin M, Madireddy S, Manicka S, Marjaninejad A, McNaughton B, Miikkulainen R, Navratilova Z, Pandit T, Parker A, Pilly PK, Risi S, Sejnowski TJ, Soltoggio A, Soures N, Tolias AS, Urbina-Meléndez D, Valero-Cuevas FJ, van de Ven GM, Vogelstein JT, Wang F, Weiss R, Yanguas-Gil A, Zou X, Siegelmann H. Biological underpinnings for lifelong learning machines. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00452-0]
18. Folschweiller S, Sauer JF. Phase-specific pooling of sparse assembly activity by respiration-related brain oscillations. J Physiol 2022;600:1991-2011. [PMID: 35218015; DOI: 10.1113/jp282631]
Abstract
Key points: Neuronal assemblies activate phase-coupled to ongoing respiration-related oscillations (RROs) in the medial prefrontal cortex of mice. The phase-coupling strength of assemblies exceeds that of individual neurons. Assemblies preferentially activate during the descending phase of RROs. Despite the higher assembly frequency during descending RROs, the overlap between active assemblies remains constant across RRO phases. Putative GABAergic interneurons are preferentially recruited by assembly neurons during descending RROs, suggesting that interneurons might contribute to the segregation of active assemblies during the descending phase of RROs.
Abstract: Nasal breathing affects cognitive functions, but it has remained largely unclear how respiration-driven inputs shape information processing in neuronal circuits. Current theories emphasize the role of neuronal assemblies, coalitions of transiently active pyramidal cells, as the core unit of cortical network computations. Here, we show that the phase of respiration-related oscillations (RROs) influences the likelihood of activation of a subset of neuronal assemblies in the medial prefrontal cortex (mPFC) of awake mice. RROs bias the activation of neuronal assemblies more efficiently than that of individual neurons by entraining the coactivity of assembly neurons. Moreover, the activation of assemblies is moderately biased towards the descending phase of RROs. Despite the enriched activation of assemblies during descending RROs, the overlap between individual assemblies remains constant across RRO phases. Putative GABAergic interneurons coactivate with assemblies and receive enhanced excitatory drive from assembly neurons during descending RROs, suggesting that the phase-specific recruitment of putative interneurons might help to keep the activation of different assemblies separated from each other during times of preferred assembly activation. Our results thus identify respiration-synchronized brain rhythms as drivers of neuronal assemblies and point to a role of RROs in defining time windows of enhanced yet segregated assembly activity.
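Phase-coupling strength in analyses of this kind is typically quantified with the mean resultant vector length of event phases. A small illustrative sketch, not the authors' pipeline:

```python
import numpy as np

def phase_coupling(phases):
    """Mean resultant vector of event phases (radians).
    Returns (coupling strength R in [0, 1], preferred phase in radians)."""
    z = np.exp(1j * np.asarray(phases)).mean()
    return np.abs(z), np.angle(z)

rng = np.random.default_rng(0)
clustered = rng.vonmises(mu=np.pi / 2, kappa=4.0, size=500)   # phase-locked events
uniform = rng.uniform(-np.pi, np.pi, size=500)                # unlocked events
print(phase_coupling(clustered), phase_coupling(uniform))     # high R vs R near 0
```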
Affiliation(s)
- Shani Folschweiller
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, D-79104 Freiburg, Germany; Faculty of Biology, Albert-Ludwigs-University Freiburg, Schaenzlestrasse 1, D-79104 Freiburg, Germany
- Jonas-Frederic Sauer
- Institute for Physiology I, Medical Faculty, Albert-Ludwigs-University Freiburg, Hermann-Herder-Strasse 7, Freiburg, D-79104, Germany
19. Akhlaghpour H. An RNA-Based Theory of Natural Universal Computation. J Theor Biol 2021;537:110984. [PMID: 34979104; DOI: 10.1016/j.jtbi.2021.110984]
Abstract
Life is confronted with computation problems in a variety of domains including animal behavior, single-cell behavior, and embryonic development. Yet we currently do not know of a naturally existing biological system that is capable of universal computation, i.e., Turing-equivalent in scope. Generic finite-dimensional dynamical systems (which encompass most models of neural networks, intracellular signaling cascades, and gene regulatory networks) fall short of universal computation, but are assumed to be capable of explaining cognition and development. I present a class of models that bridge two concepts from distant fields: combinatory logic (or, equivalently, lambda calculus) and RNA molecular biology. A set of basic RNA editing rules can make it possible to compute any computable function with identical algorithmic complexity to that of Turing machines. The models do not assume extraordinarily complex molecular machinery or any processes that radically differ from what we already know to occur in cells. Distinct independent enzymes can mediate each of the rules, and RNA molecules solve the problem of parenthesis matching through their secondary structure. In the most plausible of these models, all of the editing rules can be implemented with merely cleavage and ligation operations at fixed positions relative to predefined motifs. This demonstrates that universal computation is well within the reach of molecular biology. It is therefore reasonable to assume that life has evolved, or possibly began with, a universal computer that yet remains to be discovered. The variety of seemingly unrelated computational problems across many scales can potentially be solved using the same RNA-based computation system. Experimental validation of this theory may immensely impact our understanding of memory, cognition, development, disease, evolution, and the early stages of life.
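The combinatory-logic substrate the theory builds on can be shown in a few lines. Below is a toy normal-order reducer for the S and K combinators, with application as nested pairs; this illustrates the rewriting idea only, while the paper's contribution is mapping such rules onto RNA editing operations.

```python
S, K = "S", "K"

def step(t):
    """Perform one leftmost (normal-order) reduction step; return (term, changed)."""
    if isinstance(t, tuple):
        f, x = t
        if isinstance(f, tuple) and f[0] == K:            # (K a) b  ->  a
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == S:
            a, b, c = f[0][1], f[1], x                    # ((S a) b) c -> (a c)(b c)
            return ((a, c), (b, c)), True
        nf, changed = step(f)                             # otherwise recurse left,
        if changed:
            return (nf, x), True
        nx, changed = step(x)                             # then right
        return ((f, nx) if changed else t), changed
    return t, False

def normalize(t, limit=1000):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

# The identity combinator I = S K K:  ((S K) K) y  reduces to  y
assert normalize((((S, K), K), "y")) == "y"
```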
Affiliation(s)
- Hessameddin Akhlaghpour
- Laboratory of Integrative Brain Function, The Rockefeller University, New York, NY, 10065, USA
20. Brakel LAW. Can Neuroscientists Test a New Physicalist Mind/Body View: DiCoToP (Diachronic Conjunctive Token Physicalism)? Front Hum Neurosci 2021;15:786133. [PMID: 34975437; PMCID: PMC8720042; DOI: 10.3389/fnhum.2021.786133]
Abstract
Given that disparate mind/body views have interfered with interdisciplinary research in psychoanalysis and neuroscience, the mind/body problem itself is explored here. Adding a philosophy-of-mind framework, problems for both dualists and physicalists are presented, along with essential concepts including independent mental causation, emergence, and multiple realization. To address some of these issues in a new light, this article advances an original mind/body account: Diachronic Conjunctive Token Physicalism (DiCoToP). Next, puzzles DiCoToP reveals, psychoanalytic problems it solves, and some empirical evidence accrued for views consistent with DiCoToP are presented. In closing, this piece appeals for neuroscience research to gain evidence for (or against) the DiCoToP view.
Affiliation(s)
- Linda A. W. Brakel
- Department of Philosophy, University of Michigan, Ann Arbor, MI, United States
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Michigan Psychoanalytic Institute, Farmington Hills, MI, United States
21. Wong EC. Distributed Phase Oscillatory Excitation Efficiently Produces Attractors Using Spike-Timing-Dependent Plasticity. Neural Comput 2021;34:415-436. [PMID: 34915556; DOI: 10.1162/neco_a_01466]
Abstract
The brain is thought to represent information in the form of activity in distributed groups of neurons known as attractors. We show here that in a randomly connected network of simulated spiking neurons, periodic stimulation of neurons with distributed phase offsets, along with standard spike-timing-dependent plasticity (STDP), efficiently creates distributed attractors. These attractors may have a consistent ordered firing pattern or become irregular, depending on the conditions. We also show that when two such attractors are stimulated in sequence, the same STDP mechanism can create a directed association between them, forming the basis of an associative network. We find that for an STDP time constant of 20 ms, the dependence of the efficiency of attractor creation on the driving frequency has a broad peak centered around 8 Hz. Upon restimulation, the attractors self-oscillate, but with an oscillation frequency that is higher than the driving frequency, ranging from 10 to 100 Hz.
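For reference, here is a sketch of the standard pair-based STDP window assumed in models of this kind, using the 20 ms time constant mentioned in the abstract; the amplitudes are illustrative, not the paper's values.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.020):
    """Weight change for spike-time difference dt = t_post - t_pre (seconds):
    potentiation when pre precedes post, depression when post precedes pre."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Periodic drive with distributed phase offsets yields consistent pre->post lags,
# so weights along the phase gradient grow: stdp_dw(0.005) > 0 > stdp_dw(-0.005)
```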
Affiliation(s)
- Eric C Wong
- Departments of Radiology and Psychiatry, University of California, San Diego, La Jolla, CA 92093, U.S.A.
22. Papadimitriou CH, Friederici AD. Bridging the Gap Between Neurons and Cognition Through Assemblies of Neurons. Neural Comput 2021;34:291-306. [PMID: 34915560; DOI: 10.1162/neco_a_01463]
Abstract
During recent decades, our understanding of the brain has advanced dramatically at both the cellular and molecular levels and at the cognitive neurofunctional level; however, a huge gap remains between the microlevel of physiology and the macrolevel of cognition. We propose that computational models based on assemblies of neurons can serve as a blueprint for bridging these two scales. We discuss recently developed computational models of assemblies that have been demonstrated to mediate higher cognitive functions such as the processing of simple sentences, to be realistically realizable by neural activity, and to possess general computational power.
Affiliation(s)
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, D-04303 Leipzig, Germany
23. Sadeh S, Clopath C. Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks. Sci Adv 2021;7:eabg8411. [PMID: 34731002; PMCID: PMC8565910; DOI: 10.1126/sciadv.abg8411]
Abstract
Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of these assemblies feasible, yet how various parameters of induction can be optimized is not clear. Here, we studied this question in large-scale cortical network models with excitatory-inhibitory balance. We found that the background network in which assemblies are embedded can strongly modulate their dynamics and formation. Networks with dominant excitatory interactions enabled a fast formation of assemblies, but this was accompanied by recruitment of other non-perturbed neurons, leading to some degree of nonspecific induction. On the other hand, networks with strong excitatory-inhibitory interactions ensured that the formation of assemblies remained constrained to the perturbed neurons, but slowed down the induction. Our results suggest that these two regimes can be suitable for computational and cognitive tasks with different trade-offs between speed and specificity.
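The trade-off the authors report can be reproduced in miniature. The sketch below is not their large-scale model; it is a small rate network with a single inhibition-strength knob g, saturating dynamics, and bounded Hebbian plasticity on excitatory synapses, with all constants assumed for illustration.
```python
import numpy as np

rng = np.random.default_rng(2)

N_E, N_I = 400, 100                  # excitatory / inhibitory populations
g = 4.0                              # inhibition strength: the E-I balance knob
W = 0.02 * (rng.random((N_E + N_I, N_E + N_I)) < 0.1)
W[:, N_E:] *= -g                     # inhibitory columns act negatively

perturbed = np.arange(40)            # subpopulation targeted for induction
r = np.zeros(N_E + N_I)              # firing rates
dt, tau, eta, w_max = 1.0, 10.0, 0.01, 0.05

for t in range(5000):
    stim = np.zeros(N_E + N_I)
    if (t // 500) % 2 == 0:          # repeated stimulation epochs
        stim[perturbed] = 1.0
    drive = W @ r + stim
    r += dt / tau * (-r + np.maximum(np.tanh(drive), 0.0))  # saturating rates
    # Hebbian plasticity on E-to-E synapses with a soft bound at w_max
    W[:N_E, :N_E] += eta * dt * np.outer(r[:N_E], r[:N_E]) * (w_max - W[:N_E, :N_E])

# Specificity check: weights within the perturbed group vs. spillover onto
# non-perturbed excitatory neurons; lowering g speeds the growth of both.
within = W[np.ix_(perturbed, perturbed)].mean()
spill  = W[np.ix_(np.arange(40, N_E), perturbed)].mean()
```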
Collapse
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London SW7 2AZ, UK
| | | |
Collapse
|
24
|
Folschweiller S, Sauer JF. Respiration-Driven Brain Oscillations in Emotional Cognition. Front Neural Circuits 2021; 15:761812. [PMID: 34790100 PMCID: PMC8592085 DOI: 10.3389/fncir.2021.761812] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 10/05/2021] [Indexed: 12/21/2022] Open
Abstract
Respiration paces brain oscillations and the firing of individual neurons, revealing a profound impact of rhythmic breathing on brain activity. Intriguingly, respiration-driven entrainment of neural activity occurs in a variety of cortical areas, including those involved in higher cognitive functions, such as associative neocortical regions and the hippocampus. Here we review recent findings of respiration-entrained brain activity, with a particular focus on emotional cognition. We summarize studies from different brain areas involved in emotional behavior, such as fear, despair, and motivation, and compile findings of respiration-driven activity across species. Furthermore, we discuss the proposed cellular and network mechanisms by which cortical circuits are entrained by respiration. The emerging synthesis from a large body of literature suggests that the impact of respiration on brain function is widespread across the brain and highly relevant for distinct cognitive functions. These intricate links between respiration and cognitive processes call for mechanistic studies of the role of rhythmic breathing as a timing signal for brain activity.
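Respiration-driven entrainment of the kind reviewed here is commonly quantified with a phase-locking measure. The sketch below, on synthetic data with assumed parameters, shows one standard computation: extract the instantaneous respiration phase with a Hilbert transform and measure how strongly spikes cluster at a preferred phase.
```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)

fs, dur = 1000, 60.0                     # sampling rate (Hz), duration (s)
t = np.arange(0.0, dur, 1.0 / fs)
resp = np.sin(2.0 * np.pi * 0.3 * t)     # idealized 0.3 Hz breathing trace

# Synthetic unit whose firing rate is weakly modulated by respiration
rate = 5.0 * (1.0 + 0.4 * resp)          # spikes/s
spikes = rng.random(t.size) < rate / fs  # Bernoulli spike train

phase = np.angle(hilbert(resp))          # instantaneous respiration phase
plv = np.abs(np.mean(np.exp(1j * phase[spikes])))  # 0 = no locking, 1 = perfect
```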
Collapse
Affiliation(s)
- Shani Folschweiller
- Institute for Physiology I, University of Freiburg, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
| | | |
Collapse
|
25
|
Pulvermüller F, Tomasello R, Henningsen-Schomers MR, Wennekers T. Biological constraints on neural network models of cognitive function. Nat Rev Neurosci 2021; 22:488-502. [PMID: 34183826 PMCID: PMC7612527 DOI: 10.1038/s41583-021-00473-5] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/17/2021] [Indexed: 02/06/2023]
Abstract
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.
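Of the model families surveyed, the auto-associative network is the easiest to make concrete. The following Hopfield-style sketch is our illustration, not the authors' implementation: one-shot Hebbian storage and attractor-based pattern completion, the basic operation that brain-constrained variants elaborate with realistic areal structure and connectivity.
```python
import numpy as np

rng = np.random.default_rng(4)

N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))   # random binary memories

# One-shot Hebbian storage (outer-product rule), no self-connections
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Recall: corrupt a stored pattern, then let the dynamics settle
state = patterns[0].copy()
state[rng.choice(N, N // 5, replace=False)] *= -1   # flip 20% of units
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

overlap = (state @ patterns[0]) / N   # ~1.0 indicates successful completion
```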
Collapse
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany.
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany.
- Einstein Center for Neurosciences Berlin, Berlin, Germany.
- Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany.
| | - Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany
- Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany
| | - Malte R Henningsen-Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany
- Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany
| | - Thomas Wennekers
- School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK
| |
Collapse
|
26
|
Smyrnakis I, Papadopouli M, Pallagina G, Smirnakis S. Information Capacity of a Stochastically Responding Neuron Assembly. Neurocomputing 2021; 436:22-34. [PMID: 34539080 DOI: 10.1016/j.neucom.2020.12.130] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
In this work, certain aspects of the structure of overlapping groups of neurons encoding specific signals are examined. Individual neurons are assumed to respond stochastically to an input signal. Identification of a particular signal is assumed to result from the aggregate activity of a group of neurons, which we call an information pathway. Conditions for definite response and for non-interference of pathways are derived. These conditions constrain the response properties of individual neurons and the allowed overlap among pathways. Under these constraints, and under the simplifying assumption that all pathways have similar structure, the information capacity of the system is derived. Furthermore, we show that there is a definite advantage in information capacity if pathway neurons are interspersed among the neuron assembly.
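The two derived conditions can be recast in elementary terms. If each of the m neurons in a pathway fires with probability q when its signal is present, and with baseline probability q0 otherwise, then a definite response requires the binomial tail above a detection threshold to be near 1, and non-interference requires the corresponding tail at baseline to be near 0. The sketch below uses our own notation (q, q0, m, theta), not the paper's.
```python
import numpy as np
from scipy.stats import binom

q, q0 = 0.6, 0.05   # per-neuron response probability: addressed vs. baseline
m = 100             # neurons in one information pathway
theta = 0.5         # fraction of the pathway that must fire for detection
k = int(np.ceil(theta * m))

p_definite = binom.sf(k - 1, m, q)    # P(at least k of m fire | signal) ~ 1
p_interfere = binom.sf(k - 1, m, q0)  # P(at least k of m fire | no signal) ~ 0
print(f"definite response: {p_definite:.4f}, interference: {p_interfere:.2e}")
```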
Collapse
Affiliation(s)
- I Smyrnakis
- Institute of Computer Science, Foundation for Research & Technology-Hellas
| | - M Papadopouli
- Institute of Computer Science, Foundation for Research & Technology-Hellas
- Department of Computer Science, University of Crete, Heraklion, Greece
| | - G Pallagina
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115
| | - S Smirnakis
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115
- Jamaica Plain VA Hospital, Harvard Medical School
| |
Collapse
|
27
|
Two sources of uncertainty independently modulate temporal expectancy. Proc Natl Acad Sci U S A 2021; 118:2019342118. [PMID: 33853943 DOI: 10.1073/pnas.2019342118] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and-if it does-the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation. It is commonly assumed that the discrete probability of whether an event will occur has a fixed effect on event expectancy over time. In contrast, we first demonstrate that this pattern is highly dynamic and monotonically increases across time. Intriguingly, this behavior is independent of the continuous probability of when an event will occur. The effect of this continuous probability on anticipation is commonly proposed to be driven by the hazard rate (HR) of events. We next show that the HR fails to account for behavior and propose a model of event expectancy based on the probability density function of events. Our results hold for both vision and audition, suggesting independence of the representation of the two uncertainties from sensory input modality. These findings enrich the understanding of fundamental anticipatory processes and have provocative implications for many aspects of behavior and its neural underpinnings.
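The contrast between the two candidate models is easy to make concrete: the hazard rate is h(t) = f(t) / (1 - F(t)), where f is the probability density of event times and F its cumulative distribution. A brief sketch with an assumed Gaussian event-time distribution:
```python
import numpy as np
from scipy.stats import norm

t = np.linspace(0.0, 3.0, 600)        # elapsed time within a trial (s)
f = norm.pdf(t, loc=1.5, scale=0.3)   # density of event times (assumed Gaussian)
F = norm.cdf(t, loc=1.5, scale=0.3)

hazard = f / (1.0 - F + 1e-12)        # h(t) = f(t) / (1 - F(t))
# h(t) keeps climbing as F(t) -> 1 (the event becomes ever more "due"), while
# f(t) peaks at the most likely event time and then falls: late in the trial
# the two accounts make opposite predictions about expectancy.
```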
Collapse
|