1
Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. PMID: 39756119. DOI: 10.1016/j.neunet.2024.107079.
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
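The gain-dependent transition into chaos that the review surveys can be illustrated in a few lines of NumPy. This is a minimal sketch of a generic random rate network (the classic g > 1 transition of Sompolinsky, Crisanti and Sommers), not code from the review; network size, gain values and the Euler step are arbitrary choices.

```python
import numpy as np

def simulate_rnn(g, n=200, steps=500, seed=0):
    """Euler-integrate the rate network x' = -x + g * J @ tanh(x)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0, 1 / np.sqrt(n), (n, n))  # random recurrent weights
    x = rng.normal(0, 0.1, n)                  # small initial activity
    for _ in range(steps):
        x = x + 0.1 * (-x + g * J @ np.tanh(x))
    return x

# Activity dies out for gain g < 1 and becomes self-sustained
# (chaotic) for g > 1 -- spontaneous activity with no external input.
quiet = simulate_rnn(g=0.5)
active = simulate_rnn(g=1.8)
```

Sweeping `g` across 1.0 and tracking the variance of `x` reproduces the order-to-chaos transition discussed in the abstract.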
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2
Mijalkov M, Storm L, Zufiria-Gerbolés B, Veréb D, Xu Z, Canal-Garcia A, Sun J, Chang YW, Zhao H, Gómez-Ruiz E, Passaretti M, Garcia-Ptacek S, Kivipelto M, Svenningsson P, Zetterberg H, Jacobs H, Lüdge K, Brunner D, Mehlig B, Volpe G, Pereira JB. Computational memory capacity predicts aging and cognitive decline. Nat Commun 2025; 16:2748. PMID: 40113762. PMCID: PMC11926346. DOI: 10.1038/s41467-025-57995-0.
Abstract
Memory is a crucial cognitive function that deteriorates with age. However, this ability is normally assessed using cognitive tests instead of the architecture of brain networks. Here, we use reservoir computing, a recurrent neural network computing paradigm, to assess the linear memory capacities of neural-network reservoirs extracted from brain anatomical connectivity data in a lifespan cohort of 636 individuals. The computational memory capacity emerges as a robust marker of aging, being associated with resting-state functional activity, white matter integrity, locus coeruleus signal intensity, and cognitive performance. We replicate our findings in an independent cohort of 154 young and 72 old individuals. By linking the computational memory capacity of the brain network with cognition, brain function and integrity, our findings open new pathways to employ reservoir computing to investigate aging and age-related disorders.
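The paper's central quantity, linear memory capacity, can be sketched for a generic echo-state reservoir: train a linear readout to reconstruct the input k steps in the past and sum the resulting R² values over delays. A random reservoir stands in here for the connectome-derived ones used in the study; the sizes and the spectral radius of 0.9 are illustrative assumptions.

```python
import numpy as np

def memory_capacity(W, w_in, max_delay=20, T=2000, washout=200, seed=1):
    """Linear memory capacity: sum over delays k of the R^2 with which a
    linear readout of the reservoir state reconstructs the input u(t-k)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    u = rng.uniform(-1, 1, T)
    x, X = np.zeros(n), np.zeros((T, n))
    for t in range(T):                      # drive the reservoir
        x = np.tanh(W @ x + w_in * u[t])
        X[t] = x
    mc = 0.0
    for k in range(1, max_delay + 1):       # one readout per delay
        Xk, yk = X[washout:], u[washout - k:T - k]
        w = np.linalg.lstsq(Xk, yk, rcond=None)[0]
        mc += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    return mc

rng = np.random.default_rng(0)
n = 100
W = rng.normal(0, 1, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, n)
mc = memory_capacity(W, w_in)
```

In the study, `W` would be built from an individual's anatomical connectivity matrix, so `mc` becomes a per-subject marker.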
Affiliation(s)
- Mite Mijalkov
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Ludvig Storm
- Department of Physics, Goteborg University, Goteborg, Sweden
- Blanca Zufiria-Gerbolés
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Dániel Veréb
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Zhilei Xu
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Anna Canal-Garcia
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Jiawei Sun
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Yu-Wei Chang
- Department of Physics, Goteborg University, Goteborg, Sweden
- Hang Zhao
- Department of Physics, Goteborg University, Goteborg, Sweden
- Massimiliano Passaretti
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Sara Garcia-Ptacek
- Department of Neurobiology, Care Sciences and Society, Division of Clinical Geriatrics, Karolinska Institutet, Stockholm, Sweden
- Theme Inflammation and Aging, Aging Brain Theme, Karolinska University Hospital, Solna, Sweden
- Miia Kivipelto
- Department of Neurobiology, Care Sciences and Society, Division of Clinical Geriatrics, Karolinska Institutet, Stockholm, Sweden
- University of Eastern Finland, Kuopio, Finland
- Per Svenningsson
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
- Henrik Zetterberg
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, the Sahlgrenska Academy at the University of Gothenburg, Mölndal, Sweden
- Clinical Neurochemistry Laboratory, Sahlgrenska University Hospital, Mölndal, Sweden
- Department of Neurodegenerative Disease, UCL Institute of Neurology, Queen Square, London, UK
- UK Dementia Research Institute at UCL, London, UK
- Hong Kong Center for Neurodegenerative Diseases, Clear Water Bay, Hong Kong, China
- Wisconsin Alzheimer's Disease Research Center, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, USA
- Heidi Jacobs
- Maastricht University, Maastricht, Netherlands
- Massachusetts General Hospital, Boston, MA, USA
- Kathy Lüdge
- Institute of Physics, Technische Universität Ilmenau, Weimarer Straße 25, Ilmenau, Germany
- Daniel Brunner
- Institut FEMTO-ST, Université Franche-Comté, CNRS, Besançon, France
- Bernhard Mehlig
- Department of Physics, Goteborg University, Goteborg, Sweden
- Giovanni Volpe
- Department of Physics, Goteborg University, Goteborg, Sweden
- Joana B Pereira
- Department of Clinical Neuroscience, Division of Neuro, Karolinska Institutet, Stockholm, Sweden
3
Lv G, Xu T, Li J, Zhu P, Chen F, Yang D, He G. Reduced connection strength leads to enhancement of working memory capacity in cognitive training. Neuroimage 2025; 308:121055. PMID: 39892528. DOI: 10.1016/j.neuroimage.2025.121055.
Abstract
It has been widely observed that cognitive training can enhance the working memory capacity (WMC) of participants, yet the underlying mechanisms remain unexplained. Previous research has confirmed that abacus-based mental calculation (AMC) training can enhance subjects' WMC and suggested a possible association with changes in functional connectivity. Using fMRI data, we construct the whole-brain resting-state connectivity of subjects who underwent long-term AMC training and of subjects from a control group. Their working memory capacity is then simulated from this connectivity using reservoir computing. We find that the AMC group has higher WMC than the control group; in particular, the WMC involving the frontoparietal network (FPN), visual network (VIS) and sensorimotor network (SMN) associated with the AMC training is even higher in the AMC group. However, the advantage of the AMC group disappears if the connection strengths between brain regions are neglected. We therefore evaluate how the connection-strength differences between the AMC and control groups affect WMC. The results show that when the connection strengths of the control group are weakened, its WMC is enhanced to match, or even exceed, that of the AMC group, and the advantage of the FPN, VIS and SMN is reproduced as well. In conclusion, our work reveals a correlation between reductions in functional connection strength and enhancements in the WMC of subjects undergoing cognitive training.
Affiliation(s)
- Guiyang Lv
- School of Physics, Zhejiang University, Hangzhou, 310027, China; Institute of Big Data and Artificial Intelligence in Medicine, School of Electronics and Information Engineering, Taizhou University, Taizhou, 318000, China
- Tianyong Xu
- School of Physics, Zhejiang University, Hangzhou, 310027, China
- Jinhang Li
- School of Physics, Zhejiang University, Hangzhou, 310027, China
- Ping Zhu
- School of Physics, Zhejiang University, Hangzhou, 310027, China
- Feiyan Chen
- School of Physics, Zhejiang University, Hangzhou, 310027, China
- Dongping Yang
- Research Center for Augmented Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou, 311100, China
- Guoguang He
- School of Physics, Zhejiang University, Hangzhou, 310027, China
4
Triebkorn P, Jirsa V, Dominey PF. Simulating the impact of white matter connectivity on processing time scales using brain network models. Commun Biol 2025; 8:197. PMID: 39920323. PMCID: PMC11806016. DOI: 10.1038/s42003-025-07587-x.
Abstract
The capacity of the brain to process input across temporal scales is exemplified in human narrative, which requires integration of information ranging from words, over sentences, to long paragraphs. This processing has been shown to be distributed in a hierarchy across multiple brain areas, with areas close to sensory cortex processing on faster time scales than areas in associative cortex. In this study we used reservoir computing with human-derived connectivity to investigate the effect of structural connectivity on time scales across brain regions during a narrative task paradigm. We systematically tested the effect of removing selected fibre bundles (IFO, ILF, MLF, SLF I/II/III, UF, AF) on processing time scales across brain regions. We show that long-distance pathways such as the IFO provide a form of shortcut whereby input-driven activation in visual cortex can directly impact distant frontal areas. To validate our model, we demonstrated a significant correlation between our predicted time-scale ordering and empirical results from the intact/scrambled narrative fMRI task paradigm. This study emphasizes the role of structural connectivity in the brain's temporal processing hierarchies, providing a framework for future research on structure and neural dynamics across cognitive tasks.
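A region's "processing time scale" is commonly operationalized as the decay of its activity autocorrelation. The sketch below illustrates that measure only, not the authors' pipeline: leaky integrator units with different leak rates stand in for fast sensory versus slow associative regions, and the measured timescales recover the fast-to-slow ordering.

```python
import numpy as np

def intrinsic_timescale(sig, max_lag=200):
    """Lag at which the autocorrelation first falls below 1/e -- a common
    proxy for a unit's (or region's) processing time scale."""
    sig = sig - sig.mean()
    var = sig @ sig
    for lag in range(1, max_lag):
        if (sig[:-lag] @ sig[lag:]) / var < 1 / np.e:
            return lag
    return max_lag

# Leaky units with different leak rates mimic a fast-to-slow hierarchy.
rng = np.random.default_rng(0)
T = 5000
noise = rng.normal(0, 1, T)

def leaky(alpha):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = (1 - alpha) * x[t - 1] + alpha * noise[t]
    return x

fast = intrinsic_timescale(leaky(0.5))    # strong leak -> short memory
slow = intrinsic_timescale(leaky(0.05))   # weak leak -> long memory
```

In the connectome-based model, the effective leak of each region emerges from its recurrent input, which is why removing long-range bundles can reorder the hierarchy.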
Affiliation(s)
- Paul Triebkorn
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, 13005, France
- Viktor Jirsa
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, 13005, France
- Peter Ford Dominey
- Inserm UMR1093-CAPS, Université Bourgogne Europe, UFR des Sciences du Sport, Campus Universitaire, BP 27877, 21000, Dijon, France
5
Pilzak A, Calderini M, Berberian N, Thivierge JP. Role of short-term plasticity and slow temporal dynamics in enhancing time series prediction with a brain-inspired recurrent neural network. Chaos 2025; 35:023153. PMID: 39977307. DOI: 10.1063/5.0233158.
Abstract
Typical reservoir networks are based on random connectivity patterns that differ from brain circuits in two important ways. First, traditional reservoir networks lack synaptic plasticity among recurrent units, whereas cortical networks exhibit plasticity across all neuronal types and cortical layers. Second, reservoir networks utilize random Gaussian connectivity, while cortical networks feature a heavy-tailed distribution of synaptic strengths. It is unclear what computational advantages these features offer for predicting complex time series. In this study, we integrated short-term plasticity (STP) and lognormal connectivity into a novel recurrent neural network (RNN) framework. The model exhibited rich patterns of population activity characterized by slow coordinated fluctuations. Using graph spectral decomposition, we show that weighted networks with lognormal connectivity and STP yield higher complexity than several graph types. When tested on various tasks involving the prediction of complex time series data, the RNN model outperformed a baseline model with random connectivity as well as several other network architectures. Overall, our results underscore the potential of incorporating brain-inspired features such as STP and heavy-tailed connectivity to enhance the robustness and performance of artificial neural networks in complex data prediction and signal processing tasks.
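Both brain-inspired ingredients can be sketched compactly: a lognormal weight distribution concentrates strength in a few synapses, and a Tsodyks-Markram-style update gives short-term plasticity. This is a generic illustration, not the paper's model; the parameter values (U, tau_f, tau_d, the lognormal moments) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(w, frac=0.01):
    """Fraction of total synaptic weight carried by the strongest 1%."""
    w = np.sort(w)[::-1]
    return w[: int(len(w) * frac)].sum() / w.sum()

# Heavy-tailed (lognormal) weights concentrate strength in few synapses,
# unlike (the magnitudes of) Gaussian weights.
w_lognormal = rng.lognormal(-1.0, 1.0, 90_000)
w_gaussian = np.abs(rng.normal(0.0, 1.0, 90_000))

def stp_step(u, R, spike, dt=1e-3, U=0.2, tau_f=0.6, tau_d=0.3):
    """One Euler step of Tsodyks-Markram short-term plasticity:
    facilitation u relaxes toward U, resources R recover toward 1;
    a spike releases efficacy u*R and depletes R accordingly."""
    u += dt * (U - u) / tau_f
    R += dt * (1.0 - R) / tau_d
    eff = 0.0
    if spike:
        u += U * (1.0 - u)
        eff = u * R
        R -= eff
    return u, R, eff

# Drive one synapse with a regular 20 Hz spike train for 1 s.
u, R = 0.2, 1.0
effs = []
for step in range(1000):
    u, R, eff = stp_step(u, R, spike=(step % 50 == 0))
    if eff:
        effs.append(eff)
```

The history-dependent efficacy (`effs` changes across the spike train) is what gives STP reservoirs their slow temporal dynamics.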
Affiliation(s)
- Artem Pilzak
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, Ontario K1N 6N5, Canada
- Matias Calderini
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, Ontario K1N 6N5, Canada
- Nareg Berberian
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, Ontario K1N 6N5, Canada
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, Ontario K1N 6N5, Canada
- Brain and Mind Research Institute, University of Ottawa, 451 Smyth Rd., Ottawa, Ontario K1H 8M5, Canada
6
Yadav M, Sinha S, Stender M. Evolution beats random chance: Performance-dependent network evolution for enhanced computational capacity. Phys Rev E 2025; 111:014320. PMID: 39972840. DOI: 10.1103/physreve.111.014320.
Abstract
The quest to understand structure-function relationships in networks across scientific disciplines has intensified. However, the optimal network architecture remains elusive, particularly for complex information processing. Therefore, we investigate how optimal and specific network structures form to efficiently solve distinct tasks using a framework of performance-dependent network evolution, leveraging reservoir computing principles. Our study demonstrates that task-specific minimal network structures obtained through this framework consistently outperform networks generated by alternative growth strategies and Erdős-Rényi random networks. Evolved networks exhibit unexpected sparsity and adhere to scaling laws in node-density space while showcasing a distinctive asymmetry in input and information readout node distribution. Consequently, we propose a heuristic for quantifying task complexity from performance-dependently evolved networks, offering valuable insights into the evolutionary dynamics of the network structure-function relationship. Our findings advance the fundamental understanding of process-specific network evolution and shed light on the design and optimization of complex information processing mechanisms, notably in machine learning.
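Performance-dependent evolution reduces to a simple loop: evaluate a reservoir on a fixed task, mutate a connection, and keep the mutant only if the task error drops. The sketch below uses greedy single-edge mutations on a delayed-input reconstruction task as a stand-in for the paper's framework; all sizes, the task, and the mutation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, washout = 50, 600, 100

def task_error(W, seed=1):
    """Normalized error of a linear readout reproducing u(t-2).
    The fixed seed keeps the task identical across evaluations."""
    r = np.random.default_rng(seed)
    u = r.uniform(-1, 1, T)
    w_in = r.uniform(-0.5, 0.5, n)
    x, X = np.zeros(n), np.zeros((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        X[t] = x
    Xw, y = X[washout:], u[washout - 2:T - 2]
    w = np.linalg.lstsq(Xw, y, rcond=None)[0]
    return np.mean((Xw @ w - y) ** 2) / np.var(y)

# Evolution loop: mutate one connection, keep only improvements.
W = rng.normal(0, 0.1, (n, n)) * (rng.random((n, n)) < 0.1)  # sparse start
err0 = err = task_error(W)
for _ in range(100):
    Wm = W.copy()
    i, j = rng.integers(0, n, 2)
    Wm[i, j] = rng.normal(0, 0.5) if rng.random() < 0.5 else 0.0
    e = task_error(Wm)
    if e < err:
        W, err = Wm, e
```

The paper's point is that networks grown this way stay surprisingly sparse and task-specific, unlike Erdős-Rényi baselines of the same size.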
Affiliation(s)
- Manish Yadav
- Technische Universität Berlin, Chair of Cyber-Physical Systems in Mechanical Engineering, Straße des 17. Juni, 10623 Berlin, Germany
- Sudeshna Sinha
- Indian Institute of Science Education and Research Mohali, Department of Physical Sciences, Sector 81, SAS Nagar, 140306 Punjab, India
- Merten Stender
- Technische Universität Berlin, Chair of Cyber-Physical Systems in Mechanical Engineering, Straße des 17. Juni, 10623 Berlin, Germany
7
Rathor SK, Ziegler M, Schumacher J. Asymmetrically connected reservoir networks learn better. Phys Rev E 2025; 111:015307. PMID: 39972846. DOI: 10.1103/physreve.111.015307.
Abstract
We show that connectivity within the high-dimensional recurrent layer of a reservoir network is crucial for its performance. To this end, we systematically investigate the impact of network connectivity on its performance, i.e., we examine the symmetry and structure of the reservoir in relation to its computational power. Reservoirs with random and asymmetric connections are found to perform better for an exemplary Mackey-Glass time series than all structured reservoirs, including biologically inspired connectivities, such as small-world topologies. This result is quantified by the information processing capacity of the different network topologies which becomes highest for asymmetric and randomly connected networks.
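The comparison can be reproduced in miniature by building an asymmetric random reservoir and its symmetrized counterpart at the same spectral radius, then scoring each on recall of a delayed input, a simple proxy for the information processing capacity used in the paper. A hedged sketch, not the authors' setup:

```python
import numpy as np

def delay_r2(W, delay=5, T=2000, washout=200, seed=1):
    """R^2 of a linear readout recalling u(t-delay) from the reservoir."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    u = rng.uniform(-1, 1, T)
    w_in = rng.uniform(-0.5, 0.5, n)
    x, X = np.zeros(n), np.zeros((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        X[t] = x
    Xw, y = X[washout:], u[washout - delay:T - delay]
    w = np.linalg.lstsq(Xw, y, rcond=None)[0]
    return np.corrcoef(Xw @ w, y)[0, 1] ** 2

def scale(W, rho=0.9):
    """Rescale to a common spectral radius for a fair comparison."""
    return W * rho / np.max(np.abs(np.linalg.eigvals(W)))

rng = np.random.default_rng(0)
A = rng.normal(0, 1, (100, 100))
W_asym = scale(A)                  # fully asymmetric random reservoir
W_sym = scale((A + A.T) / 2)       # symmetric counterpart

r2_asym, r2_sym = delay_r2(W_asym), delay_r2(W_sym)
```

The paper's result predicts that the asymmetric reservoir tends to score higher; a single seed and delay is only suggestive, so treat the comparison as qualitative.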
Affiliation(s)
- Shailendra K Rathor
- Technische Universität Ilmenau, Institute of Thermodynamics and Fluid Mechanics, P.O. Box 100565, D-98684 Ilmenau, Germany
- Martin Ziegler
- Kiel University, Energy Materials and Devices, Department of Materials Science, Faculty of Engineering, D-24143 Kiel, Germany
- Jörg Schumacher
- Technische Universität Ilmenau, Institute of Thermodynamics and Fluid Mechanics, P.O. Box 100565, D-98684 Ilmenau, Germany
- Tandon School of Engineering, New York University, New York City, New York 11201, USA
8
Zhang Y, Zhou K, Bao P, Liu J. A biologically inspired computational model of human ventral temporal cortex. Neural Netw 2024; 178:106437. PMID: 38936111. DOI: 10.1016/j.neunet.2024.106437.
Abstract
Our minds represent miscellaneous objects in the physical world metaphorically in an abstract and complex high-dimensional object space, which is implemented in a two-dimensional surface of the ventral temporal cortex (VTC) with topologically organized object selectivity. Here we investigated principles guiding the topographical organization of object selectivities in the VTC by constructing a hybrid Self-Organizing Map (SOM) model that harnesses a biologically inspired algorithm of wiring cost minimization and adheres to the constraints of the lateral wiring span of human VTC neurons. In a series of in silico experiments with functional brain neuroimaging and neurophysiological single-unit data from humans and non-human primates, the VTC-SOM predicted the topographical structure of fine-scale category-selective regions (face-, tool-, body-, and place-selective regions) and the boundary in large-scale abstract functional maps (animate vs. inanimate, real-word small-size vs. big-size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity and view-invariant representations). In addition, when the same principle was applied to V1 orientation preferences, a pinwheel-like topology emerged, suggesting the model's broad applicability. In summary, our study illustrates that the simple principle of wiring cost minimization, coupled with the appropriate biological constraint of lateral wiring span, is able to implement the high-dimensional object space in a two-dimensional cortical surface.
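The topographic-mapping machinery underneath the VTC-SOM is the classic self-organizing map. Below is a minimal generic SOM on toy 2-D "object space" clusters, not the hybrid VTC-SOM itself (which adds wiring-cost minimization and a lateral wiring-span constraint); the grid size, data, and annealing schedules are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "object space": three clusters standing in for category-selective inputs.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
data = np.vstack([c + rng.normal(0, 0.4, (100, 2)) for c in centers])

# An 8x8 self-organizing map: cortical grid positions + 2-D weight vectors.
side = 8
grid = np.array([[i, j] for i in range(side) for j in range(side)], float)
W = rng.uniform(-1, 5, (side * side, 2))

for epoch in range(30):
    sigma = 3.0 * 0.9 ** epoch        # shrinking neighbourhood
    lr = 0.5 * 0.95 ** epoch          # decaying learning rate
    for x in rng.permutation(data):
        bmu = np.argmin(((W - x) ** 2).sum(1))             # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                     # neighbourhood pull

# Quantization error: how well the 2-D sheet covers the input space.
qerr = np.mean([np.min(((W - x) ** 2).sum(1)) for x in data])
```

Because updates pull whole grid neighbourhoods, similar inputs end up on nearby units, the same topographic principle the paper scales up to category-selective cortical regions.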
Affiliation(s)
- Yiyuan Zhang
- Tsinghua Laboratory of Brain & Intelligence, Department of Psychology, Tsinghua University, Beijing, 100084, China
- Ke Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Pinglei Bao
- Department of Psychology, Peking University, Beijing, 100871, China
- Jia Liu
- Tsinghua Laboratory of Brain & Intelligence, Department of Psychology, Tsinghua University, Beijing, 100084, China
9
Bosl W, Enlow MB, Nelson C. A QR Code for the Brain: A dynamical systems framework for computing neurophysiological biomarkers. Research Square 2024 (preprint). PMID: 39372924. PMCID: PMC11451722. DOI: 10.21203/rs.3.rs-4927086/v1.
Abstract
Neural circuits are often considered the bridge connecting genetic causes and behavior. Whereas prenatal neural circuits are believed to be derived from a combination of genetic and intrinsic activity, postnatal circuits are largely influenced by exogenous activity and experience. A dynamical neuroelectric field maintained by neural activity is proposed as the fundamental information processing substrate of cognitive function. Time series measurements of the neuroelectric field can be collected by scalp sensors and used to mathematically quantify the essential dynamical features of the neuroelectric field by constructing a digital twin of the dynamical system phase space. The multiscale nonlinear values that result can be organized into tensor data structures, from which latent features can be extracted using tensor factorization. These latent features can be mapped to behavioral constructs to derive digital biomarkers. This computational framework provides a robust method for incorporating neurodynamical measures into neuropsychiatric biomarker discovery.
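The "digital twin of the dynamical system phase space" step is, at its core, Takens delay embedding, after which nonlinear features can be computed from the reconstructed trajectory. The sketch below embeds a toy signal and computes one such feature (a recurrence rate); the feature choice, epsilon, and embedding parameters are illustrative assumptions, not the authors' biomarker set.

```python
import numpy as np

def delay_embed(sig, dim=3, tau=5):
    """Takens delay embedding: reconstruct phase-space vectors
    [x(t), x(t+tau), x(t+2*tau)] from a scalar time series."""
    n = len(sig) - (dim - 1) * tau
    return np.column_stack([sig[i * tau:i * tau + n] for i in range(dim)])

def recurrence_rate(emb, eps):
    """Fraction of state pairs closer than eps -- one example of the
    nonlinear values that could populate a biomarker tensor."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).mean()

t = np.arange(500) * 0.05
clean = np.sin(t)                  # regular dynamics: states recur often
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=len(t))
rr_clean = recurrence_rate(delay_embed(clean), eps=0.3)
rr_noisy = recurrence_rate(delay_embed(noisy), eps=0.3)
```

Computing many such features per EEG channel and frequency band yields the multiscale tensor from which the framework extracts latent biomarkers.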
10
Li Z, Andreev A, Hramov A, Blyuss O, Zaikin A. Novel efficient reservoir computing methodologies for regular and irregular time series classification. Nonlinear Dyn 2024; 113:4045-4062. PMID: 39822383. PMCID: PMC11732944. DOI: 10.1007/s11071-024-10244-3.
Abstract
Time series are a data structure prevalent in a wide range of fields such as healthcare, finance and meteorology, and analyzing them holds the key to gaining insight into our day-to-day observations. Among the many forms of time series analysis, time series classification offers the opportunity to assign sequences to their respective categories for automated detection. To this end, two types of mainstream approaches, recurrent neural networks and distance-based methods, have been commonly employed for this problem. Despite their enormous success, methods like Long Short-Term Memory networks typically require high computational resources, largely as a consequence of backpropagation, which has driven the search for backpropagation-free alternatives. Reservoir computing is an instance of recurrent neural networks known for its efficiency in processing time series sequences. In this article, we therefore develop two reservoir computing based methods that can effectively deal with regular and irregular time series at minimal computational cost while achieving a desirable level of classification accuracy.
Affiliation(s)
- Zonglun Li
- Department of Mathematics, University College London, London, UK
- Department of Women's Cancer, Institute for Women's Health, University College London, London, UK
- Andrey Andreev
- Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Aleksandra Nevskogo Str., 14, Kaliningrad, Russia 236041
- Alexander Hramov
- Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Aleksandra Nevskogo Str., 14, Kaliningrad, Russia 236041
- Oleg Blyuss
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Department of Pediatrics and Pediatric Infectious Diseases, Institute of Child's Health, Sechenov First Moscow State Medical University, Sechenov University, Moscow, Russia 119991
- Alexey Zaikin
- Department of Mathematics, University College London, London, UK
- Department of Women's Cancer, Institute for Women's Health, University College London, London, UK
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Lobachevsky State University of Nizhniy Novgorod, Prospekt Gagarina 23, Nizhniy Novgorod, Russia 603022
11
Pan W, Zhao F, Han B, Dong Y, Zeng Y. Emergence of brain-inspired small-world spiking neural network through neuroevolution. iScience 2024; 27:108845. PMID: 38327781. PMCID: PMC10847652. DOI: 10.1016/j.isci.2024.108845.
Abstract
Studies suggest that the brain's high efficiency and low energy consumption may be closely related to its small-world topology and critical dynamics. However, existing efforts on the performance-oriented structural evolution of spiking neural networks (SNNs) are time-consuming and ignore the core structural properties of the brain. Here, we introduce a multi-objective Evolutionary Liquid State Machine (ELSM), which blends the small-world coefficient and criticality to evolve models and guide the emergence of brain-inspired, efficient structures. Experiments reveal ELSM's consistent and comparable performance, achieving 97.23% on NMNIST and outperforming LSM models on MNIST and Fashion-MNIST with 98.12% and 88.81% accuracies, respectively. Further analysis shows its versatility and spontaneous evolution of topologies such as hub nodes, short paths, long-tailed degree distributions, and numerous communities. This study evolves recurrent spiking neural networks into brain-inspired energy-efficient structures, showcasing versatility in multiple tasks and potential for adaptive general artificial intelligence.
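The small-world property that guides ELSM's evolution combines high clustering with short path lengths, the signature of a Watts-Strogatz graph. The sketch below builds such a graph from scratch and measures both quantities; it illustrates the structural target only, not the evolutionary algorithm or the spiking model, and all graph parameters are arbitrary.

```python
import numpy as np
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = True
    return A

def watts_strogatz(n=100, k=6, p=0.05, seed=0):
    """Rewire each lattice edge with probability p to a random target."""
    rng = np.random.default_rng(seed)
    A = ring_lattice(n, k)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, int(rng.integers(n))
                if new != i and not A[i, new]:
                    A[i, old] = A[old, i] = False
                    A[i, new] = A[new, i] = True
    return A

def clustering(A):
    """Mean local clustering coefficient."""
    vals = []
    for i in range(len(A)):
        nb = np.flatnonzero(A[i])
        if len(nb) < 2:
            continue
        links = A[np.ix_(nb, nb)].sum() / 2
        vals.append(2 * links / (len(nb) * (len(nb) - 1)))
    return float(np.mean(vals))

def avg_path(A):
    """Mean shortest-path length via BFS from every node."""
    n, tot, cnt = len(A), 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        tot += dist[dist > 0].sum()
        cnt += (dist > 0).sum()
    return tot / cnt

lattice = ring_lattice(100, 6)
sw = watts_strogatz()
C0, L0 = clustering(lattice), avg_path(lattice)   # ordered baseline
C1, L1 = clustering(sw), avg_path(sw)             # small-world graph
```

A few rewired shortcuts collapse the path length while clustering stays near the lattice value, exactly the hub-and-short-path structure the evolved SNNs converge to.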
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
12
Kawai Y, Park J, Asada M. Reservoir computing using self-sustained oscillations in a locally connected neural network. Sci Rep 2023; 13:15532. PMID: 37726352. PMCID: PMC10509144. DOI: 10.1038/s41598-023-42812-9.
Abstract
Understanding how the structural organization of neural networks influences their computational capabilities is of great interest to both the machine learning and neuroscience communities. In our previous work, we introduced a novel learning system, the reservoir of basal dynamics (reBASICS), which features a modular neural architecture (many small random neural networks) capable of reducing the chaoticity of neural activity and producing stable, self-sustained limit cycle activities. These limit cycles are integrated by a weighted linear summation, and arbitrary time series are learned by modulating the readout weights. Despite its excellent learning performance, interpreting a modular structure of isolated small networks as a brain network has posed a significant challenge. Here, we investigate empirically how local connectivity, a well-known characteristic of brain networks, contributes to reducing chaoticity and generating self-sustained limit cycles. Moreover, we report the learning performance of locally connected reBASICS on two tasks: a motor timing task and learning the Lorenz time series. Although its performance was inferior to that of modular reBASICS, locally connected reBASICS could learn time series tens of seconds long even though the time constant of the neural units was ten milliseconds. This work indicates that locality of connectivity in neural networks may contribute to the generation of stable self-sustained oscillations for learning arbitrary long-term time series, as well as to economizing on wiring cost.
Affiliation(s)
- Yuji Kawai
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan.
| | - Jihoon Park
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan
- Minoru Asada
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan
- International Professional University of Technology in Osaka, Kita-ku, Osaka, 530-0001, Japan
- Chubu University Academy of Emerging Sciences, Chubu University, Kasugai, Aichi, 487-8501, Japan
13
Frank SA. Precise Traits from Sloppy Components: Perception and the Origin of Phenotypic Response. Entropy (Basel) 2023; 25:1162. [PMID: 37628192 PMCID: PMC10453304 DOI: 10.3390/e25081162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/03/2023] [Revised: 06/12/2023] [Accepted: 08/01/2023] [Indexed: 08/27/2023]
Abstract
Organisms perceive their environment and respond. The origin of perception-response traits presents a puzzle. Perception provides no value without response. Response requires perception. Recent advances in machine learning may provide a solution. A randomly connected network creates a reservoir of perceptive information about the recent history of environmental states. In each time step, a relatively small number of inputs drives the dynamics of the relatively large network. Over time, the internal network states retain a memory of past inputs. To achieve a functional response to past states or to predict future states, a system must learn only how to match states of the reservoir to the target response. In the same way, a random biochemical or neural network of an organism can provide an initial perceptive basis. With a solution for one side of the two-step perception-response challenge, evolving an adaptive response may not be so difficult. Two broader themes emerge. First, organisms may often achieve precise traits from sloppy components. Second, evolutionary puzzles often follow the same outlines as the challenges of machine learning. In each case, the basic problem is how to learn, either by artificial computational methods or by natural selection.
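The reservoir mechanism this abstract invokes — a fixed random network whose internal state retains a fading memory of recent inputs, with only a readout learned — can be illustrated with a short sketch. This is a generic echo-state-style example under assumed parameters, not code from the paper; the delay-recall task and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random network driven by a scalar input stream. Its state is a
# "reservoir of perceptive information" about recent inputs; only a linear
# readout is trained, here to reconstruct the input from 3 steps earlier.
N, T, delay = 300, 200, 3
W = rng.normal(0, 0.9 / np.sqrt(N), (N, N))   # fixed recurrent weights
w_in = rng.normal(0, 0.5, N)                  # fixed input weights
u = rng.uniform(-1, 1, T)                     # random environmental inputs

x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])          # driven reservoir dynamics
    X[t] = x

# Match reservoir states to the target response: predict u[t - delay]
y = u[:-delay]                                # past inputs as targets
S = X[delay:]                                 # states aligned with targets
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
mse = np.mean((S @ w_out - y) ** 2)
```

The random network is never trained — it only needs to be rich enough that past inputs remain linearly decodable from its state, which is the "sloppy components, precise trait" point the abstract makes.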
Affiliation(s)
- Steven A Frank
- Department of Ecology and Evolutionary Biology, University of California, Irvine, CA 92697-2525, USA
14
Bosl WJ, Bosquet Enlow M, Lock EF, Nelson CA. A biomarker discovery framework for childhood anxiety. Front Psychiatry 2023; 14:1158569. [PMID: 37533889 PMCID: PMC10393248 DOI: 10.3389/fpsyt.2023.1158569] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/04/2023] [Accepted: 07/04/2023] [Indexed: 08/04/2023] Open Access
Abstract
Introduction: Anxiety is the most common manifestation of psychopathology in youth, negatively affecting academic, social, and adaptive functioning and increasing risk for mental health problems into adulthood. Anxiety disorders are diagnosed only after clinical symptoms emerge, potentially missing opportunities to intervene during critical early prodromal periods. In this study, we used a new empirical approach to extracting nonlinear features of the electroencephalogram (EEG), with the goal of discovering differences in brain electrodynamics that distinguish children with anxiety disorders from healthy children. Additionally, we examined whether this approach could distinguish children with externalizing disorders from healthy children and from children with anxiety.
Methods: We used a novel supervised tensor factorization method to extract latent factors from repeated multifrequency nonlinear EEG measures in a longitudinal sample of children assessed in infancy and at ages 3, 5, and 7 years. We first examined the validity of this method by showing that calendar age is highly correlated with latent EEG complexity factors (r = 0.77). We then computed latent factors for distinguishing children with anxiety disorders from healthy controls using a 5-fold cross-validation scheme, and similarly for distinguishing children with externalizing disorders from healthy controls.
Results: We found that latent factors derived from EEG recordings at age 7 years were required to distinguish children with an anxiety disorder from healthy controls; recordings from infancy, age 3, or age 5 alone were insufficient. However, combining two (5 and 7 years) or three (3, 5, and 7 years) recordings gave much better results than the 7-year recordings alone. Externalizing disorders could be detected using EEG data from ages 3 and 5 years, again with better results from two or three recordings than from any single snapshot. Further, sex assigned at birth was an important covariate that improved accuracy for both disorder groups, and birthweight as a covariate modestly improved accuracy for externalizing disorders. Infant EEG recordings did not contribute to the classification accuracy for either disorder group.
Conclusion: This study suggests that latent factors extracted from EEG recordings in childhood, when drawn from appropriate ages, are promising candidate biomarkers for anxiety and for externalizing disorders.
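The evaluation scheme described in the Methods — extracting per-subject latent factors and scoring group discrimination with 5-fold cross-validation — can be sketched generically as below. This is not the authors' supervised tensor factorization; the synthetic "latent factor" features, the nearest-centroid classifier, and all group parameters are assumptions made purely to illustrate the cross-validation loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic latent-factor features for two groups (e.g. cases vs controls);
# the group offset of 1.5 per factor is an arbitrary illustrative choice.
n_per_class, n_factors, k = 50, 4, 5
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_factors)),
               rng.normal(1.5, 1.0, (n_per_class, n_factors))])
y = np.array([0] * n_per_class + [1] * n_per_class)

# 5-fold cross-validation: shuffle subjects, split into k folds,
# train on k-1 folds and score on the held-out fold each time.
idx = rng.permutation(len(y))
folds = np.array_split(idx, k)

accs = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    c0 = X[train][y[train] == 0].mean(axis=0)   # class centroids from training folds
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    accs.append(np.mean(pred == y[test]))
mean_acc = float(np.mean(accs))
```

Averaging accuracy over the held-out folds, rather than scoring on the training data, is what lets a study of this kind report classification performance without overfitting to one particular split.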
Affiliation(s)
- William J. Bosl
- Center for AI & Medicine, University of San Francisco, San Francisco, CA, United States
- Computational Health Informatics Program, Boston Children’s Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Michelle Bosquet Enlow
- Department of Psychiatry and Behavioral Sciences, Boston Children’s Hospital, Boston, MA, United States
- Department of Psychiatry, Harvard Medical School, Boston, MA, United States
- Eric F. Lock
- Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN, United States
- Charles A. Nelson
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Boston Children’s Hospital, Boston, MA, United States
- Harvard Graduate School of Education, Cambridge, MA, United States