1
|
Gordleeva S, Tsybina YA, Krivonosov MI, Tyukin IY, Kazantsev VB, Zaikin A, Gorban AN. Situation-Based Neuromorphic Memory in Spiking Neuron-Astrocyte Network. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2025; 36:881-895. [PMID: 38048242 DOI: 10.1109/tnnls.2023.3335450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/06/2023]
Abstract
Mammalian brains operate in very special surroundings: to survive they have to react quickly and effectively to the pool of stimuli patterns previously recognized as danger. Many learning tasks often encountered by living organisms involve a specific set-up centered around a relatively small set of patterns presented in a particular environment. For example, at a party, people recognize friends immediately, without deep analysis, just by seeing a fragment of their clothes. This set-up with reduced "ontology" is referred to as a "situation." Situations are usually local in space and time. In this work, we propose that neuron-astrocyte networks provide a network topology that is effectively adapted to accommodate situation-based memory. In order to illustrate this, we numerically simulate and analyze a well-established model of a neuron-astrocyte network, which is subjected to stimuli conforming to the situation-driven environment. Three pools of stimuli patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by the network, and patterns already learned and remembered by astrocytes. Patterns from the external world are added to and removed from the associative pool. Then, we show that astrocytes are structurally necessary for an effective function in such a learning and testing set-up. To demonstrate this we present a novel neuromorphic computational model for short-term memory implemented by a two-net spiking neural-astrocytic network. Our results show that such a system tested on synthesized data with selective astrocyte-induced modulation of neuronal activity provides an enhancement of retrieval quality in comparison to standard spiking neural networks trained via Hebbian plasticity only. We argue that the proposed set-up may offer a new way to analyze, model, and understand neuromorphic artificial intelligence systems.
Collapse
|
2
|
Desbordes T, Lakretz Y, Chanoine V, Oquab M, Badier JM, Trébuchon A, Carron R, Bénar CG, Dehaene S, King JR. Dimensionality and Ramping: Signatures of Sentence Integration in the Dynamics of Brains and Deep Language Models. J Neurosci 2023; 43:5350-5364. [PMID: 37217308 PMCID: PMC10359032 DOI: 10.1523/jneurosci.1163-22.2023] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 02/07/2023] [Accepted: 02/19/2023] [Indexed: 05/24/2023] Open
Abstract
A sentence is more than the sum of its words: its meaning depends on how they combine with one another. The brain mechanisms underlying such semantic composition remain poorly understood. To shed light on the neural vector code underlying semantic composition, we introduce two hypotheses: (1) the intrinsic dimensionality of the space of neural representations should increase as a sentence unfolds, paralleling the growing complexity of its semantic representation; and (2) this progressive integration should be reflected in ramping and sentence-final signals. To test these predictions, we designed a dataset of closely matched normal and jabberwocky sentences (composed of meaningless pseudo words) and displayed them to deep language models and to 11 human participants (5 men and 6 women) monitored with simultaneous MEG and intracranial EEG. In both deep language models and electrophysiological data, we found that representational dimensionality was higher for meaningful sentences than jabberwocky. Furthermore, multivariate decoding of normal versus jabberwocky confirmed three dynamic patterns: (1) a phasic pattern following each word, peaking in temporal and parietal areas; (2) a ramping pattern, characteristic of bilateral inferior and middle frontal gyri; and (3) a sentence-final pattern in left superior frontal gyrus and right orbitofrontal cortex. These results provide a first glimpse into the neural geometry of semantic integration and constrain the search for a neural code of linguistic composition.SIGNIFICANCE STATEMENT Starting from general linguistic concepts, we make two sets of predictions in neural signals evoked by reading multiword sentences. First, the intrinsic dimensionality of the representation should grow with additional meaningful words. Second, the neural dynamics should exhibit signatures of encoding, maintaining, and resolving semantic composition. 
We successfully validated these hypotheses in deep neural language models, artificial neural networks trained on text and performing very well on many natural language processing tasks. Then, using a unique combination of MEG and intracranial electrodes, we recorded high-resolution brain data from human participants while they read a controlled set of sentences. Time-resolved dimensionality analysis showed increasing dimensionality with meaning, and multivariate decoding allowed us to isolate the three dynamical patterns we had hypothesized.
Collapse
Affiliation(s)
- Théo Desbordes
- Meta AI Research, Paris 75002, France; and Cognitive Neuroimaging Unit NeuroSpin center, 91191, Gif-sur-Yvette, France
| | - Yair Lakretz
- Cognitive Neuroimaging Unit NeuroSpin center, Gif-sur-Yvette, 91191, France
| | - Valérie Chanoine
- Institute of Language, Communication and the Brain, Aix-en-Provence, 13100, France; and Aix-Marseille Université, Centre National de la Recherche Scientifique, LPL, Aix-en-Provence, 13100, France
| | | | - Jean-Michel Badier
- Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100; and Inst Neurosci Syst, Marseille, 13005, France
| | - Agnès Trébuchon
- Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; and Inst Neurosci Syst, Marseille, 13005, France; and Assistance Publique Hopitaux de Marseille, Timone hospital, Epileptology and Cerebral Rythmology, Marseille, 13385, France
| | - Romain Carron
- Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; and Inst Neurosci Syst, Marseille, 13005, France; and Assistance Publique Hopitaux de Marseille, Timone hospital, Functional and Stereotactic Neurosurgery, Marseille, 13385, France
| | - Christian-G Bénar
- Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale, CNRS, LPL, Aix-en-Provence 13100, France; and Inst Neurosci Syst, Marseille, 13005, France
| | - Stanislas Dehaene
- Université Paris Saclay, Institut National de la Santé et de la Recherche Médicale, Commissariat à l'Energie Atomique, Cognitive Neuroimaging Unit, NeuroSpin center, Saclay, 91191, France; and Collège de France, PSL University, Paris, 75231, France
| | - Jean-Rémi King
- Meta AI Research, Paris 75002, France; and Cognitive Neuroimaging Unit NeuroSpin center, 91191, Gif-sur-Yvette, France
- LSP, École normale supérieure, PSL (Paris Sciences & Lettres) University, CNRS, 75005 Paris, France
| |
Collapse
|
3
|
Dunin-Barkowski W, Gorban A. Editorial: Toward and beyond human-level AI, volume II. Front Neurorobot 2023; 16:1120167. [PMID: 36687208 PMCID: PMC9853958 DOI: 10.3389/fnbot.2022.1120167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Accepted: 12/13/2022] [Indexed: 01/07/2023] Open
Affiliation(s)
- Witali Dunin-Barkowski
- Department of Neuroinformatics, Center for Optical Neural Technologies, Scientific Research Institute for System Analysis, Russian Academy of Sciences, Moscow, Russia
| | - Alexander Gorban
- Department of Mathematics, University of Leicester, Leicester, United Kingdom
- Scientific and Educational Mathematical Center “Mathematics of Future Technology,” Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| |
Collapse
|
4
|
Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. [PMID: 35782090 PMCID: PMC9243340 DOI: 10.3389/fncom.2022.859874] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 05/10/2022] [Indexed: 11/29/2022] Open
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy, where receptive fields become increasingly more complex and coding sparse. Nowadays, ANNs outperform humans in controlled pattern recognition tasks yet remain far behind in cognition. In part, it happens due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., the information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, not reflex-based, brain actions. They also enable a significant reduction in energy consumption. However, the training of SNNs is a challenging problem, strongly limiting their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack on a chip 2D or 3D arrays of plastic synaptic contacts directly processing analog information. Thus, memristive devices are a good candidate for implementing in-memory and in-sensor computing. Then, memristive SNNs can diverge from the development of ANNs and build their niche, cognitive, or reflective computations.
Collapse
Affiliation(s)
- Valeri A. Makarov
- Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Sergey A. Lobov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| | - Sergey Shchanikov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Department of Information Technologies, Vladimir State University, Vladimir, Russia
| | - Alexey Mikhaylov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Viktor B. Kazantsev
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
| |
Collapse
|
5
|
A Biomorphic Model of Cortical Column for Content-Based Image Retrieval. ENTROPY 2021; 23:e23111458. [PMID: 34828156 PMCID: PMC8620877 DOI: 10.3390/e23111458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/22/2021] [Accepted: 10/28/2021] [Indexed: 11/18/2022]
Abstract
How do living systems process information? The search for an answer to this question is ongoing. We have developed an intelligent video analytics system. The process of the formation of detectors for content-based image retrieval aimed at detecting objects of various types simulates the operation of the structural and functional modules for image processing in living systems. The process of detector construction is, in fact, a model of the formation (or activation) of connections in the cortical column (structural and functional unit of information processing in the human and animal brain). The process of content-based image retrieval, that is, the detection of various types of images in the developed system, reproduces the process of “triggering” a model biomorphic column, i.e., a detector in which connections are formed during the learning process. The recognition process is a reaction of the receptive field of the column to the activation by a given signal. Since the learning process of the detector can be visualized, it is possible to see how a column (a detector of specific stimuli) is formed: a face, a digit, a number, etc. The created artificial cognitive system is a biomorphic model of the recognition column of living systems.
Collapse
|
6
|
Fava A, Raychaudhuri S, Rao DA. The Power of Systems Biology: Insights on Lupus Nephritis from the Accelerating Medicines Partnership. Rheum Dis Clin North Am 2021; 47:335-350. [PMID: 34215367 DOI: 10.1016/j.rdc.2021.04.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
The Accelerating Medicines Partnership (AMP) SLE Network united resources from academic centers, government, nonprofit, and industry to accelerate discovery in lupus nephritis (LN). The AMP SLE Network developed a set of protocols for high-throughput analyses to systematically study kidney tissue, urine, and blood in LN. This article summarizes approaches and results from phase 1 of AMP SLE Network effort, including single cell RNA-seq analysis of LN kidney biopsies, cellular and proteomic studies of LN urine, and mass cytometry immunophenotyping of blood cells. This work provides a framework to guide studies of the clinical implications of active cellular/molecular pathways in LN.
Collapse
Affiliation(s)
- Andrea Fava
- Division of Rheumatology, Johns Hopkins University, 1830 East Monument Street, Suite 7500, Baltimore, MD 21205, USA.
| | - Soumya Raychaudhuri
- Division of Rheumatology, Inflammation, Immunity, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Center for Data Sciences, Brigham and Women's Hospital, Building for Transformative Medicine, 60 Fenwood Road, Boston, MA 02115, USA; Program in Medical and Population Genetics, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Centre for Genetics and Genomics Versus Arthritis, Centre for Musculoskeletal Research, Manchester Academic Health Science Centre, The University of Manchester, Oxford Road, Manchester, UK. https://twitter.com/soumya_boston
| | - Deepak A Rao
- Division of Rheumatology, Inflammation, Immunity, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
7
|
Grechuk B, Gorban AN, Tyukin IY. General stochastic separation theorems with optimal bounds. Neural Netw 2021; 138:33-56. [PMID: 33621897 DOI: 10.1016/j.neunet.2021.01.034] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2020] [Revised: 01/08/2021] [Accepted: 01/29/2021] [Indexed: 11/17/2022]
Abstract
Phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and analyze AI instabilities. In high-dimensional datasets under broad assumptions each point can be separated from the rest of the set by simple and robust Fisher's discriminant (is Fisher separable). Errors or clusters of errors can be separated from the rest of the data. The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same stochastic separability that holds the keys to understanding the fundamentals of robustness and adaptivity in high-dimensional data-driven AI. To manage errors and analyze vulnerabilities, the stochastic separation theorems should evaluate the probability that the dataset will be Fisher separable in given dimensionality and for a given class of distributions. Explicit and optimal estimates of these separation probabilities are required, and this problem is solved in the present work. The general stochastic separation theorems with optimal probability estimates are obtained for important classes of distributions: log-concave distribution, their convex combinations and product distributions. The standard i.i.d. assumption was significantly relaxed. These theorems and estimates can be used both for correction of high-dimensional data driven AI systems and for analysis of their vulnerabilities. The third area of application is the emergence of memories in ensembles of neurons, the phenomena of grandmother's cells and sparse coding in the brain, and explanation of unexpected effectiveness of small neural ensembles in high-dimensional brain.
Collapse
Affiliation(s)
- Bogdan Grechuk
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK.
| | - Alexander N Gorban
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia.
| | - Ivan Y Tyukin
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia; Norwegian University of Science and Technology, Trondheim, Norway.
| |
Collapse
|
8
|
Calvo Tapia C, Tyukin I, Makarov VA. Universal principles justify the existence of concept cells. Sci Rep 2020; 10:7889. [PMID: 32398873 PMCID: PMC7217959 DOI: 10.1038/s41598-020-64466-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 04/16/2020] [Indexed: 11/08/2022] Open
Abstract
The widespread consensus argues that the emergence of abstract concepts in the human brain, such as a "table", requires complex, perfectly orchestrated interaction of myriads of neurons. However, this is not what converging experimental evidence suggests. Single neurons, the so-called concept cells (CCs), may be responsible for complex tasks performed by humans. This finding, with deep implications for neuroscience and theory of neural networks, has no solid theoretical grounds so far. Our recent advances in stochastic separability of highdimensional data have provided the basis to validate the existence of CCs. Here, starting from a few first principles, we layout biophysical foundations showing that CCs are not only possible but highly likely in brain structures such as the hippocampus. Three fundamental conditions, fulfilled by the human brain, ensure high cognitive functionality of single cells: a hierarchical feedforward organization of large laminar neuronal strata, a suprathreshold number of synaptic entries to principal neurons in the strata, and a magnitude of synaptic plasticity adequate for each neuronal stratum. We illustrate the approach on a simple example of acquiring "musical memory" and show how the concept of musical notes can emerge.
Collapse
Affiliation(s)
- Carlos Calvo Tapia
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid, 28040, Spain
| | - Ivan Tyukin
- University of Leicester, Department of Mathematics, University Road, LE1 7RH, United Kingdom
| | - Valeri A Makarov
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid, 28040, Spain.
- Lobachevsky University of Nizhny Novgorod, Gagarin Ave. 23, Nizhny, Novgorod, 603950, Russia.
| |
Collapse
|
9
|
Lobov SA, Mikhaylov AN, Shamshin M, Makarov VA, Kazantsev VB. Spatial Properties of STDP in a Self-Learning Spiking Neural Network Enable Controlling a Mobile Robot. Front Neurosci 2020; 14:88. [PMID: 32174804 PMCID: PMC7054464 DOI: 10.3389/fnins.2020.00088] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 01/22/2020] [Indexed: 11/13/2022] Open
Abstract
Development of spiking neural networks (SNNs) controlling mobile robots is one of the modern challenges in computational neuroscience and artificial intelligence. Such networks, being replicas of biological ones, are expected to have a higher computational potential than traditional artificial neural networks (ANNs). The critical problem is in the design of robust learning algorithms aimed at building a “living computer” based on SNNs. Here, we propose a simple SNN equipped with a Hebbian rule in the form of spike-timing-dependent plasticity (STDP). The SNN implements associative learning by exploiting the spatial properties of STDP. We show that a LEGO robot controlled by the SNN can exhibit classical and operant conditioning. Competition of spike-conducting pathways in the SNN plays a fundamental role in establishing associations of neural connections. It replaces the irrelevant associations by new ones in response to a change in stimuli. Thus, the robot gets the ability to relearn when the environment changes. The proposed SNN and the stimulation protocol can be further enhanced and tested in developing neuronal cultures, and also admit the use of memristive devices for hardware implementation.
Collapse
Affiliation(s)
- Sergey A Lobov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Alexey N Mikhaylov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Maxim Shamshin
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
| | - Valeri A Makarov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Madrid, Spain
| | - Victor B Kazantsev
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| |
Collapse
|
10
|
Calvo Tapia C, Villacorta-Atienza JA, Díez-Hermano S, Khoruzhko M, Lobov S, Potapov I, Sánchez-Jiménez A, Makarov VA. Semantic Knowledge Representation for Strategic Interactions in Dynamic Situations. Front Neurorobot 2020; 14:4. [PMID: 32116635 PMCID: PMC7031254 DOI: 10.3389/fnbot.2020.00004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Accepted: 01/14/2020] [Indexed: 11/21/2022] Open
Abstract
Evolved living beings can anticipate the consequences of their actions in complex multilevel dynamic situations. This ability relies on abstracting the meaning of an action. The underlying brain mechanisms of such semantic processing of information are poorly understood. Here we show how our novel concept, known as time compaction, provides a natural way of representing semantic knowledge of actions in time-changing situations. As a testbed, we model a fencing scenario with a subject deciding between attack and defense strategies. The semantic content of each action in terms of lethality, versatility, and imminence is then structured as a spatial (static) map representing a particular fencing (dynamic) situation. The model allows deploying a variety of cognitive strategies in a fast and reliable way. We validate the approach in virtual reality and by using a real humanoid robot.
Collapse
Affiliation(s)
- Carlos Calvo Tapia
- Facultad de CC. Matemáticas, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
| | | | - Sergio Díez-Hermano
- Biomathematics Unit, Faculty of Biology, Complutense University of Madrid, Madrid, Spain
| | | | - Sergey Lobov
- N. I. Lobachevsky State University, Nizhny Novgorod, Russia
| | - Ivan Potapov
- N. I. Lobachevsky State University, Nizhny Novgorod, Russia
| | - Abel Sánchez-Jiménez
- Biomathematics Unit, Faculty of Biology, Complutense University of Madrid, Madrid, Spain
| | - Valeri A. Makarov
- Facultad de CC. Matemáticas, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- N. I. Lobachevsky State University, Nizhny Novgorod, Russia
| |
Collapse
|
11
|
Lobov SA, Chernyshov AV, Krilova NP, Shamshin MO, Kazantsev VB. Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier. SENSORS 2020; 20:s20020500. [PMID: 31963143 PMCID: PMC7014236 DOI: 10.3390/s20020500] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Revised: 01/10/2020] [Accepted: 01/14/2020] [Indexed: 12/24/2022]
Abstract
One of the modern trends in the design of human–machine interfaces (HMI) is to involve the so called spiking neuron networks (SNNs) in signal processing. The SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of SNNs can simultaneously encode the input signal based both on the spiking frequency rate and on varying the latency in generating spikes. In the case of such mixed temporal-rate coding, the SNN should implement learning working properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNN in the context of temporal and rate coding problems. We show that the use of Hebbian learning through pair-based and triplet-based spike timing-dependent plasticity (STDP) rule is accomplishable for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure a neural selectivity in the rate coding. This kind of competition can be implemented by the so-called forgetting function that is dependent on neuron activity. We show that coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for the rate coding. Next, we propose a SNN capable of classifying electromyographical (EMG) patterns using an unsupervised learning procedure. The neuron competition achieved via lateral inhibition ensures the “winner takes all” principle among classifier neurons. The SNN also provides gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. 
In a problem of discrimination of three EMG patterns, the SNN with supervised learning shows median accuracy 99.5% that is close to the result demonstrated by multi-layer perceptron learned by back propagation of an error algorithm.
Collapse
|
12
|
Gorban AN, Makarov VA, Tyukin IY. High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality. ENTROPY 2020; 22:e22010082. [PMID: 33285855 PMCID: PMC7516518 DOI: 10.3390/e22010082] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 01/02/2020] [Accepted: 01/06/2020] [Indexed: 11/16/2022]
Abstract
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the “curse of dimensionality” states: many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the “blessing of dimensionality”, has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high dimensional spaces. Here we present a brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.
Collapse
Affiliation(s)
- Alexander N. Gorban
- Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK;
- Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603022 Nizhny Novgorod, Russia;
- Correspondence:
| | - Valery A. Makarov
- Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603022 Nizhny Novgorod, Russia;
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Avda Complutense s/n, 28040 Madrid, Spain
| | - Ivan Y. Tyukin
- Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK;
- Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603022 Nizhny Novgorod, Russia;
| |
Collapse
|
13
|
Morozov A. Modelling Biological Evolution: Developing Novel Approaches. Bull Math Biol 2019; 81:4620-4624. [PMID: 31617043 DOI: 10.1007/s11538-019-00670-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Affiliation(s)
- Andrew Morozov
- Department of Mathematics, University of Leicester, University Road, Leicester, LE1 7RH, UK.
| |
Collapse
|
14
|
Gorban AN, Makarov VA, Tyukin IY. Symphony of high-dimensional brain: Reply to comments on "The unreasonable effectiveness of small neural ensembles in high-dimensional brain". Phys Life Rev 2019; 29:115-119. [PMID: 31272910 DOI: 10.1016/j.plrev.2019.06.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2019] [Accepted: 06/21/2019] [Indexed: 11/19/2022]
Affiliation(s)
- Alexander N Gorban
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia.
- Valeri A Makarov
- Lobachevsky University, Nizhni Novgorod, Russia; Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, 28040 Madrid, Spain.
- Ivan Y Tyukin
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia; Saint-Petersburg State Electrotechnical University, Saint-Petersburg, Russia.
15
Gorban AN, Makarov VA, Tyukin IY. The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys Life Rev 2018; 29:55-88. [PMID: 30366739 DOI: 10.1016/j.plrev.2018.09.005] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 09/20/2018] [Indexed: 10/28/2022]
Abstract
Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite the apparently obvious and widely shared consensus on brain complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality long seemed to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality has gradually become more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems, when complete re-training is impossible or too expensive. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics, which in the course of the 20th century were developed into the concentration of measure theory. The Gibbs equivalence of ensembles, with further generalizations, shows that data in high-dimensional spaces concentrate near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can the high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? To meet this challenge, we outline and set up a framework based on the statistical physics of data.
Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems and the emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple; they should not damage the existing skills of the system; and they should allow fast non-iterative learning and correction of new mistakes without destroying previous fixes. All these demands can be satisfied by new tools based on concentration of measure phenomena and stochastic separation theory. We show how a simple functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content of high-dimensional data, (ii) simultaneous separation of several uncorrelated informational items from a large set of stimuli, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organisation of complex memories in ensembles of single neurons.
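The one-shot error correctors described above rest on stochastic separation: with overwhelming probability, any single point of a generic high-dimensional sample can be cut off from the rest by one simple linear functional. A minimal numerical sketch of that setting follows (an illustration of the phenomenon, not the authors' code):

```python
import numpy as np

# Stochastic separation sketch: one point of an i.i.d. sample in the
# high-dimensional unit ball is linearly separable from the whole cloud.
rng = np.random.default_rng(0)
d, n = 200, 10_000
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto the unit sphere
X *= rng.random((n, 1)) ** (1.0 / d)            # radii for uniformity in the ball

x, cloud = X[0], X[1:]                          # x plays the role of one "error"
proj = cloud @ x                                # linear functional y -> <y, x>
theta = 0.5 * (x @ x)                           # threshold halfway to <x, x>
frac_separated = float(np.mean(proj < theta))
print(f"fraction of the cloud below the threshold: {frac_separated:.4f}")
```

Because projections of random points onto x concentrate near zero with spread of order 1/sqrt(d), essentially the entire cloud falls below the threshold, so the single functional acts as a one-shot corrector for x.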
Affiliation(s)
- Alexander N Gorban
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia.
- Valeri A Makarov
- Lobachevsky University, Nizhni Novgorod, Russia; Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Avda Complutense s/n, 28040 Madrid, Spain.
- Ivan Y Tyukin
- Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK; Lobachevsky University, Nizhni Novgorod, Russia; Saint-Petersburg State Electrotechnical University, Saint-Petersburg, Russia.
16
Calvo Tapia C, Tyukin IY, Makarov VA. Fast social-like learning of complex behaviors based on motor motifs. Phys Rev E 2018; 97:052308. [PMID: 29906958 DOI: 10.1103/physreve.97.052308] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2018] [Indexed: 01/01/2023]
Abstract
Social learning is widely observed in many species: less experienced agents copy successful behaviors exhibited by more experienced individuals. Nevertheless, the dynamical mechanisms behind this process remain largely unknown. Here we assume that a complex behavior can be decomposed into a sequence of n motor motifs. A neural network capable of activating motor motifs in a given sequence can then drive an agent. To account for the (n-1)! possible sequences of motifs in a neural network, we employ the winnerless competition approach. We then consider a teacher-learner situation: one agent exhibits a complex movement, while another aims at mimicking the teacher's behavior. Despite the huge variety of possible motif sequences, we show that the learner, equipped with the proposed learning model, can rewire its synaptic couplings "on the fly" in no more than (n-1) learning cycles and converge exponentially to the durations of the teacher's motifs. We validate the learning model on mobile robots. Experimental results show that the learner is indeed capable of copying the teacher's behavior, composed of six motor motifs, in a few learning cycles. The reported mechanism of learning is general and can be used for replicating different functions, including, for example, sound patterns or speech.
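The winnerless competition mechanism invoked here is commonly modelled with asymmetric Lotka-Volterra inhibition, in which units activate one after another along a heteroclinic sequence. A minimal sketch follows; the competition matrix and noise level are hypothetical choices that produce switching, not parameters from the paper:

```python
import numpy as np

# Winnerless competition sketch: three units with asymmetric Lotka-Volterra
# coupling take turns being the most active one (0 -> 2 -> 1 -> 0 -> ...).
rng = np.random.default_rng(2)
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])   # hypothetical asymmetric competition matrix
x = np.array([1.0, 0.01, 0.01])     # unit 0 starts as the active motif
dt, steps = 0.01, 30_000
winners = []
for _ in range(steps):
    # Euler step of dx_i/dt = x_i (1 - sum_j rho_ij x_j) plus tiny noise
    x = x + dt * x * (1.0 - rho @ x) + 1e-5 * dt * rng.random(3)
    x = np.maximum(x, 0.0)
    winners.append(int(np.argmax(x)))
order = [w for i, w in enumerate(winners) if i == 0 or w != winners[i - 1]]
print("switching sequence (first entries):", order[:6])
```

Each saddle in the cycle suppresses its predecessor and excites its successor, so activity visits every unit in a fixed order, which is the property the motif-sequencing network exploits.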
Affiliation(s)
- Carlos Calvo Tapia
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza Ciencias 3, 28040 Madrid, Spain
- Ivan Y Tyukin
- University of Leicester, Department of Mathematics, University Road, LE1 7RH, United Kingdom
- Valeri A Makarov
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza Ciencias 3, 28040 Madrid, Spain; Lobachevsky State University of Nizhny Novgorod, Gagarin Ave. 23, 603950 Nizhny Novgorod, Russia
17
Latent Factors Limiting the Performance of sEMG-Interfaces. SENSORS 2018; 18:s18041122. [PMID: 29642410 PMCID: PMC5948532 DOI: 10.3390/s18041122] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/02/2018] [Revised: 04/03/2018] [Accepted: 04/04/2018] [Indexed: 11/17/2022]
Abstract
Recent advances in the recording and real-time analysis of surface electromyographic (sEMG) signals have fostered the use of sEMG human-machine interfaces for controlling personal computers, upper-limb prostheses, and exoskeletons, among others. Despite a relatively high mean performance, sEMG interfaces still exhibit strong variance in the fidelity of gesture recognition across users. Here, we systematically study the latent factors determining the performance of sEMG interfaces in synthetic tests and in an arcade game. We show that the degree of muscle cooperation and the amount of body fatty tissue are the decisive factors in synthetic tests. Our data suggest that these factors can only be adjusted by long-term training, which promotes fine-tuning of the low-level neural circuits driving the muscles. Short-term training has no effect on synthetic tests but significantly increases the game score, implying that it acts at a higher decision-making level that is not relevant for synthetic gestures. We propose a procedure that enables quantification of gesture fidelity in a dynamic gaming environment. For each individual subject, the approach allows identifying "problematic" gestures that decrease gaming performance. This information can be used for optimizing the training strategy and for adapting the signal-processing algorithms to individual users, which could enable a qualitative leap in the development of future sEMG interfaces.