1. Yoshida K, Toyoizumi T. A biological model of nonlinear dimensionality reduction. Sci Adv 2025; 11:eadp9048. [PMID: 39908371] [PMCID: PMC11801247] [DOI: 10.1126/sciadv.adp9048]
Abstract
Obtaining appropriate low-dimensional representations from high-dimensional sensory inputs in an unsupervised manner is essential for straightforward downstream processing. Although nonlinear dimensionality reduction methods such as t-distributed stochastic neighbor embedding (t-SNE) have been developed, how they might be implemented in simple biological circuits remains unclear. Here, we develop a biologically plausible dimensionality reduction algorithm compatible with t-SNE, which uses a simple three-layer feedforward network mimicking the Drosophila olfactory circuit. The proposed learning rule, described as three-factor Hebbian plasticity, performs comparably to t-SNE on datasets such as entangled rings and MNIST. We further show that the algorithm could be at work in the Drosophila olfactory circuit by analyzing multiple experimental datasets from previous studies. Lastly, we suggest that the algorithm also benefits association learning between inputs and rewards, allowing these associations to generalize to other inputs not yet associated with rewards.
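The abstract does not spell out the update rule, but a generic three-factor Hebbian step in a three-layer network can be sketched as follows. The `modulator` third factor, the layer sizes, the ReLU rates, and the learning rate are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Minimal sketch: three-factor Hebbian plasticity in a three-layer
# feedforward network (input -> expansion -> low-dimensional output),
# mirroring the circuit motif described in the abstract. `modulator`
# is a hypothetical stand-in for the third factor.
rng = np.random.default_rng(0)
n_in, n_exp, n_out = 50, 200, 2            # 2D output, as in a t-SNE embedding
W1 = rng.normal(0.0, 0.1, (n_exp, n_in))   # input -> expansion layer
W2 = rng.normal(0.0, 0.1, (n_out, n_exp))  # expansion -> output layer
eta = 0.01

def plasticity_step(x, modulator):
    """Return output rates and a pre * post * third-factor weight update."""
    h = np.maximum(W1 @ x, 0.0)            # expansion-layer rates
    y = W2 @ h                             # low-dimensional output rates
    dW2 = eta * modulator * np.outer(y, h) # post x pre x modulator
    return y, dW2

x = rng.random(n_in)                        # one high-dimensional input
y, dW2 = plasticity_step(x, modulator=0.5)  # modulator value is illustrative
W2 += dW2
```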
Affiliation(s)
- Kensuke Yoshida
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

2. Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353] [PMCID: PMC10244748] [DOI: 10.3389/fncom.2023.1136010]
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple levels of abstraction. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, so alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, and that these are equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
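As a rough illustration of the proposed mechanism, the sketch below applies a rate-based differential Hebbian step: apical feedback nudges the postsynaptic rate, and the weight change is the presynaptic rate times that change. All names, shapes, and constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a differential Hebbian update: the weight change is
# the presynaptic rate times the *change* in the postsynaptic rate
# induced by apical feedback (a rate-based analogue of STDP).
rng = np.random.default_rng(1)
n_pre = 20
w = rng.normal(0.0, 0.1, n_pre)   # weights onto one neuron
eta = 0.05

pre = rng.random(n_pre)           # presynaptic rates
post = float(w @ pre)             # somatic (bottom-up) rate
apical = 0.3                      # top-down feedback signal

post_nudged = post + apical       # feedback changes the firing rate
dpost = post_nudged - post        # rate change driven by feedback
w += eta * pre * dpost            # differential Hebbian step
```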
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland

3. Yoon HG, Kim P. STDP-based associative memory formation and retrieval. J Math Biol 2023; 86:49. [PMID: 36826758] [DOI: 10.1007/s00285-023-01883-y]
Abstract
Spike-timing-dependent plasticity (STDP) is a biological process in which the precise order and timing of neuronal spikes affect the degree of synaptic modification. While much research has focused on the role of STDP in neural coding, its functional implications at the macroscopic level in the brain have not yet been fully explored. In this work, we propose a neurodynamical model based on STDP that supports the storage and retrieval of a group of associative memories. We show that the function of STDP at the macroscopic level is to form a "memory plane" in the neural state space that dynamically encodes high-dimensional data. We derive the analytic relation between the input, the memory plane, and the induced macroscopic neural oscillations around the memory plane. Such a plane produces a limit cycle in reaction to a similar memory cue, which can be used to retrieve the original input.
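For readers unfamiliar with the underlying rule, a standard pairwise STDP kernel of the kind such models build on can be written as below. The amplitudes and time constants are illustrative; the paper's macroscopic "memory plane" analysis is a consequence of dynamics like these and is not reproduced here.

```python
import numpy as np

# Standard pairwise STDP kernel: potentiation when the presynaptic spike
# precedes the postsynaptic spike, depression otherwise.
A_plus, A_minus = 0.01, 0.012      # potentiation/depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0.0:               # pre before post -> potentiate
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)  # post before pre -> depress

print(stdp_dw(5.0), stdp_dw(-5.0))  # ~ +0.0078, ~ -0.0093
```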
Affiliation(s)
- Hong-Gyu Yoon
- Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea
- Pilwon Kim
- Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City, 44919, Republic of Korea

4. Birrell S, Abdulali A, Iida F. Reach Space Analysis of Baseline Differential Extrinsic Plasticity Control. Front Neurorobot 2022; 16:848084. [PMID: 35721277] [PMCID: PMC9198443] [DOI: 10.3389/fnbot.2022.848084]
Abstract
The neuroplasticity rule Differential Extrinsic Plasticity (DEP) has been studied in the context of goal-free simulated agents, producing realistic-looking, environmentally aware behaviors, but no successful control mechanism has yet been implemented for intentional behavior. The goal of this paper is to determine whether "short-circuited DEP," a simpler, open-loop variant, can generate desired trajectories in a robot arm. DEP dynamics, both transients and limit cycles, are poorly understood. Experiments were performed to elucidate these dynamics and to test the ability of a robot to leverage them for target reaching and circular motions.
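A DEP-style update, in the spirit of the Der and Martius rule that this work builds on, can be sketched as a correlation of sensor-derivative signals. The identity inverse model, shapes, and constants below are assumptions, and the open-loop "short-circuited" variant and the robot-arm setup are not modeled.

```python
import numpy as np

# Minimal sketch of a DEP-style update: controller weights grow in
# proportion to correlations between time derivatives of sensor values,
# approximated here by finite differences.
rng = np.random.default_rng(2)
n_sensors = n_motors = 6
C = rng.normal(0.0, 0.1, (n_motors, n_sensors))  # controller matrix
eta, dt = 0.01, 0.01

x_prev = rng.random(n_sensors)     # sensor reading at t - dt
x_now = rng.random(n_sensors)      # sensor reading at t

dx = (x_now - x_prev) / dt         # sensor velocity
dy_hat = dx                        # identity inverse model (assumption)
C += eta * np.outer(dy_hat, dx)    # differential extrinsic update
y = np.tanh(C @ x_now)             # motor command from updated controller
```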

5. Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse. Nat Commun 2022; 13:2811. [PMID: 35589710] [PMCID: PMC9120471] [DOI: 10.1038/s41467-022-30432-2]
Abstract
Neuromorphic computing targets the hardware embodiment of neural networks, and the device implementation of individual neurons and synapses has attracted considerable attention. The emulation of synaptic plasticity has shown promising results since the advent of memristors. However, neuronal intrinsic plasticity, which is involved in the learning process through interactions with synaptic plasticity, has rarely been demonstrated. Synaptic and intrinsic plasticity occur concomitantly during learning, suggesting the need for their simultaneous implementation. Here, we report a neurosynaptic device that mimics synaptic and intrinsic plasticity concomitantly in a single cell. A threshold switch and a phase change memory are merged in a threshold switch-phase change memory device. Neuronal intrinsic plasticity is demonstrated by the bottom threshold switch layer, which resembles the modulation of firing frequency in a biological neuron. Synaptic plasticity is introduced through the nonvolatile switching of the top phase change layer. Intrinsic and synaptic plasticity are emulated simultaneously in a single cell to establish a positive feedback between them. A positive feedback learning loop, which mimics the retraining process in biological systems, is implemented in a threshold switch-phase change memory array for accelerated training.

Synaptic plasticity and neuronal intrinsic plasticity are both involved in the learning process of hardware artificial neural networks. Here, Lee et al. integrate a threshold switch and a phase change memory in a single device, which emulates biological synaptic and intrinsic plasticity simultaneously.
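The described feedback loop can be caricatured in a few lines of software: a threshold variable (intrinsic plasticity) and a weight (synaptic plasticity) update together, so potentiation raises excitability, which in turn accelerates further potentiation. The constants are made up for illustration; this is a behavioral sketch, not a device model.

```python
# Behavioral sketch of the positive feedback loop: a firing threshold
# (intrinsic plasticity, threshold switch layer) and a synaptic weight
# (phase change layer) co-evolve, so each update amplifies the next.
theta = 1.0                              # firing threshold (intrinsic variable)
w = 0.9                                  # synaptic weight
x = 1.2                                  # input drive
for step in range(5):
    rate = max(w * x - theta, 0.0)       # firing-frequency proxy
    w += 0.10 * rate                     # synaptic potentiation
    theta -= 0.05 * rate                 # intrinsic excitability increase
    print(f"step {step}: rate={rate:.3f}  w={w:.3f}  theta={theta:.3f}")
```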

6. Granato G, Cartoni E, Da Rold F, Mattera A, Baldassarre G. Integrating unsupervised and reinforcement learning in human categorical perception: A computational model. PLoS One 2022; 17:e0267838. [PMID: 35536843] [PMCID: PMC9089926] [DOI: 10.1371/journal.pone.0267838]
Abstract
Categorical perception identifies a tuning of human perceptual systems that can occur during the execution of a categorisation task. Although experimental studies and computational models suggest that this tuning is influenced by task-independent effects (e.g., based on Hebbian and unsupervised learning, UL) and task-dependent effects (e.g., based on reward signals and reinforcement learning, RL), no model has studied the UL/RL interaction during the emergence of categorical perception. Here we investigate the effects of this interaction, proposing a system-level neuro-inspired computational architecture in which a perceptual component integrates UL and RL processes. The model was tested with a categorisation task, and the results show that a balanced mix of unsupervised and reinforcement learning leads to the emergence of suitable categorical perception and the best performance in the task. Indeed, an excessive unsupervised-learning contribution tends to miss task-relevant features, while an excessive reinforcement-learning contribution initially learns slowly and then reaches sub-optimal performance. These results are consistent with experimental evidence regarding categorical activations of extrastriate cortices in healthy conditions. Finally, the results produced by the two extreme cases of our model can explain the existence of several factors that may lead to sensory alterations in autistic people.
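The UL/RL balance the model studies can be illustrated with a single mixed weight update, where one coefficient interpolates between a pure Hebbian term and a reward-modulated term. The sketch below is an assumed reduction of the full architecture, not its implementation.

```python
import numpy as np

# Minimal sketch: one perceptual weight update mixing an unsupervised
# Hebbian term with a reward-modulated (RL) term, controlled by `alpha`.
rng = np.random.default_rng(3)
n_in, n_feat = 30, 10
W = rng.normal(0.0, 0.1, (n_feat, n_in))
eta, alpha = 0.01, 0.5             # alpha = 1: pure UL; alpha = 0: pure RL

x = rng.random(n_in)               # stimulus
h = np.maximum(W @ x, 0.0)         # perceptual features
reward = 1.0                       # task feedback (e.g., correct category)

hebb = np.outer(h, x)              # task-independent Hebbian term
W += eta * (alpha * hebb + (1.0 - alpha) * reward * hebb)
```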
Affiliation(s)
- Giovanni Granato
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy
- School of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, United Kingdom
- Emilio Cartoni
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy
- Federico Da Rold
- Body Action Language Lab, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy
- Andrea Mattera
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy
- Gianluca Baldassarre
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy

7. Triche A, Maida AS, Kumar A. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks. Neural Netw 2022; 151:16-33. [DOI: 10.1016/j.neunet.2022.03.021]

8. A generative spiking neural-network model of goal-directed behaviour and one-step planning. PLoS Comput Biol 2020; 16:e1007579. [PMID: 33290414] [PMCID: PMC7748287] [DOI: 10.1371/journal.pcbi.1007579]
Abstract
In mammals, goal-directed and planning processes support the flexible behaviour used to face new situations that cannot be tackled through more efficient but rigid habitual behaviours. Within the Bayesian modelling approach to brain and behaviour, models have been proposed that perform planning as probabilistic inference, but this approach encounters a crucial problem: explaining how such inference might be implemented in spiking brain networks. Recently, the literature has proposed some models that address this problem through recurrent spiking neural networks able to internally simulate state trajectories, the core function at the basis of planning. However, the proposed models have relevant limitations that make them biologically implausible: their world model is trained 'off-line' before the target tasks are solved, and they rely on supervised learning procedures that are biologically and ecologically implausible. Here we propose two novel hypotheses on how the brain might overcome these problems and operationalise them in a novel architecture pivoting on a spiking recurrent neural network. The first hypothesis allows the architecture to learn the world model in parallel with its use for planning: for this purpose, a new arbitration mechanism decides when to explore, to learn the world model, or when to exploit it, to plan, based on the entropy of the world model itself. The second hypothesis allows the architecture to use an unsupervised learning process to learn the world model by observing the effects of actions. The architecture is validated by reproducing and accounting for the learning profiles and reaction times of human participants learning to solve a visuomotor task that is new to them. Overall, the architecture represents the first instance of a model bridging probabilistic planning and spiking processes with a degree of autonomy analogous to that of real organisms.

Goal-directed behaviour relies on brain processes that support planning actions based on their expected consequences before performing them in the environment. An important computational modelling approach proposes that the brain performs goal-directed processes on the basis of probability distributions and computations on them. A key challenge for this approach is to explain how these probabilistic processes can rely on the spiking processes of the brain. The literature has recently proposed some models that do so by 'thinking ahead' about alternative possible action outcomes based on low-level neuronal stochastic events. However, these models have limited autonomy, as they must learn how the environment works (a 'world model') before solving the tasks and use a biologically implausible learning process that requires an 'external teacher' to tell their internal units how to respond. Here we present a novel architecture proposing how organisms might overcome these challenging problems. First, the architecture can decide whether to explore, to learn the world model, or to plan, using that model, by evaluating how confident it is in the model's knowledge. Second, the architecture can autonomously learn the world model from experience. The architecture thus represents a first fully autonomous planning model relying on a spiking neural network.
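The first hypothesis, entropy-based arbitration, can be illustrated with a tabular stand-in: explore when the world model's predicted next-state distribution is high-entropy, plan with the model otherwise. The distribution and threshold below are assumptions, not the spiking architecture itself.

```python
import numpy as np

# Minimal sketch of entropy-based arbitration between exploration
# (to improve the world model) and planning (to exploit it).
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0.0]
    return float(-(p * np.log(p)).sum())

p_next = np.array([0.25, 0.25, 0.25, 0.25])  # world-model prediction (uncertain)
threshold = 1.0                              # nats; illustrative

mode = "explore" if entropy(p_next) > threshold else "plan"
print(mode, round(entropy(p_next), 3))       # explore 1.386
```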

9. Locally connected spiking neural networks for unsupervised feature learning. Neural Netw 2019; 119:332-340. [PMID: 31499357] [DOI: 10.1016/j.neunet.2019.08.016]
Abstract
In recent years, spiking neural networks (SNNs) have demonstrated great success on various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn features from different locations of the input space. These locally connected SNNs (LC-SNNs) manifest key topological features of the spatial interactions of biological neurons. We explore a biologically inspired n-gram classification approach that allows parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets; the results respectively match state-of-the-art performance and are the first reported to date. LC-SNNs have the advantage of fast convergence to a dataset representation and require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests show that LC-SNNs degrade gracefully despite the random deletion of large numbers of synapses and neurons. Our results were obtained using the BindsNET library, which enables efficient machine learning implementations of spiking neural networks.
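The locally connected, competitive layout can be caricatured without spiking machinery: each image patch drives its own bank of units, and the winning unit moves its weights toward the patch. The paper's experiments use spiking neurons via BindsNET; the rate-based stand-in below only illustrates the connectivity and competition pattern, with all sizes and rates assumed.

```python
import numpy as np

# Rate-based caricature of a locally connected layer with
# winner-take-all competition (a stand-in for STDP + inhibition).
rng = np.random.default_rng(4)
img = rng.random((28, 28))                 # stand-in for an MNIST image
patch, stride, n_filters = 7, 7, 8
W = rng.random((4, 4, n_filters, patch * patch))  # one filter bank per location

for pi, i in enumerate(range(0, 28 - patch + 1, stride)):
    for pj, j in enumerate(range(0, 28 - patch + 1, stride)):
        x = img[i:i + patch, j:j + patch].ravel()
        a = W[pi, pj] @ x                  # local responses
        win = int(np.argmax(a))            # inhibitory competition
        W[pi, pj, win] += 0.05 * (x - W[pi, pj, win])  # winner learns the patch
```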