1
Lesuis SL, Park S, Hoorn A, Rashid AJ, Mocle AJ, Salter EW, Vislavski S, Gray MT, Torelli AM, DeCristofaro A, Driever WPF, van der Stelt M, Zweifel LS, Collingridge GL, Lefebvre JL, Walters BJ, Frankland PW, Hill MN, Josselyn SA. Stress disrupts engram ensembles in lateral amygdala to generalize threat memory in mice. Cell 2025;188:121-140.e20. PMID: 39549697; PMCID: PMC11726195; DOI: 10.1016/j.cell.2024.10.034.
Abstract
Stress induces aversive memory overgeneralization, a hallmark of many psychiatric disorders. Memories are encoded by a sparse ensemble of neurons active during an event (an engram ensemble). We examined the molecular and circuit processes mediating stress-induced threat memory overgeneralization in mice. Stress, acting via corticosterone, increased the density of engram ensembles supporting a threat memory in lateral amygdala, and this engram ensemble was reactivated by both specific and non-specific retrieval cues (generalized threat memory). Furthermore, we identified a critical role for endocannabinoids, acting retrogradely on parvalbumin-positive (PV+) lateral amygdala interneurons in the formation of a less-sparse engram and memory generalization induced by stress. Glucocorticoid receptor antagonists, endocannabinoid synthesis inhibitors, increasing PV+ neuronal activity, and knocking down cannabinoid receptors in lateral amygdala PV+ neurons restored threat memory specificity and a sparse engram in stressed mice. These findings offer insights into stress-induced memory alterations, providing potential therapeutic avenues for stress-related disorders.
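The core quantitative claim, that a denser (less sparse) engram is reactivated even by non-specific cues, can be illustrated with a toy simulation. The sketch below is our own illustration, not the paper's analysis; the population size, cue size, and drive threshold are arbitrary assumptions.

```python
# Toy illustration (not the paper's model): if a fixed number of co-active
# engram cells is enough to drive the threat response, then a denser engram
# is reactivated even by non-specific cues that hit neurons at random.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000     # toy principal-neuron population (size is an assumption)
DRIVE = 150    # engram cells that must co-fire to trigger the response

def p_generalize(engram_frac, cue_frac=0.10, trials=500):
    """Probability that a random, non-specific cue reactivates the engram."""
    engram_size = int(engram_frac * N)
    cue_size = int(cue_frac * N)
    hits = 0
    for _ in range(trials):
        engram = rng.choice(N, engram_size, replace=False)
        cue = rng.choice(N, cue_size, replace=False)   # random neurons fire
        hits += np.intersect1d(engram, cue).size >= DRIVE
    return hits / trials

# Sparse engram (unstressed) vs. dense engram (stressed):
for frac in (0.05, 0.20):
    print(f"engram density {frac:.0%}: "
          f"P(response to novel cue) = {p_generalize(frac):.2f}")
```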
Affiliation(s)
- Sylvie L Lesuis
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Cellular and Computational Neuroscience, Swammerdam Institute for Life Science, Amsterdam Neuroscience, University of Amsterdam, 1090 GE Amsterdam, the Netherlands
- Sungmo Park
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada
- Annelies Hoorn
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Asim J Rashid
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada
- Andrew J Mocle
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Eric W Salter
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada; Lunenfeld-Tanenbaum Research Institute, Mount Sinai Hospital, and TANZ Centre for Research in Neurodegenerative Diseases, 600 University Avenue, Toronto, ON M5G 1X5, Canada
- Stefan Vislavski
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada
- Madison T Gray
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Molecular Genetics, University of Toronto, Toronto, ON, Canada
- Angelica M Torelli
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada
- Antonietta DeCristofaro
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada
- Wouter P F Driever
- Department of Molecular Physiology, Leiden Institute of Chemistry, Leiden University and Oncode Institute, Einsteinweg 55, Leiden 2333 CC, the Netherlands
- Mario van der Stelt
- Department of Molecular Physiology, Leiden Institute of Chemistry, Leiden University and Oncode Institute, Einsteinweg 55, Leiden 2333 CC, the Netherlands
- Larry S Zweifel
- Department of Pharmacology, University of Washington, Seattle, WA 98195, USA; Department of Psychiatry and Behavioral Sciences, University of Washington, 2815 Eastlake Ave E Suite 200, Seattle, WA 98102, USA
- Graham L Collingridge
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada; Lunenfeld-Tanenbaum Research Institute, Mount Sinai Hospital, and TANZ Centre for Research in Neurodegenerative Diseases, 600 University Avenue, Toronto, ON M5G 1X5, Canada
- Julie L Lefebvre
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Molecular Genetics, University of Toronto, Toronto, ON, Canada
- Brandon J Walters
- Department of Cell and Systems Biology, University of Toronto Mississauga, 3359 Mississauga Rd, Mississauga, ON L5L 1C6, Canada
- Paul W Frankland
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Matthew N Hill
- Hotchkiss Brain Institute, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
- Sheena A Josselyn
- Program in Neurosciences & Mental Health, Hospital for Sick Children, 555 University Ave., Toronto, ON M5G 1X8, Canada; Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 1A8, Canada
2
Osório M, Sa-Couto L, Wichert A. Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data? Biological Cybernetics 2024;118:267-276. PMID: 39249119; PMCID: PMC11588804; DOI: 10.1007/s00422-024-00995-y.
Abstract
It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional, and they consequently degrade the classification performance of traditional machine learning models due to the "curse of dimensionality". In tasks with vast amounts of labeled data, deep networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems able to solve the problem with few layers. In this work, we hypothesize that it does so by using Hebbian learning. Indeed, the Hebbian-like learning rule of Restricted Boltzmann Machines learns input patterns asymmetrically: it exclusively learns the correlations between non-zero values and ignores the zeros, which make up the vast majority of the input dimensionality. By ignoring the zeros, the "curse of dimensionality" problem can be avoided. To test this hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier against backpropagation-trained networks. The experiments confirm our intuition: the Restricted Boltzmann Machine generalizes well, while the neural networks trained with backpropagation overfit the training data.
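To make the asymmetry concrete, here is a minimal sketch (our own, with illustrative sizes; only the positive, Hebbian phase of contrastive divergence is shown) of why zero components contribute nothing to the weight update:

```python
# Sketch of the Hebbian (positive) phase of an RBM update on a sparse binary
# input: the update is the outer product v0 h0^T, so every row that belongs
# to a zero input component is exactly zero. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 1_000, 64
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A sparse binary input: 20 active bits out of 1,000 (~2%).
v0 = np.zeros(n_visible)
v0[rng.choice(n_visible, 20, replace=False)] = 1.0

h0 = sigmoid(v0 @ W)          # hidden activation probabilities
dW = np.outer(v0, h0)         # Hebbian phase of contrastive divergence

# Only the 20 rows of active inputs are updated; the other 980 dimensions
# are ignored (the negative phase subtracts an analogous outer product of
# reconstructed activities).
print("rows with a nonzero update:", np.count_nonzero(dW.sum(axis=1)))
```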
Affiliation(s)
- Maria Osório
- Department of Computer Science and Engineering, INESC-ID & Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal
- Luis Sa-Couto
- Department of Computer Science and Engineering, INESC-ID & Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal
- Andreas Wichert
- Department of Computer Science and Engineering, INESC-ID & Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal
3
Ravichandran N, Lansner A, Herman P. Spiking representation learning for associative memories. Front Neurosci 2024;18:1439414. PMID: 39371606; PMCID: PMC11450452; DOI: 10.3389/fnins.2024.1439414.
Abstract
Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. The brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and large-scale real-world datasets has been challenging, especially compared to their non-spiking deep learning counterparts. The critical capability required of SNNs is to learn distributed representations from data and use them for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations by leveraging Hebbian synaptic and activity-dependent structural plasticity, with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant to attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
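As a scale model of the two ingredients, Poisson spiking units and Hebbian recurrent weights for associative recall, consider the sketch below. It is our own drastically simplified illustration (a single recurrent read-out step, clipped Hebbian weights); the paper's columnar architecture and plasticity rules are far richer. Sizes, rates, and the threshold are assumptions.

```python
# Minimal sketch: sparse patterns stored in clipped Hebbian recurrent
# weights; units spike as Poisson generators (~1 Hz background, ~100 Hz
# active); a half-cue is completed by one recurrent read-out step.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1.0                      # 200 units, 1 s observation window
R_LO, R_HI = 1.0, 100.0              # background vs. active firing rate (Hz)

# Store three sparse binary patterns (~10% active) Hebbian-style.
patterns = (rng.random((3, N)) < 0.1).astype(float)
W = np.clip(sum(np.outer(p, p) for p in patterns), 0, 1)
np.fill_diagonal(W, 0.0)

def spike_counts(active):
    """Poisson spike counts over the window, given which units are active."""
    rates = np.where(active > 0, R_HI, R_LO)
    return rng.poisson(rates * T)

# Cue with roughly half of pattern 0, then read out through W.
cue = patterns[0] * (rng.random(N) < 0.5)
drive = W @ spike_counts(cue)                      # Hebbian recurrent input
recalled = (drive >= 0.5 * cue.sum() * R_HI * T).astype(float)

hit = (recalled * patterns[0]).sum() / patterns[0].sum()
spurious = (recalled * (1 - patterns[0])).sum()
print(f"completed {hit:.0%} of the stored pattern; {spurious:.0f} spurious units")
```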
Affiliation(s)
- Naresh Ravichandran
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden
- Swedish e-Science Research Centre (SeRC), Stockholm, Sweden
4
Sa-Couto L, Wichert A. Competitive learning to generate sparse representations for associative memory. Neural Netw 2023;168:32-43. PMID: 37734137; DOI: 10.1016/j.neunet.2023.09.005.
Abstract
One of the most well-established brain principles, Hebbian learning, has led to the theoretical concept of neural assemblies, and many interesting brain theories have grown out of it. Palm's work implements this concept through multiple binary Willshaw associative memories, in a model that not only has wide cognitive explanatory power but also makes neuroscientific predictions. Yet Willshaw's associative memory can only achieve top capacity when the stored vectors are extremely sparse (the number of active bits grows at most logarithmically with the vector's length). This strict requirement makes it difficult to apply any model that uses this associative memory, like Palm's, to real data; hence most works apply the memory to optimal, randomly generated codes that do not represent any information. This issue creates the need for encoders that can take real data and produce sparse representations, a problem also raised by Barlow's efficient coding principle. In this work, we propose a biologically constrained network that encodes images into codes suitable for Willshaw's associative memory. The network is organized into groups of neurons that specialize on local receptive fields and learn through a competitive scheme. After conducting auto- and hetero-association experiments on two visual data sets, we conclude that our network not only beats sparse coding baselines but also comes close to the performance achieved using optimal random codes.
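For context, the Willshaw memory at the center of this problem is only a few lines of code. The sketch below is the standard textbook construction (ours, not the authors' code): clipped Hebbian storage and threshold retrieval, with illustrative parameter choices.

```python
# Standard binary Willshaw associative memory: clipped Hebbian storage
# (W is the OR of pattern outer products) and retrieval by firing every
# unit that receives input from all active cue bits.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 1_000, 10, 200            # n-bit vectors, k active bits, m patterns

codes = np.zeros((m, n), dtype=np.uint8)
for row in codes:                   # sparse random codes (k of n bits set)
    row[rng.choice(n, k, replace=False)] = 1

W = np.zeros((n, n), dtype=np.uint8)
for x in codes:                     # storage: W |= x x^T
    W |= np.outer(x, x)

def retrieve(cue):
    """A unit fires iff it is connected to every active cue bit."""
    return (W @ cue.astype(int) >= cue.sum()).astype(np.uint8)

# Auto-association from a partial cue: drop 4 of the 10 active bits.
x = codes[0]
cue = x.copy()
cue[np.flatnonzero(x)[:4]] = 0
y = retrieve(cue)
print("recovered:", int((y * x).sum()), "of", k, "active bits;",
      "spurious:", int((y * (1 - x)).sum()))
```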
Affiliation(s)
- Luis Sa-Couto
- INESC-ID & Instituto Superior Técnico, University of Lisbon, Av. Rovisco Pais 1, Lisbon, 1049-001, Portugal
- Andreas Wichert
- INESC-ID & Instituto Superior Técnico, University of Lisbon, Av. Rovisco Pais 1, Lisbon, 1049-001, Portugal
5
Sa-Couto L, Wichert A. Self-organizing maps on "what-where" codes towards fully unsupervised classification. Biological Cybernetics 2023. PMID: 37188974; DOI: 10.1007/s00422-023-00963-y.
Abstract
Interest in unsupervised learning architectures has been rising. Depending on large labeled data sets to obtain a well-performing classification system is not only biologically unnatural but also costly. Therefore, both the deep learning community and the more biologically inspired modeling community have focused on proposing unsupervised techniques that can produce adequate hidden representations, which can then be fed to a simpler supervised classifier. Despite great success with this approach, an ultimate dependence on a supervised model remains, which forces the number of classes to be known beforehand and makes the system depend on labels to extract concepts. To overcome this limitation, recent work has shown how a self-organizing map (SOM) can be used as a completely unsupervised classifier. However, to succeed it required deep learning techniques to generate high-quality embeddings. The purpose of this work is to show that we can use our previously proposed What-Where encoder in tandem with the SOM to get an end-to-end unsupervised system that is Hebbian. Such a system requires no labels to train, nor knowledge of which classes exist beforehand. It can be trained online and adapt to new classes that may emerge. As in the original work, we use the MNIST data set to run an experimental analysis and verify that the system achieves accuracies similar to the best reported thus far. Furthermore, we extend the analysis to the more difficult Fashion-MNIST problem and conclude that the system still performs well.
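A bare-bones SOM (standard Kohonen learning on toy codes; our sketch, not the paper's What-Where pipeline) shows the unsupervised classification idea: the map is trained without labels, and an input's "class" is simply its best-matching unit.

```python
# Minimal Kohonen SOM: each unit's weight vector drifts toward the inputs it
# wins, its neighbours follow, and units come to stand for emergent classes.
import numpy as np

rng = np.random.default_rng(0)
GRID, DIM = 10, 64                  # 10x10 map over 64-d input codes
W = rng.random((GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def train(X, epochs=10, lr0=0.5, sigma0=3.0):
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5)  # shrinking neighbourhood
        for x in rng.permutation(X):
            bmu = np.unravel_index(                  # best-matching unit
                np.argmin(((W - x) ** 2).sum(axis=-1)), (GRID, GRID))
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            W += lr * h * (x - W)                    # pull toward input

def assign(x):
    """Unsupervised 'class' of x = coordinates of its best-matching unit."""
    return np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=-1)),
                            (GRID, GRID))

# Toy sparse codes from three latent classes; the SOM separates them.
centers = (rng.random((3, DIM)) < 0.2).astype(float)
X = np.vstack([c + 0.1 * rng.standard_normal((100, DIM)) for c in centers])
train(X)
print({i: assign(centers[i]) for i in range(3)})
```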
Affiliation(s)
- Luis Sa-Couto
- Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal
- Andreas Wichert
- Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico - University of Lisbon, Av. Prof. Dr. Aníbal Cavaco Silva, Porto Salvo, 2744-016, Lisbon, Portugal
8
Sa-Couto L, Wichert A. Simple Convolutional-Based Models: Are They Learning the Task or the Data? Neural Comput 2021;33:3334-3350. PMID: 34710905; DOI: 10.1162/neco_a_01446.
Abstract
Convolutional neural networks (CNNs) evolved from Fukushima's neocognitron model, which is based on Hubel and Wiesel's ideas about the early stages of the visual cortex. Unlike other branches of neocognitron-based models, the typical CNN is based on end-to-end supervised learning by backpropagation and removes the focus from built-in invariance mechanisms, using pooling not as a way to tolerate small shifts but as a regularization tool that decreases model complexity. These properties of end-to-end supervision and structural flexibility allow the typical CNN to become highly tuned to the training data, leading to extremely high accuracies on typical visual pattern recognition data sets. However, in this work we hypothesize that there is a flip side to this capability: a hidden overfitting. More concretely, a supervised, backpropagation-based CNN will outperform a neocognitron/map transformation cascade (MTC) when trained and tested inside the same data set. Yet if we take both trained models and test them on the same task but on another data set (without retraining), the overfitting appears. Other neocognitron descendants, like the What-Where model, go in a different direction: learning remains unsupervised, but more structure is added to capture invariance to typical changes. Knowing that, we further hypothesize that if we repeat the same experiments with this model, the lack of supervision may make it worse than the typical CNN inside the same data set, but the added structure will make it generalize even better to another one. To put our hypothesis to the test, we choose the simple task of handwritten digit classification and take two well-known data sets for it: MNIST and ETL-1. To make the two data sets as similar as possible, we experiment with several types of preprocessing. Regardless of the type in question, the results align exactly with our expectations.
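The experimental protocol is easy to state in code. Below is a schematic sketch of it as we read it (the logistic regression and the synthetic digits are stand-ins for the paper's CNN/MTC/What-Where models and for MNIST/ETL-1): train once on data set A and score on both test sets without retraining, so the score gap exposes data-set, rather than task, overfitting.

```python
# Cross-data-set protocol: fit on A's training split only, then compare
# accuracy on A's and B's test splits. A large in-set/cross-set gap means
# the model learned the data set, not just the task.
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in classifier

def cross_dataset_gap(model, train_a, test_a, test_b):
    X, y = train_a
    model.fit(X, y)
    acc_a = model.score(*test_a)   # same distribution as training
    acc_b = model.score(*test_b)   # same task, different data set
    return acc_a, acc_b

# Synthetic stand-ins for MNIST/ETL-1: same 10-class task, different "style".
rng = np.random.default_rng(0)
protos_a = rng.standard_normal((10, 64))                   # data set A style
protos_b = protos_a + 1.5 * rng.standard_normal((10, 64))  # data set B style

def toy_digits(protos, n=500):
    y = rng.integers(0, 10, n)
    return protos[y] + 0.5 * rng.standard_normal((n, 64)), y

train_a, test_a = toy_digits(protos_a), toy_digits(protos_a)
test_b = toy_digits(protos_b)
acc_a, acc_b = cross_dataset_gap(LogisticRegression(max_iter=1000),
                                 train_a, test_a, test_b)
print(f"in-set {acc_a:.2f} vs. cross-set {acc_b:.2f} "
      f"(gap {acc_a - acc_b:.2f} = hidden overfitting)")
```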
Affiliation(s)
- Luis Sa-Couto
- Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico, University of Lisbon, 2744-016 Porto Salvo, Portugal
- Andreas Wichert
- Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico, University of Lisbon, 2744-016 Porto Salvo, Portugal