1. Yang L, Liu W, Shi L, Wu J, Zhang W, Chuang YA, Redding-Ochoa J, Kirkwood A, Savonenko AV, Worley PF. NMDA Receptor-Arc Signaling Is Required for Memory Updating and Is Disrupted in Alzheimer's Disease. Biol Psychiatry 2023;94:706-720. [PMID: 36796600] [PMCID: PMC10423741] [DOI: 10.1016/j.biopsych.2023.02.008]
Abstract
BACKGROUND: Memory deficits are central to many neuropsychiatric diseases. During acquisition of new information, memories can become vulnerable to interference, yet mechanisms that underlie interference are unknown.
METHODS: We describe a novel transduction pathway that links the NMDA receptor (NMDAR) to AKT signaling via the immediate early gene Arc and evaluate its role in memory. The signaling pathway is validated using biochemical tools and transgenic mice, and function is evaluated in assays of synaptic plasticity and behavior. The translational relevance is evaluated in human postmortem brain.
RESULTS: Arc is dynamically phosphorylated by CaMKII (calcium/calmodulin-dependent protein kinase II) and binds the NMDAR subunits NR2A/NR2B and a previously unstudied PI3K (phosphoinositide 3-kinase) adapter p55PIK (PIK3R3) in vivo in response to novelty or tetanic stimulation in acute slices. NMDAR-Arc-p55PIK recruits p110α PI3K and mTORC2 (mechanistic target of rapamycin complex 2) to activate AKT. NMDAR-Arc-p55PIK-PI3K-mTORC2-AKT assembly occurs within minutes of exploratory behavior and localizes to sparse synapses throughout hippocampal and cortical regions. Studies using conditional (Nestin-Cre) p55PIK deletion mice indicate that NMDAR-Arc-p55PIK-PI3K-mTORC2-AKT functions to inhibit GSK3 and mediates input-specific metaplasticity that protects potentiated synapses from subsequent depotentiation. p55PIK conditional knockout mice perform normally in multiple behaviors, including working memory and long-term memory tasks, but exhibit deficits indicative of increased vulnerability to interference in both short-term and long-term paradigms. The NMDAR-AKT transduction complex is reduced in postmortem brain of individuals with early Alzheimer's disease.
CONCLUSIONS: A novel function of Arc mediates synapse-specific NMDAR-AKT signaling and metaplasticity that contributes to memory updating and is disrupted in human cognitive disease.
Affiliation(s)
- Liuqing Yang, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Wenxue Liu, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Linyuan Shi, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jing Wu, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Wenchi Zhang, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Yang-An Chuang, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Javier Redding-Ochoa, Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Alfredo Kirkwood, Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Alena V. Savonenko, Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Paul F. Worley, Solomon H. Snyder Department of Neuroscience and Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, Maryland
2. Aziz AZB, Adams J, Elhabian S. Progressive DeepSSM: Training Methodology for Image-to-Shape Deep Models. In: Shape in Medical Imaging: International Workshop, ShapeMI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings. 2023;14350:157-172. [PMID: 38745942] [PMCID: PMC11090218] [DOI: 10.1007/978-3-031-46914-5_13]
Abstract
Statistical shape modeling (SSM) is an enabling quantitative tool for studying anatomical shapes in various medical applications. However, constructing SSMs directly from 3D images remains challenging. Recent deep learning methods have paved the way for reducing the substantial preprocessing otherwise required to build SSMs from unsegmented images, but the performance of these models still falls short. Inspired by multiscale/multiresolution learning, we propose a new training strategy, progressive DeepSSM, for image-to-shape deep learning models. Training proceeds over multiple scales, each scale utilizing the output of the previous one, so the model learns coarse shape features at the earlier scales and progressively finer shape details at the later ones. We leverage shape priors via segmentation-guided multi-task learning and employ a deep supervision loss to ensure learning at each scale. Experiments show the superiority of models trained with the proposed strategy, both quantitatively and qualitatively. This training methodology can be adopted by existing deep learning methods that infer statistical representations of anatomy from medical images, improving their accuracy and training stability.
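The coarse-to-fine idea described in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation: `upsample` and `train_scale` below are hypothetical stand-ins for the learned network components, and the "training" is plain gradient descent on toy correspondence points.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample(points: np.ndarray, factor: int) -> np.ndarray:
    # Repeat each coarse point as the initial guess for the finer scale
    # (a stand-in for a learned upsampling layer).
    return np.repeat(points, factor, axis=0)

def train_scale(init: np.ndarray, target: np.ndarray,
                lr: float = 0.2, steps: int = 60) -> np.ndarray:
    # Toy "training": gradient descent on the error between the current
    # prediction and the target correspondence points at this scale.
    pred = init.copy()
    for _ in range(steps):
        pred -= lr * (pred - target)
    return pred

# Ground-truth shape sampled at three resolutions (8, 16, 32 points in 3D).
targets = [rng.normal(size=(n, 3)) for n in (8, 16, 32)]

pred = np.zeros_like(targets[0])       # coarsest scale starts from scratch
for i, target in enumerate(targets):
    if i > 0:                          # warm-start from the previous scale
        pred = upsample(pred, 2)
    pred = train_scale(pred, target)
```

Each finer scale starts from the upsampled coarse solution rather than from scratch, which is the stabilizing effect the training strategy relies on.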
Affiliation(s)
- Abu Zahid Bin Aziz, Scientific Computing and Imaging Institute and Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Jadie Adams, Scientific Computing and Imaging Institute and Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Shireen Elhabian, Scientific Computing and Imaging Institute and Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
3. Mazumder P, Singh P. Mitigate forgetting in few-shot class-incremental learning using different image views. Neural Netw 2023;165:999-1009. [PMID: 37467587] [DOI: 10.1016/j.neunet.2023.06.043]
Abstract
In the few-shot class-incremental learning (FSCIL) setting, new classes with few training examples become available incrementally, and deep learning models suffer catastrophic forgetting of previous classes when trained on new ones. Data augmentation techniques are generally used to increase the training data and improve model performance. In this work, we demonstrate that differently augmented views of the same image may not activate the same set of neurons in the model, so the information a model gains about a class when trained with augmentation is not necessarily stored in a single set of weights. Consequently, during incremental training, even if some weights storing previously seen class information for one view are overwritten, the information for other views may remain intact in other weights; the impact of catastrophic forgetting on the model's predictions therefore differs across the augmentations used during training. Based on this, we present an Augmentation-based Prediction Rectification (APR) approach to reduce the impact of catastrophic forgetting in the FSCIL setting. APR can also augment other FSCIL approaches and significantly improve their performance. We further propose a novel feature synthesis module (FSM) for synthesizing features of previously seen classes without requiring their training data; FSM outperforms other generative approaches in this setting. We experimentally show that our approach outperforms other methods on benchmark datasets.
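The core intuition of rectifying predictions by ensembling over augmented views can be sketched minimally. Everything below is a hypothetical stand-in, not the paper's APR module: the "model" is a toy linear classifier and `augment` is a fixed feature permutation rather than real image augmentation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, dim = 5, 16
W = rng.normal(size=(n_classes, dim))   # toy linear classifier weights

def augment(x: np.ndarray, seed: int) -> np.ndarray:
    # Stand-in augmentation: a deterministic random feature permutation
    # (a real pipeline would crop/flip/jitter the image instead).
    perm = np.random.default_rng(seed).permutation(x.size)
    return x[perm]

def predict_probs(x: np.ndarray) -> np.ndarray:
    logits = W @ x
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

x = rng.normal(size=dim)
views = [x] + [augment(x, seed) for seed in range(3)]
# Average class probabilities over views: if forgetting corrupts the
# prediction for some views more than others, the ensemble can partially
# rectify it.
probs = np.mean([predict_probs(v) for v in views], axis=0)
label = int(np.argmax(probs))
```

The averaging step is the rectification: each view contributes whatever old-class evidence its activation pathway still retains.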
4. Mazumder P, Karim MA, Joshi I, Singh P. Leveraging joint incremental learning objective with data ensemble for class incremental learning. Neural Netw 2023;161:202-212. [PMID: 36774860] [DOI: 10.1016/j.neunet.2023.01.017]
Abstract
A class-incremental learning problem is characterized by training data that becomes available phase by phase. Deep learning models suffer catastrophic forgetting of classes from older phases as they are trained on classes introduced in a new phase. In this work, we show, as a novel finding, that changing the orientation of an image has a considerable effect on prediction accuracy, which demonstrates that different orientations of the same image are forgotten at different rates. Based on this, we propose a data-ensemble approach that combines predictions across different orientations of an image, helping the model retain information about previously seen classes and thereby reducing forgetting in its predictions. However, the data-ensemble approach cannot be applied directly to a model trained with traditional techniques. We therefore also propose a novel training approach using a joint-incremental learning objective (JILO) that jointly trains the network with two incremental learning objectives: the class-incremental learning objective and our proposed data-incremental learning objective. We empirically demonstrate that JILO is vital to the data-ensemble approach. Applied to state-of-the-art class-incremental learning methods, our approach significantly improves their performance; on the CIFAR-100 dataset, it improves the state-of-the-art method (AANets) by absolute margins of 3.30%, 4.28%, 3.55%, and 4.03% for P = 50, 25, 10, and 5 phases, respectively, which establishes the efficacy of the proposed work.
5. Zhao J, Zhang X, Zhao B, Hu W, Diao T, Wang L, Zhong Y, Li Q. Genetic dissection of mutual interference between two consecutive learning tasks in Drosophila. eLife 2023;12:e83516. [PMID: 36897069] [PMCID: PMC10030115] [DOI: 10.7554/eLife.83516]
Abstract
Animals can continuously learn different tasks to adapt to changing environments and therefore have strategies to cope effectively with inter-task interference, including both proactive interference (Pro-I) and retroactive interference (Retro-I). Many biological mechanisms are known to contribute to learning, memory, and forgetting for a single task; however, mechanisms engaged only when different tasks are learned in sequence are relatively poorly understood. Here, we dissect the respective molecular mechanisms of Pro-I and Retro-I between two consecutive associative learning tasks in Drosophila. Pro-I is more sensitive to the inter-task interval (ITI) than Retro-I: both occur at short ITIs (<20 min), whereas only Retro-I remains significant at ITIs beyond 20 min. Acutely overexpressing Corkscrew (CSW), the Drosophila homolog of the evolutionarily conserved protein tyrosine phosphatase SHP2, in mushroom body (MB) neurons reduces Pro-I, whereas acute knockdown of CSW exacerbates it. This function of CSW is further found to rely on the γ subset of MB neurons and the downstream Raf/MAPK pathway. In contrast, manipulating CSW affects neither Retro-I nor learning of a single task. Interestingly, manipulation of Rac1, a molecule that regulates Retro-I, does not affect Pro-I. Thus, our findings suggest that learning different tasks consecutively triggers distinct molecular mechanisms to tune proactive and retroactive interference.
Affiliation(s)
- Jianjian Zhao, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Xuchen Zhang, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Bohan Zhao, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Wantong Hu, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Tongxin Diao, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Liyuan Wang, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Yi Zhong, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
- Qian Li, School of Life Sciences, IDG/McGovern Institute for Brain Research, MOE Key Laboratory of Protein Sciences, Tsinghua University, Beijing, China; Tsinghua-Peking Center for Life Sciences, Beijing, China
6. Chen J, Xiang Y. A robust and anti-forgettiable model for class-incremental learning. Appl Intell 2022. [DOI: 10.1007/s10489-022-04239-z]
7. Liu H, Zhou Y, Liu B, Zhao J, Yao R, Shao Z. Incremental learning with neural networks for computer vision: a survey. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10294-2]
8. Rohlfs C. A descriptive analysis of olfactory sensation and memory in Drosophila and its relation to artificial neural networks. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.10.068]
9. Ost KJ, Anderson DW, Cadotte DW. Delivering Precision Medicine to Patients with Spinal Cord Disorders: Insights into Applications of Bioinformatics and Machine Learning from Studies of Degenerative Cervical Myelopathy. Artif Intell 2021. [DOI: 10.5772/intechopen.98713]
Abstract
With the widespread adoption of electronic health records and new technologies capable of producing data at an unprecedented scale, a shift must occur in how we practice medicine in order to utilize these resources. We are entering an era in which the capacity of even the most skilled human doctor is simply insufficient. As such, realizing "personalized" or "precision" medicine requires new methods that can leverage the massive amounts of data now available. Machine learning techniques provide one important toolkit in this venture, as they are fundamentally designed to deal with (and, in fact, benefit from) massive datasets. Clinical applications of such machine learning systems are still in their infancy, however, and the field of medicine presents a unique set of design considerations. In this chapter, we walk through how we selected and adjusted the "Progressive Learning framework" to account for these considerations in the case of Degenerative Cervical Myelopathy. We additionally compare a model designed with these techniques to similar static models run in "perfect world" scenarios (free of the clinical issues addressed), and we use simulated clinical data-acquisition scenarios to demonstrate the advantages of our machine learning approach in providing personalized diagnoses.