151
Wang Q, Zhou Y. FedSPL: federated self-paced learning for privacy-preserving disease diagnosis. Brief Bioinform 2021; 23:6454650. PMID: 34874995; DOI: 10.1093/bib/bbab498.
Abstract
The growing availability of data in medical fields could help improve the performance of machine learning methods. With healthcare data, however, using multi-institutional datasets is challenging due to privacy and security concerns, so privacy-preserving machine learning methods are required. We therefore use a federated learning model to train a shared global model, in which a central server holds no private data and each client keeps its sensitive data within its own institution. The scattered training data are thereby connected to improve model performance while preserving data privacy. However, in the federated training procedure, data errors or noise can reduce learning performance. We therefore introduce self-paced learning, which can effectively select high-confidence samples and drop highly noisy samples to improve the performance of the training model and reduce the risk of data privacy leakage. We propose federated self-paced learning (FedSPL), which combines the advantages of federated learning and self-paced learning. The proposed FedSPL model was evaluated on gene expression data distributed across different institutions, where privacy concerns must be considered. The results demonstrate that the proposed FedSPL model is secure, i.e. it does not expose original records to other parties, and that the computational overhead during training is acceptable. Compared with learning methods based on the local data of each party, the proposed model improves the predicted F1-score by approximately 4.3%. We believe the proposed method has the potential to benefit clinicians in gene selection and disease prognosis.
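For context, the self-paced step the abstract describes amounts to keeping only samples whose current loss falls below a confidence threshold, then relaxing that threshold over training rounds. A minimal illustrative sketch of that selection rule (the threshold schedule, client names, and loss values are invented, not the paper's implementation):

```python
def spl_select(losses, lam):
    """Hard self-paced selection: weight v_i = 1 if loss_i < lam, else 0,
    so only low-loss (high-confidence) samples enter this training round."""
    return [1 if loss < lam else 0 for loss in losses]

def spl_schedule(lam, growth=1.3):
    """Raise the threshold so harder samples gradually enter training."""
    return lam * growth

# Toy federated round: each client filters its local samples before
# computing the update it sends to the server (clients are hypothetical).
client_losses = {"clinic_a": [0.2, 1.5, 0.4], "clinic_b": [0.9, 0.1, 2.3]}
lam = 1.0
selected = {c: spl_select(ls, lam) for c, ls in client_losses.items()}
```

In a full FedSPL-style system the selection would run inside each client's local update, with the server only aggregating model weights, never seeing the raw records.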
Affiliation(s)
- Qingyong Wang
- Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, China
- Yun Zhou
- Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, China
152
Classification of Initial Stages of Alzheimer’s Disease through PET Neuroimaging Modality and Deep Learning: Quantifying the Impact of Image Filtering Approaches. Mathematics 2021. DOI: 10.3390/math9233101.
Abstract
Alzheimer’s disease (AD) is a leading health concern affecting the elderly population worldwide. It is characterized by amyloid plaques, neurofibrillary tangles, and neuronal loss. Neuroimaging modalities such as positron emission tomography (PET) and magnetic resonance imaging are routinely used in clinical settings to monitor the alterations in the brain during the course of progression of AD. Deep learning techniques such as convolutional neural networks (CNNs) have found numerous applications in healthcare and other technologies. Together with neuroimaging modalities, they can be deployed in clinical settings to learn effective representations of data for tasks such as classification, segmentation, and detection. Image filtering methods make images viable for image-processing operations and have found numerous applications in image-processing tasks. In this work, we deployed 3D-CNNs to learn effective representations of PET modality data and to quantify the impact of different image filtering approaches. We used box filtering, median filtering, Gaussian filtering, and modified Gaussian filtering to preprocess the images and then used them for classification with a 3D-CNN architecture. Our findings suggest that these approaches are nearly equivalent, with no distinct advantage over one another. For the multiclass classification task between normal control (NC), mild cognitive impairment (MCI), and AD classes, the 3D-CNN trained on Gaussian-filtered data performed best. For binary classification between NC and MCI, the 3D-CNN trained on median-filtered data performed best, while for binary classification between AD and MCI, the 3D-CNN trained on modified Gaussian-filtered data performed best. Finally, for binary classification between AD and NC, the 3D-CNN trained on box-filtered data performed best.
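As background on the filtering step: a Gaussian filter on a 3D volume is separable, so it reduces to a sampled 1D Gaussian kernel applied along each axis in turn. A pure-Python sketch of that 1D building block (sigma, radius, and the test signal are illustrative; real pipelines would use an image-processing library):

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """Sampled, normalized 1D Gaussian kernel of length 2*radius + 1."""
    vals = [math.exp(-(x * x) / (2.0 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def smooth_1d(signal, kernel):
    """Correlate a 1D signal with the kernel, replicating edge values.
    For a 3D volume, apply this along each of the three axes."""
    r = len(kernel) // 2
    padded = [signal[0]] * r + list(signal) + [signal[-1]] * r
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(signal))]

kernel = gaussian_kernel_1d(sigma=1.0, radius=2)
smoothed = smooth_1d([0.0, 0.0, 10.0, 0.0, 0.0], kernel)  # spreads the impulse
```

Because the kernel is normalized, smoothing preserves total intensity while spreading sharp features, which is the noise-suppression behavior the study compares against box and median filtering.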
153
Dyrba M, Hanzig M, Altenstein S, Bader S, Ballarini T, Brosseron F, Buerger K, Cantré D, Dechent P, Dobisch L, Düzel E, Ewers M, Fliessbach K, Glanz W, Haynes JD, Heneka MT, Janowitz D, Keles DB, Kilimann I, Laske C, Maier F, Metzger CD, Munk MH, Perneczky R, Peters O, Preis L, Priller J, Rauchmann B, Roy N, Scheffler K, Schneider A, Schott BH, Spottke A, Spruth EJ, Weber MA, Ertl-Wagner B, Wagner M, Wiltfang J, Jessen F, Teipel SJ. Improving 3D convolutional neural network comprehensibility via interactive visualization of relevance maps: evaluation in Alzheimer's disease. Alzheimers Res Ther 2021; 13:191. PMID: 34814936; PMCID: PMC8611898; DOI: 10.1186/s13195-021-00924-2.
Abstract
Background: Although convolutional neural networks (CNNs) achieve high diagnostic accuracy for detecting Alzheimer’s disease (AD) dementia based on magnetic resonance imaging (MRI) scans, they are not yet applied in clinical routine. One important reason for this is a lack of model comprehensibility. Recently developed visualization methods for deriving CNN relevance maps may help to fill this gap, as they allow the visualization of key input image features that drive the decision of the model. We investigated whether models with higher accuracy also rely more on discriminative brain regions predefined by prior knowledge.
Methods: We trained a CNN for the detection of AD in N = 663 T1-weighted MRI scans of patients with dementia and amnestic mild cognitive impairment (MCI) and verified the accuracy of the models via cross-validation and in three independent samples including in total N = 1655 cases. We evaluated the association of relevance scores and hippocampus volume to validate the clinical utility of this approach. To improve model comprehensibility, we implemented an interactive visualization of 3D CNN relevance maps, allowing intuitive model inspection.
Results: Across the three independent datasets, group separation showed high accuracy for AD dementia versus controls (AUC ≥ 0.91) and moderate accuracy for amnestic MCI versus controls (AUC ≈ 0.74). Relevance maps indicated that hippocampal atrophy was considered the most informative factor for AD detection, with additional contributions from atrophy in other cortical and subcortical regions. Relevance scores within the hippocampus were highly correlated with hippocampal volumes (Pearson’s r ≈ −0.86, p < 0.001).
Conclusion: The relevance maps highlighted atrophy in regions that we had hypothesized a priori. This strengthens the comprehensibility of the CNN models, which were trained in a purely data-driven manner based on the scans and diagnosis labels. The high hippocampus relevance scores, as well as the high performance achieved in independent samples, support the validity of the CNN models in the detection of AD-related MRI abnormalities. The presented data-driven and hypothesis-free CNN modeling approach might provide a useful tool to automatically derive discriminative features for complex diagnostic tasks where clear clinical criteria are still missing, for instance the differential diagnosis between various types of dementia.
Supplementary Information: The online version contains supplementary material available at 10.1186/s13195-021-00924-2.
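The reported association between per-region relevance scores and hippocampal volume is a plain Pearson correlation; a minimal sketch with invented numbers (these are not the study's data, only a shape-matching toy in which smaller hippocampi receive higher relevance, mirroring the negative correlation):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical subjects: volume shrinks as relevance grows, so r is strongly
# negative, analogous in sign to the paper's reported r ~ -0.86.
volumes = [3.9, 3.5, 3.1, 2.8, 2.2]          # hippocampal volume (toy units)
relevance = [0.10, 0.22, 0.35, 0.48, 0.70]   # summed relevance in hippocampus
```

A strongly negative r of this kind is what links the relevance maps back to the established atrophy marker, supporting the clinical plausibility check the authors describe.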
Affiliation(s)
- Martin Dyrba
- German Center for Neurodegenerative Diseases (DZNE), Rostock, Germany
- Moritz Hanzig
- German Center for Neurodegenerative Diseases (DZNE), Rostock, Germany; Institute of Visual and Analytic Computing, University of Rostock, Rostock, Germany
- Slawek Altenstein
- German Center for Neurodegenerative Diseases (DZNE), Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Charité Mitte, Berlin, Germany
- Sebastian Bader
- Institute of Visual and Analytic Computing, University of Rostock, Rostock, Germany
- Frederic Brosseron
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department for Neurodegenerative Diseases and Geriatric Psychiatry, University Hospital Bonn, Bonn, Germany
- Katharina Buerger
- German Center for Neurodegenerative Diseases (DZNE), Munich, Germany; Institute for Stroke and Dementia Research (ISD), University Hospital, Ludwig Maximilian University, Munich, Germany
- Daniel Cantré
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Peter Dechent
- MR-Research in Neurosciences, Department of Cognitive Neurology, Georg-August-University, Goettingen, Germany
- Laura Dobisch
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Emrah Düzel
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Magdeburg, Germany
- Michael Ewers
- German Center for Neurodegenerative Diseases (DZNE), Munich, Germany; Institute for Stroke and Dementia Research (ISD), University Hospital, Ludwig Maximilian University, Munich, Germany
- Klaus Fliessbach
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department for Neurodegenerative Diseases and Geriatric Psychiatry, University Hospital Bonn, Bonn, Germany
- Wenzel Glanz
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Michael T Heneka
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department for Neurodegenerative Diseases and Geriatric Psychiatry, University Hospital Bonn, Bonn, Germany
- Daniel Janowitz
- Institute for Stroke and Dementia Research (ISD), University Hospital, Ludwig Maximilian University, Munich, Germany
- Deniz B Keles
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Benjamin Franklin, Berlin, Germany
- Ingo Kilimann
- German Center for Neurodegenerative Diseases (DZNE), Rostock, Germany; Department of Psychosomatic Medicine, Rostock University Medical Center, Rostock, Germany
- Christoph Laske
- German Center for Neurodegenerative Diseases (DZNE), Tuebingen, Germany; Section for Dementia Research, Hertie Institute for Clinical Brain Research, Tuebingen, Germany; Department of Psychiatry and Psychotherapy, University of Tuebingen, Tuebingen, Germany
- Franziska Maier
- Department of Psychiatry, Medical Faculty, University of Cologne, Cologne, Germany
- Coraline D Metzger
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Magdeburg, Germany; Department of Psychiatry and Psychotherapy, Otto-von-Guericke University, Magdeburg, Germany
- Matthias H Munk
- German Center for Neurodegenerative Diseases (DZNE), Tuebingen, Germany; Department of Psychiatry and Psychotherapy, University of Tuebingen, Tuebingen, Germany; Systems Neurophysiology, Department of Biology, Darmstadt University of Technology, Darmstadt, Germany
- Robert Perneczky
- German Center for Neurodegenerative Diseases (DZNE), Munich, Germany; Department of Psychiatry and Psychotherapy, University Hospital, Ludwig Maximilian University, Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), Ludwig Maximilian University, Munich, Germany; Ageing Epidemiology Research Unit (AGE), School of Public Health, Imperial College London, London, UK
- Oliver Peters
- German Center for Neurodegenerative Diseases (DZNE), Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Benjamin Franklin, Berlin, Germany
- Lukas Preis
- German Center for Neurodegenerative Diseases (DZNE), Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Benjamin Franklin, Berlin, Germany
- Josef Priller
- German Center for Neurodegenerative Diseases (DZNE), Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Charité Mitte, Berlin, Germany; Department of Psychiatry and Psychotherapy, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Boris Rauchmann
- Department of Psychiatry and Psychotherapy, University Hospital, Ludwig Maximilian University, Munich, Germany
- Nina Roy
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Klaus Scheffler
- Department for Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany
- Anja Schneider
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department for Neurodegenerative Diseases and Geriatric Psychiatry, University Hospital Bonn, Bonn, Germany
- Björn H Schott
- German Center for Neurodegenerative Diseases (DZNE), Goettingen, Germany; Department of Psychiatry and Psychotherapy, University Medical Center Goettingen, Goettingen, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
- Annika Spottke
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department of Neurology, University Hospital Bonn, Bonn, Germany
- Eike J Spruth
- German Center for Neurodegenerative Diseases (DZNE), Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Campus Charité Mitte, Berlin, Germany
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Birgit Ertl-Wagner
- Institute for Clinical Radiology, Ludwig Maximilian University, Munich, Germany; Department of Medical Imaging, University of Toronto, Toronto, Canada
- Michael Wagner
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department for Neurodegenerative Diseases and Geriatric Psychiatry, University Hospital Bonn, Bonn, Germany
- Jens Wiltfang
- German Center for Neurodegenerative Diseases (DZNE), Goettingen, Germany; Department of Psychiatry and Psychotherapy, University Medical Center Goettingen, Goettingen, Germany; Neurosciences and Signaling Group, Institute of Biomedicine (iBiMED), Department of Medical Sciences, University of Aveiro, Aveiro, Portugal
- Frank Jessen
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Department of Psychiatry, Medical Faculty, University of Cologne, Cologne, Germany; Excellence Cluster on Cellular Stress Responses in Aging-Associated Diseases (CECAD), University of Cologne, Cologne, Germany
- Stefan J Teipel
- German Center for Neurodegenerative Diseases (DZNE), Rostock, Germany; Department of Psychosomatic Medicine, Rostock University Medical Center, Rostock, Germany
154
Deep learning based pipelines for Alzheimer's disease diagnosis: A comparative study and a novel deep-ensemble method. Comput Biol Med 2021; 141:105032. PMID: 34838263; DOI: 10.1016/j.compbiomed.2021.105032.
Abstract
BACKGROUND: Alzheimer's disease is a chronic neurodegenerative disease that destroys brain cells, causing irreversible degeneration of cognitive functions and dementia. Its causes are not yet fully understood, and there is no curative treatment. However, neuroimaging tools currently aid clinical diagnosis, and deep learning methods have recently become a key methodology applied to these tools. The reason is that they require little or no image preprocessing and can automatically infer an optimal representation of the data from raw images without prior feature selection, resulting in a more objective and less biased process. However, training a reliable model is challenging due to the significant differences between brain image types.
METHODS: We aim to contribute to the research and study of Alzheimer's disease through computer-aided diagnosis (CAD) by comparing different deep learning models. This work has three main objectives: (i) to present a fully automated deep-ensemble approach for dementia-level classification from brain images, (ii) to compare different deep learning architectures to obtain the most suitable one for the task, and (iii) to evaluate the robustness of the proposed strategy in a deep learning framework to detect Alzheimer's disease and recognise different levels of dementia. The proposed approach is specifically designed as potential support for clinical care based on patients' brain images.
RESULTS: Our strategy was developed and tested on three MRI and one fMRI public datasets with heterogeneous characteristics. In a comprehensive analysis of binary classification (Alzheimer's disease status or not) and multiclass classification (recognising different levels of dementia), the proposed approach exceeds the state of the art in both tasks, reaching an accuracy of 98.51% in the binary case and 98.67% in the multiclass case, averaged over the four datasets.
CONCLUSION: We strongly believe that integrating the proposed deep-ensemble approach will result in robust and reliable CAD systems, considering the numerous cross-dataset experiments performed. Having been tested on MRIs and fMRIs, our strategy can be easily extended to other imaging techniques. In conclusion, we found that our deep-ensemble strategy can be efficiently applied to this task, with a considerable potential benefit for patient management.
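Deep ensembles of this kind commonly combine member predictions by averaging per-class probabilities (soft voting) before taking the argmax. A schematic sketch of that aggregation step, not the paper's exact method (the member outputs and class order NC/MCI/AD are invented):

```python
def soft_vote(prob_lists):
    """Average per-class probabilities across ensemble members,
    then return (predicted class index, averaged distribution)."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / len(prob_lists)
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Three hypothetical member networks scoring classes (NC, MCI, AD):
members = [[0.2, 0.5, 0.3],
           [0.1, 0.3, 0.6],
           [0.2, 0.2, 0.6]]
label, avg_probs = soft_vote(members)  # most averaged mass lands on AD
```

Soft voting tends to be more stable than hard majority voting because a member's confidence, not just its top choice, contributes to the decision.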
155
Zhang P, Lin S, Qiao J, Tu Y. Diagnosis of Alzheimer's Disease with Ensemble Learning Classifier and 3D Convolutional Neural Network. Sensors (Basel) 2021; 21:7634. PMID: 34833710; PMCID: PMC8623279; DOI: 10.3390/s21227634.
Abstract
Alzheimer's disease (AD), the most common type of dementia, is a progressive disease beginning with mild memory loss and possibly leading to loss of the ability to carry on a conversation and respond to the environment. It can seriously affect a person's ability to carry out daily activities, so early diagnosis of AD is conducive to better treatment and to avoiding further deterioration. Magnetic resonance imaging (MRI) has become the main tool for studying brain tissue: it clearly reflects the internal structure of the brain, plays an important role in the diagnosis of Alzheimer's disease, and is widely used for disease diagnosis. In this paper, a method combining a 3D convolutional neural network and ensemble learning is proposed, based on MRI data, to improve diagnostic accuracy, together with a data denoising module that reduces boundary noise. The experimental results on the ADNI dataset demonstrate that the proposed model improves the training speed of the neural network and achieves 95.2% accuracy on the AD vs. NC (normal control) task and 77.8% accuracy on the sMCI (stable mild cognitive impairment) vs. pMCI (progressive mild cognitive impairment) task in the diagnosis of Alzheimer's disease.
Affiliation(s)
- Shukuan Lin
- Department of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; (P.Z.); (J.Q.); (Y.T.)
156
Qiao H, Chen L, Zhu F. A Fusion of Multi-view 2D and 3D Convolution Neural Network based MRI for Alzheimer's Disease Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3317-3321. PMID: 34891950; DOI: 10.1109/EMBC46164.2021.9629923.
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease leading to irreversible and progressive brain damage, and close monitoring is essential for slowing its progression. Magnetic Resonance Imaging (MRI) has been widely used for AD diagnosis and disease monitoring. Previous studies usually focused on extracting features from the whole image or from specific slices separately, ignoring the characteristics of each slice viewed from multiple perspectives and the complementarity between features at different scales. In this study, we propose a novel classification method based on the fusion of multi-view 2D and 3D convolutions for MRI-based AD diagnosis. Specifically, we first use multiple sub-networks to extract the local slice-level features of each slice in different dimensions. A 3D convolution network is then used to extract the global subject-level information from the MRI. Finally, local and global information are fused to acquire more discriminative features. Experiments conducted on the ADNI-1 and ADNI-2 datasets demonstrated the superiority of the proposed model over other state-of-the-art methods in discriminating between AD patients and Normal Controls (NC). Our model achieves 90.2% and 85.2% accuracy on ADNI-2 and ADNI-1, respectively, and can thus be effective in AD diagnosis. The source code of our model is freely available at https://github.com/fengduqianhe/ADMultiView.
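The fusion step described above is a late fusion: per-slice 2D feature vectors are pooled within each view and the pooled view features are concatenated with the global 3D features before classification. A toy sketch of that idea (feature values, dimensions, and the mean-pooling choice are illustrative assumptions, not the paper's architecture):

```python
def mean_pool(slice_features):
    """Aggregate per-slice feature vectors of one view into one vector."""
    n, dim = len(slice_features), len(slice_features[0])
    return [sum(f[d] for f in slice_features) / n for d in range(dim)]

def fuse(view_features, global_features):
    """Concatenate pooled multi-view local features with global 3D features."""
    fused = []
    for view in view_features:
        fused.extend(view)
    fused.extend(global_features)
    return fused

# Hypothetical 2-D slice features for two views, plus one global 3D feature.
axial = mean_pool([[1.0, 2.0], [3.0, 4.0]])
coronal = mean_pool([[0.0, 1.0], [2.0, 3.0]])
fused = fuse([axial, coronal], [9.0])  # classifier input
```

The concatenated vector lets the classifier weigh slice-level detail and whole-volume context jointly, which is the complementarity the abstract emphasizes.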
157
Wang Z, Peng D, Shang Y, Gao J. Autistic Spectrum Disorder Detection and Structural Biomarker Identification Using Self-Attention Model and Individual-Level Morphological Covariance Brain Networks. Front Neurosci 2021; 15:756868. PMID: 34712116; PMCID: PMC8547518; DOI: 10.3389/fnins.2021.756868.
Abstract
Autism spectrum disorder (ASD) is a range of neurodevelopmental disorders that places enormous burdens on patients' families and society. However, due to the lack of representation of variance for the disease and the absence of diagnostic biomarkers, early detection and intervention of ASD are remarkably challenging. In this study, we proposed a self-attention deep learning framework based on the transformer model, applied to structural MR images from the ABIDE consortium, to classify ASD patients from normal controls and simultaneously identify structural biomarkers. In our work, individual structural covariance networks, rather than the original structural MR data, are used to perform ASD/NC classification via the self-attention framework, to take full advantage of the coordination patterns of morphological features between brain regions. The self-attention framework based on the transformer model can extract both local and global information from the input data, making it more suitable for brain network data than CNN-based models. Meanwhile, potential diagnostic structural biomarkers are identified from the map of self-attention coefficients. The experimental results showed that our proposed method outperforms most current methods for classifying ASD patients with the ABIDE data, achieving a classification accuracy of 72.5% across different sites. Furthermore, the potential diagnostic biomarkers were found mainly in the prefrontal cortex, temporal cortex, and cerebellum, and may serve as early biomarkers for ASD diagnosis. Our study demonstrates that the self-attention deep learning framework is an effective way to diagnose ASD and to establish potential biomarkers for ASD.
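The core transformer operation behind this framework is scaled dot-product self-attention over the region-wise network features, with the attention-coefficient map doubling as the biomarker readout. A small pure-Python sketch (toy matrix, single head, and Q = K = V = X without learned projections, all simplifying assumptions relative to a real transformer):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Single-head scaled dot-product attention with Q = K = V = X.
    Returns (attended features, attention-coefficient map)."""
    d = len(X[0])
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d) for kj in X]
              for qi in X]
    weights = [softmax(row) for row in scores]  # one row per query region
    out = [[sum(w * v[j] for w, v in zip(row, X)) for j in range(d)]
           for row in weights]
    return out, weights

# Two hypothetical brain-region feature rows:
attended, attn_map = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

In the paper's setting, high-weight rows/columns of a learned map like `attn_map` are what get interpreted as candidate biomarker regions.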
Affiliation(s)
- Zhengning Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Dawei Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yongbin Shang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingjing Gao
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
158
Guan H, Wang C, Cheng J, Jing J, Liu T. A parallel attention-augmented bilinear network for early magnetic resonance imaging-based diagnosis of Alzheimer's disease. Hum Brain Mapp 2021; 43:760-772. PMID: 34676625; PMCID: PMC8720194; DOI: 10.1002/hbm.25685.
Abstract
Structural magnetic resonance imaging (sMRI) can capture the spatial patterns of brain atrophy in Alzheimer's disease (AD) and incipient dementia. Recently, many sMRI-based deep learning methods have been developed for AD diagnosis. Some of these methods utilize neural networks to extract high-level representations on the basis of handcrafted features, while others attempt to learn useful features from brain regions proposed by a separate module. However, these methods require considerable manual engineering, and their stepwise training procedures can introduce cascading errors. Here, we propose the parallel attention-augmented bilinear network, a novel deep learning framework for AD diagnosis. Based on a 3D convolutional neural network, the framework directly learns both global and local features from sMRI scans without any prior knowledge. The framework is lightweight and suitable for end-to-end training. We evaluate the framework on two public datasets (ADNI-1 and ADNI-2) containing 1,340 subjects. On both the AD classification and mild cognitive impairment conversion prediction tasks, our framework achieves competitive results. Furthermore, we generate heat maps that highlight discriminative areas for visual interpretation. Experiments demonstrate the effectiveness of the proposed framework when medical priors are unavailable or computing resources are limited. The proposed framework is general for 3D medical image analysis, with both efficiency and interpretability.
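For readers unfamiliar with the "bilinear" part of the name: bilinear pooling in general combines two feature streams by taking the outer product of their vectors and flattening it, so every pairwise interaction between the streams becomes a feature. A toy sketch of that generic operation (the two-stream interpretation and all values here are my assumption for illustration, not the paper's exact layer):

```python
def bilinear_pool(a, b):
    """Flattened outer product of two feature vectors:
    one output entry per (i, j) pair of input features."""
    return [x * y for x in a for y in b]

global_feat = [1.0, 2.0]   # hypothetical global-stream features
local_feat = [3.0, 0.5]    # hypothetical local (attention-augmented) features
joint = bilinear_pool(global_feat, local_feat)  # length 2 * 2 = 4
```

The quadratic feature count is why practical bilinear layers are usually followed by dimensionality reduction or low-rank factorization.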
Affiliation(s)
- Hao Guan
- School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, New South Wales, Australia
- Chaoyue Wang
- School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, New South Wales, Australia
- Jian Cheng
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
- Jing Jing
- China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Tao Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
159
Priyanka A, Ganesan K. Hippocampus segmentation and classification for dementia analysis using pre-trained neural network models. Biomed Eng-Biomed Te 2021; 66:581-592. PMID: 34626530; DOI: 10.1515/bmt-2021-0070.
Abstract
The diagnostic and clinical overlap of early mild cognitive impairment (EMCI), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI), and Alzheimer disease (AD) is a vital clinical issue in dementia disorders. This study is designed to examine morphological variation in the whole brain (WB), grey matter (GM), and hippocampus (HC), and to identify prominent biomarkers in MR brain images of demented subjects, in order to understand severity progression. Curve evolution based on shape constraints is carried out to segment complex brain structures such as the HC and GM. Pre-trained models are used to observe the severity variation in these regions. This work is evaluated on the ADNI database. The outcome shows that the curve evolution method can segment the HC and GM regions with better correlation. Pre-trained models are able to show significant severity differences among the WB, GM, and HC regions for the considered classes. Further, prominent variation is observed between AD vs. EMCI, AD vs. MCI, and AD vs. LMCI in the whole brain, GM, and HC. It is concluded that the AlexNet model for the HC region results in better classification for AD vs. EMCI, AD vs. MCI, and AD vs. LMCI, with accuracies of 93%, 78.3%, and 91%, respectively.
Affiliation(s)
- Ahana Priyanka
- Department of Electronics Engineering, Madras Institute of Technology, Chennai, India
- Kavitha Ganesan
- Department of Electronics Engineering, Madras Institute of Technology, Chennai, India
160
Lao H, Zhang X. Regression and Classification of Alzheimer's Disease Diagnosis using NMF-TDNet Features from 3D Brain MR Image. IEEE J Biomed Health Inform 2021; 26:1103-1115. PMID: 34543210; DOI: 10.1109/JBHI.2021.3113668.
Abstract
With the development of deep learning and medical imaging technology, many researchers use convolutional neural networks (CNNs) to obtain deep-level features of medical images in order to better classify Alzheimer's disease (AD) and predict clinical scores. The principal component analysis network (PCANet) is a lightweight deep-learning network that mainly uses principal component analysis (PCA) to generate multilevel filter banks for the centralized learning of samples, and then performs binarization and generates blockwise histograms to obtain image features. However, the dimensions of the extracted PCANet features reach tens of thousands or even hundreds of thousands, and the formation of the multilevel filter banks depends on the sample data, reducing the flexibility of PCANet. In this paper, based on the idea of PCANet, we propose a data-independent network called the nonnegative matrix factorization tensor decomposition network (NMF-TDNet), which improves computational efficiency and solves the data-dependence problem of PCANet. In this network, we use nonnegative matrix factorization (NMF) instead of PCA to create multilevel filter banks for sample learning, then use the learning results to build a higher-order tensor and perform tensor decomposition (TD) to achieve dimensionality reduction, producing the final image features. Finally, our method uses these features as the input of a support vector machine (SVM) for AD classification diagnosis and clinical score prediction. The performance of our method is extensively evaluated on the ADNI-1, ADNI-2, and OASIS datasets. The experimental results show that NMF-TDNet achieves data dimensionality reduction (the extracted features number only a few hundred dimensions, far fewer than the hundreds of thousands produced by PCANet) and that NMF-TDNet features as input achieve superior performance compared with PCANet features as input.
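NMF, the building block this network substitutes for PCA, factorizes a nonnegative matrix X into nonnegative factors W and H with X ≈ W·H. The classic Lee-Seung multiplicative-update rule is enough to illustrate the idea (toy matrix sizes and iteration count; this is a generic NMF sketch, not the paper's filter-bank pipeline):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(X, rank, iters=200, seed=0, eps=1e-9):
    """Factor nonnegative X (m x n) into W (m x rank) and H (rank x n)
    using multiplicative updates for the Frobenius-norm objective."""
    rng = random.Random(seed)
    m, n = len(X), len(X[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        # H <- H * (W^T X) / (W^T W H)
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, X), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        # W <- W * (X H^T) / (W H H^T)
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(X, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H
```

Because both factors stay nonnegative, the columns of W behave like additive parts-based filters, which is the property that motivates using NMF for filter-bank learning.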
Collapse
|
161
|
Zhu W, Sun L, Huang J, Han L, Zhang D. Dual Attention Multi-Instance Deep Learning for Alzheimer's Disease Diagnosis With Structural MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2354-2366. [PMID: 33939609 DOI: 10.1109/tmi.2021.3077079] [Citation(s) in RCA: 76] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Structural magnetic resonance imaging (sMRI) is widely used for diagnosing neurological brain diseases because it reflects structural variations of the brain. However, because brain atrophy is localized, only a few regions in sMRI scans show obvious structural changes, and these regions are highly correlated with pathological features. Hence, the key challenge of sMRI-based brain disease diagnosis is to enhance the identification of discriminative features. To address this issue, we propose a dual attention multi-instance deep learning network (DA-MIDL) for the early diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Specifically, DA-MIDL consists of three primary components: 1) Patch-Nets with spatial attention blocks for extracting discriminative features within each sMRI patch while enhancing the features of abnormally changed micro-structures in the cerebrum, 2) an attention multi-instance learning (MIL) pooling operation for balancing the relative contribution of each patch and yielding a globally weighted representation of the whole brain structure, and 3) an attention-aware global classifier for further learning the integral features and making AD-related classification decisions. Our proposed DA-MIDL model is evaluated on the baseline sMRI scans of 1689 subjects from two independent datasets (i.e., ADNI and AIBL). The experimental results show that our DA-MIDL model can identify discriminative pathological locations and achieve better classification performance in terms of accuracy and generalizability, compared with several state-of-the-art methods.
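The attention MIL pooling in component 2) — scoring patch-level features and aggregating them into one weighted whole-brain representation — can be illustrated with a minimal NumPy sketch. The scoring function and parameter shapes are assumptions for illustration, not the DA-MIDL architecture itself:

```python
import numpy as np

def attention_mil_pool(patch_feats, w, v):
    """Attention-based MIL pooling over instance (patch) features.

    patch_feats: (n_patches, d) per-patch embeddings.
    w: (d, h) projection and v: (h,) scoring vector -- toy parameters.
    Returns the attention-weighted bag vector and the patch weights.
    """
    scores = np.tanh(patch_feats @ w) @ v      # one scalar score per patch
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over patches
    bag = weights @ patch_feats                # (d,) whole-brain representation
    return bag, weights

rng = np.random.default_rng(1)
feats = rng.standard_normal((60, 32))   # 60 sMRI patches, 32-d features each
w = rng.standard_normal((32, 16))
v = rng.standard_normal(16)
bag, att = attention_mil_pool(feats, w, v)
```

Because the weights form a distribution over patches, the pooled vector lets the classifier lean on the most discriminative patches while the weights themselves indicate candidate pathological locations.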
Collapse
|
162
|
Qiao H, Chen L, Ye Z, Zhu F. Early Alzheimer's disease diagnosis with the contrastive loss using paired structural MRIs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106282. [PMID: 34343744 DOI: 10.1016/j.cmpb.2021.106282] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 07/08/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's Disease (AD) is a chronic, fatal neurodegenerative disease with progressive impairment of memory. Brain structural magnetic resonance imaging (sMRI) has been widely applied as an important source of biomarkers for AD. Various machine learning approaches, especially deep learning-based models, have been proposed for early AD diagnosis and for monitoring disease progression on sMRI data. However, the requirement for a large number of training images still hinders the wide application of such models to AD diagnosis. In addition, owing to the similarities in whole-brain structure across humans, effectively extracting discriminative features from limited sMRI data requires finding subtle brain changes. METHODS In this work, we propose two types of contrastive losses with paired sMRIs to improve diagnostic performance using group-category (G-CAT) and varying subject mini-mental state examination (S-MMSE) information, respectively. Specifically, the G-CAT contrastive loss layer is used to learn closer feature representations from sMRIs with the same categories, while ranking information from S-MMSE helps the model explore subtle changes between individuals. RESULTS The model was trained on ADNI-1. Comparison with baseline methods was performed on MIRIAD and ADNI-2. For the classification task on MIRIAD, S-MMSE achieves 93.5% accuracy, 96.6% sensitivity, and 94.9% specificity. G-CAT and S-MMSE both reach remarkable performance in terms of classification sensitivity and specificity, respectively. Compared with state-of-the-art methods, the proposed method achieves comparable results. CONCLUSION The proposed model can extract discriminative features under whole-brain similarity. Extensive experiments also support the accuracy of this model; i.e., it provides a better ability to identify uncertain samples, especially for the classification of subjects with MMSE scores of 22-27.
Source code is freely available at https://github.com/fengduqianhe/ADComparative.
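The G-CAT idea — pulling together embeddings of same-category scan pairs and pushing apart different-category pairs — follows the classic pairwise contrastive loss, sketched below. This is a generic formulation for illustration; the paper's exact loss and its S-MMSE ranking variant may differ:

```python
import numpy as np

def pairwise_contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pairwise contrastive loss on embeddings of paired scans.

    z1, z2: (n_pairs, d) embeddings of the two scans in each pair.
    same_label: (n_pairs,) 1.0 if the pair shares a diagnostic
    category, else 0.0. Same-category pairs are pulled together;
    different-category pairs are pushed apart up to `margin`.
    """
    d = np.linalg.norm(z1 - z2, axis=1)
    pos = same_label * d ** 2
    neg = (1.0 - same_label) * np.maximum(margin - d, 0.0) ** 2
    return 0.5 * (pos + neg).mean()

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))
# Nearly identical embeddings: cheap if same category, costly if different
loss_same = pairwise_contrastive_loss(a, a + 0.01, np.ones(8))
loss_diff = pairwise_contrastive_loss(a, a + 0.01, np.zeros(8))
```

The asymmetry (small distance is rewarded for same-category pairs and penalized for different-category pairs) is what forces the encoder to find subtle between-subject differences despite whole-brain similarity.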
Collapse
Affiliation(s)
- Hezhe Qiao
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; University of Chinese Academy of Sciences, BeiJing 100049, China.
| | - Lin Chen
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| | - Zi Ye
- Johns Hopkins University, Baltimore, MD 21218, United States of America.
| | - Fan Zhu
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| |
Collapse
|
163
|
Song X, Mao M, Qian X. Auto-Metric Graph Neural Network Based on a Meta-Learning Strategy for the Diagnosis of Alzheimer's Disease. IEEE J Biomed Health Inform 2021; 25:3141-3152. [PMID: 33493122 DOI: 10.1109/jbhi.2021.3053568] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Alzheimer's disease (AD) is the most common cognitive disorder. In recent years, many computer-aided diagnosis techniques have been proposed for AD diagnosis and progression predictions. Among them, graph neural networks (GNNs) have received extensive attention owing to their ability to effectively fuse multimodal features and model the correlation between samples. However, many GNNs for node classification use an entire dataset to construct a large fixed-graph structure, which cannot be used for independent testing. To overcome this limitation while maintaining the advantages of the GNN, we propose an auto-metric GNN (AMGNN) model for AD diagnosis. First, a metric-based meta-learning strategy is introduced to realize inductive learning for independent testing through multiple node classification tasks. In the meta-tasks, the small graphs help make the model insensitive to the sample size, thus improving the performance under small sample size conditions. Furthermore, an AMGNN layer with a probability constraint is designed to realize node similarity metric learning and effectively fuse multimodal data. We verified the model on two tasks based on the TADPOLE dataset: early AD diagnosis and mild cognitive impairment (MCI) conversion prediction. Our model provides excellent performance on both tasks with accuracies of 94.44% and 87.50% and median accuracies of 94.19% and 86.25%, respectively. These results show that our model improves flexibility while ensuring a good classification performance, thus promoting the development of graph-based deep learning algorithms for disease diagnosis.
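The metric-based meta-learning strategy classifies each query node by its similarity to labeled support nodes inside a small per-task graph, which is what makes independent testing possible. A prototypical-style nearest-prototype rule (a stand-in for the learned AMGNN similarity metric, not the model itself) captures the mechanism:

```python
import numpy as np

def metric_node_classify(support_x, support_y, query_x):
    """Classify query nodes in a small meta-task by nearest class prototype.

    support_x: (n_support, d) labeled node features, support_y: their labels,
    query_x: (n_query, d) unlabeled node features. Each query node gets the
    label of the closest class mean -- a fixed Euclidean metric standing in
    for the similarity metric AMGNN learns.
    """
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

# A tiny meta-task: two diagnostic classes, two support nodes each
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.0, 0.5], [5.0, 5.5]])
pred = metric_node_classify(support_x, support_y, query_x)
```

Because each task is a small, freshly built graph, new test subjects can be classified without reconstructing a dataset-wide fixed graph.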
Collapse
|
164
|
Calhoun VD, Pearlson GD, Sui J. Data-driven approaches to neuroimaging biomarkers for neurological and psychiatric disorders: emerging approaches and examples. Curr Opin Neurol 2021; 34:469-479. [PMID: 34054110 PMCID: PMC8263510 DOI: 10.1097/wco.0000000000000967] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
PURPOSE OF REVIEW The 'holy grail' of clinical applications of neuroimaging to neurological and psychiatric disorders via personalized biomarkers has remained mostly elusive, despite considerable effort. However, there are many reasons to remain hopeful, as the field has made remarkable advances over the past few years, fueled by a variety of converging technical and data developments. RECENT FINDINGS We discuss a number of advances that are accelerating the push for neuroimaging biomarkers, including the advent of the 'neuroscience big data' era, biomarker data competitions, and the development of more sophisticated algorithms, including 'guided' data-driven approaches that facilitate automation of network-based analyses, dynamic connectivity, and deep learning. Another key advance is multimodal data fusion, which can provide convergent and complementary evidence pointing to possible mechanisms as well as increase predictive accuracy. SUMMARY The search for clinically relevant neuroimaging biomarkers for neurological and psychiatric disorders is rapidly accelerating. Here, we highlight some of these aspects, provide recent examples from studies in our group, and link to other ongoing work in the field. It is critical that access to and use of these advanced approaches become mainstream; this will help propel the community forward and facilitate the production of robust and replicable neuroimaging biomarkers.
Collapse
Affiliation(s)
- Vince D Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia
| | - Godfrey D Pearlson
- Department of Psychiatry and Neuroscience, Yale School of Medicine, New Haven, Connecticut, USA
| | - Jing Sui
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia
- Institute of Automation, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
165
|
Kang W, Lin L, Zhang B, Shen X, Wu S. Multi-model and multi-slice ensemble learning architecture based on 2D convolutional neural networks for Alzheimer's disease diagnosis. Comput Biol Med 2021; 136:104678. [PMID: 34329864 DOI: 10.1016/j.compbiomed.2021.104678] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 07/20/2021] [Accepted: 07/20/2021] [Indexed: 12/21/2022]
Abstract
Alzheimer's Disease (AD) is a chronic neurodegenerative disease without effective medications or supplemental treatments. Thus, predicting AD progression is crucial for clinical practice and medical research. Due to limited neuroimaging data, two-dimensional convolutional neural networks (2D CNNs) have been commonly adopted to differentiate among cognitively normal subjects (CN), people with mild cognitive impairment (MCI), and AD patients. Therefore, this paper proposes an ensemble learning (EL) architecture based on 2D CNNs, using a multi-model and multi-slice ensemble. First, the top 11 coronal slices of grey matter density maps for AD versus CN classifications were selected. Second, the discriminator of a generative adversarial network, VGG16, and ResNet50 were trained with the selected slices, and the majority voting scheme was used to merge the multi-slice decisions of each model. Afterwards, those three classifiers were used to construct an ensemble model. Multi-slice ensemble learning was designed to obtain spatial features, while multi-model integration reduced the prediction error rate. Finally, transfer learning was used in domain adaptation to refine those CNNs, moving them from working solely with AD versus CN classifications to being applicable to other tasks. This ensemble approach achieved accuracy values of 90.36%, 77.19%, and 72.36% when classifying AD versus CN, AD versus MCI, and MCI versus CN, respectively. Compared with other state-of-the-art 2D studies, the proposed approach provides an effective, accurate, automatic diagnosis along the AD continuum. This technique may enhance AD diagnostics when the sample size is limited.
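The two-level majority-voting scheme — merging each model's multi-slice decisions, then merging the three models' decisions — can be sketched directly. Array shapes are illustrative:

```python
import numpy as np

def majority_vote(predictions):
    """Merge binary decisions by majority vote.

    predictions: (n_voters, n_subjects) array of 0/1 class labels;
    returns the per-subject majority label.
    """
    votes = predictions.sum(axis=0)
    return (votes * 2 > predictions.shape[0]).astype(int)

rng = np.random.default_rng(0)
# 3 models x 11 coronal slices x 5 subjects of binary AD-vs-CN decisions
per_slice = rng.integers(0, 2, size=(3, 11, 5))
# Stage 1: each model merges its 11 slice-level votes
per_model = np.stack([majority_vote(m) for m in per_slice])   # (3, 5)
# Stage 2: the three models vote on the final decision
final = majority_vote(per_model)                              # (5,)
```

Slice-level voting aggregates spatial evidence within a model, while model-level voting averages out the error modes of the individual classifiers, which is the ensemble's stated purpose.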
Collapse
Affiliation(s)
- Wenjie Kang
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
| | - Lan Lin
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China.
| | - Baiwen Zhang
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
| | - Xiaoqi Shen
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
| | - Shuicai Wu
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
| |
Collapse
|
166
|
Ning Z, Tu C, Di X, Feng Q, Zhang Y. Deep cross-view co-regularized representation learning for glioma subtype identification. Med Image Anal 2021; 73:102160. [PMID: 34303890 DOI: 10.1016/j.media.2021.102160] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Revised: 05/04/2021] [Accepted: 06/29/2021] [Indexed: 10/20/2022]
Abstract
The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valid guidance for both risk-benefit assessment and clinical decision-making. Feature representations of gliomas in magnetic resonance imaging (MRI) are widely used for revealing underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative feature representations of gliomas in MRI remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations based on cross-view images generated from MRI via a bi-directional mapping connecting the original imaging space and the latent space; a view-correlated regularizer and an output-consistent regularizer in the latent space are employed to explore view correlation and derive view consistency, respectively. We further learn view-sharable representations, which capture complementary information from multiple views, by projecting the view-specific representations into a holistically shared space and enhancing them via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated for identifying glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.
Collapse
Affiliation(s)
- Zhenyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
| | - Chao Tu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
| | - Xiaohui Di
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
| | - Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| |
Collapse
|
167
|
Zhang Z, Gao L, Jin G, Guo L, Yao Y, Dong L, Han J. THAN: task-driven hierarchical attention network for the diagnosis of mild cognitive impairment and Alzheimer's disease. Quant Imaging Med Surg 2021; 11:3338-3354. [PMID: 34249658 PMCID: PMC8249997 DOI: 10.21037/qims-21-91] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 04/26/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND To assist doctors in diagnosing mild cognitive impairment (MCI) and Alzheimer's disease (AD) early and accurately, convolutional neural networks based on structural magnetic resonance imaging (sMRI) have been developed and have shown excellent performance. However, their capacity to extract discriminative features is still limited, because sMRI volumes are large while lesion regions are small, and the number of available sMRI images is modest. METHODS We proposed a task-driven hierarchical attention network (THAN) that takes advantage of the merits of patch-based and attention-based convolutional neural networks for MCI and AD diagnosis. THAN consists of an information sub-network and a hierarchical attention sub-network. In the information sub-network, an information map extractor, a patch-assistant module, and a mutual-boosting loss function are designed to generate a task-driven information map, which automatically highlights disease-related regions and their importance for final classification. In the hierarchical attention sub-network, a visual attention module and a semantic attention module are devised based on the information map to extract discriminative features for disease diagnosis. RESULTS Extensive experiments were conducted for four classification tasks: MCI versus (vs.) normal controls (NC), AD vs. NC, AD vs. MCI, and AD vs. MCI vs. NC. Results demonstrated that THAN attained accuracies of 81.6% for MCI vs. NC, 93.5% for AD vs. NC, 80.8% for AD vs. MCI, and 62.9% for AD vs. MCI vs. NC. It outperformed advanced attention-based and patch-based methods. Moreover, the information maps generated by the information sub-network could highlight potential biomarkers of MCI and AD, such as the hippocampus and ventricles. Furthermore, when the visual and semantic attention modules were combined, performance on all four tasks was greatly improved. CONCLUSIONS The information sub-network can automatically highlight disease-related regions.
The hierarchical attention sub-network can extract discriminative visual and semantic features. Through the two sub-networks, THAN fully exploits the visual and semantic features of disease-related regions while also considering the global features of sMRI images, which together facilitate the diagnosis of MCI and AD.
Collapse
Affiliation(s)
- Zhehao Zhang
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
| | - Linlin Gao
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
| | - Guang Jin
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
| | - Lijun Guo
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
| | - Yudong Yao
- Research Institute for Medical and Biological Engineering, Ningbo University, Ningbo, China
| | - Li Dong
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
| | - Jinming Han
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - the Alzheimer’s Disease NeuroImaging Initiative
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
- Research Institute for Medical and Biological Engineering, Ningbo University, Ningbo, China
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
168
|
Guan H, Liu Y, Yang E, Yap PT, Shen D, Liu M. Multi-site MRI harmonization via attention-guided deep domain adaptation for brain disorder identification. Med Image Anal 2021; 71:102076. [PMID: 33930828 PMCID: PMC8184627 DOI: 10.1016/j.media.2021.102076] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 12/21/2020] [Accepted: 04/03/2021] [Indexed: 01/18/2023]
Abstract
Structural magnetic resonance imaging (MRI) has shown great clinical and practical value in computer-aided brain disorder identification. Multi-site MRI data increase sample size and statistical power, but are susceptible to inter-site heterogeneity caused by different scanners, scanning protocols, and subject cohorts. Multi-site MRI harmonization (MMH) helps alleviate inter-site differences for subsequent analysis. Some MMH methods that operate at the imaging or feature-extraction level are concise but lack robustness and flexibility to some extent. Although several machine/deep learning-based methods have been proposed for MMH, some of them require a portion of labeled data in the to-be-analyzed target domain or ignore the potential contributions of different brain regions to the identification of brain disorders. In this work, we propose an attention-guided deep domain adaptation (AD2A) framework for MMH and apply it to automated brain disorder identification with multi-site MRIs. The proposed framework does not need any category label information for the target data and can automatically identify discriminative regions in whole-brain MR images. Specifically, the proposed AD2A is composed of three key modules: (1) an MRI feature encoding module to extract representations of input MRIs, (2) an attention discovery module to automatically locate discriminative dementia-related regions in each whole-brain MRI scan, and (3) a domain transfer module trained with adversarial learning for knowledge transfer between the source and target domains. Experiments have been performed on 2572 subjects from four benchmark datasets with T1-weighted structural MRIs, with results demonstrating the effectiveness of the proposed method in both brain disorder identification and disease progression prediction.
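Adversarial knowledge transfer of the kind module (3) performs is commonly realized with a gradient-reversal dynamic: a domain discriminator learns to separate source from target features while the feature side receives the negated gradient. The logistic toy below sketches only that general dynamic; it is not the AD2A module:

```python
import numpy as np

def adversarial_alignment_step(feats, domain, w, lam=1.0, lr=0.1):
    """One toy step of domain-adversarial alignment (gradient reversal).

    A logistic domain discriminator `w` descends its loss to separate
    source (label 0) from target (label 1) features, while the features
    are updated with the negated gradient, nudging both domains toward
    the decision boundary, i.e., toward domain-invariance.
    """
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))           # P(domain = target)
    err = (p - domain) / len(domain)                 # logistic-loss residual
    w_new = w - lr * (feats.T @ err)                 # discriminator step
    feats_new = feats + lam * lr * np.outer(err, w)  # reversed-sign feature step
    return feats_new, w_new

rng = np.random.default_rng(0)
# Two toy "sites": source features centered at -1, target features at +1
feats = np.vstack([rng.normal(-1, 1, (20, 8)), rng.normal(1, 1, (20, 8))])
domain = np.repeat([0.0, 1.0], 20)
w = np.zeros(8)
for _ in range(100):
    feats, w = adversarial_alignment_step(feats, domain, w)
```

In AD2A the feature side is a deep MRI encoder rather than the raw features updated here, but the opposing objectives are the same: the encoder succeeds when the discriminator can no longer tell which site a scan came from.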
Collapse
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Yunbi Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
| | - Erkun Yang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
| |
Collapse
|
169
|
Fu T, Dai LJ, Wu SY, Xiao Y, Ma D, Jiang YZ, Shao ZM. Spatial architecture of the immune microenvironment orchestrates tumor immunity and therapeutic response. J Hematol Oncol 2021; 14:98. [PMID: 34172088 PMCID: PMC8234625 DOI: 10.1186/s13045-021-01103-4] [Citation(s) in RCA: 266] [Impact Index Per Article: 66.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 06/03/2021] [Indexed: 02/08/2023] Open
Abstract
Tumors are not only aggregates of malignant cells but also well-organized complex ecosystems. The immunological components within tumors, termed the tumor immune microenvironment (TIME), have long been shown to be strongly related to tumor development, recurrence, and metastasis. However, conventional studies that underestimate the potential value of the spatial architecture of the TIME are unable to completely elucidate its complexity. As innovative high-throughput, high-dimensional technologies emerge, researchers can more feasibly and accurately detect and depict the spatial architecture of the TIME. These findings have improved our understanding of the complexity and role of the TIME in tumor biology. In this review, we first summarize some representative emerging technologies for studying the spatial architecture of the TIME and categorize the methods used to characterize these structures. We then describe the functions of the spatial architecture of the TIME in tumor biology and the effects of the gradient of extracellular nonspecific chemicals (ENSCs) on the TIME. We also discuss the potential clinical value of understanding the spatial architectures of the TIME, as well as current limitations and future prospects in this novel field. This review will bring the spatial architecture of the TIME, an emerging dimension of tumor ecosystem research, to the attention of more researchers and promote its application in tumor research and clinical practice.
Collapse
Affiliation(s)
- Tong Fu
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Lei-Jie Dai
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Song-Yang Wu
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Yi Xiao
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Ding Ma
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China.
| | - Yi-Zhou Jiang
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China.
| | - Zhi-Ming Shao
- Department of Breast Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.
- Key Laboratory of Breast Cancer in Shanghai, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China.
| |
Collapse
|
170
|
Xia X, Feng B, Wang J, Hua Q, Yang Y, Sheng L, Mou Y, Hu W. Deep Learning for Differentiating Benign From Malignant Parotid Lesions on MR Images. Front Oncol 2021; 11:632104. [PMID: 34249680 PMCID: PMC8262843 DOI: 10.3389/fonc.2021.632104] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Accepted: 06/07/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose/Objective(s) Salivary gland tumors are a rare, histologically heterogeneous group of tumors. The distinction between malignant and benign tumors of the parotid gland is clinically important. This study aims to develop and evaluate a deep-learning network for diagnosing parotid gland tumors from MR images. Materials/Methods Two hundred thirty-three patients with parotid gland tumors were enrolled in this study. Histology results were available for all tumors. All patients underwent MRI scans, including T1-weighted, CE-T1-weighted, and T2-weighted imaging series. The parotid glands and tumors were segmented on all three MR image series by a radiologist with 10 years of clinical experience. A total of 3791 parotid gland region images were cropped from the MR images. A label (pleomorphic adenoma, Warthin tumor, malignant tumor, or tumor-free), based on the histology results, was assigned to each image. To train the deep-learning model, these data were randomly divided into a training dataset (90%, comprising 3035 MR images from 212 patients: 714 pleomorphic adenoma images, 558 Warthin tumor images, 861 malignant tumor images, and 902 tumor-free images) and a validation dataset (10%, comprising 275 images from 21 patients: 57 pleomorphic adenoma images, 36 Warthin tumor images, 93 malignant tumor images, and 89 tumor-free images). A modified ResNet model was developed to classify these images. The input images were resized to 224x224 pixels with four channels (T1-weighted tumor images, T2-weighted tumor images, CE-T1-weighted tumor images, and parotid gland images). Random image flipping and contrast adjustment were used for data augmentation. The model was trained for 1200 epochs with a learning rate of 1e-6 using the Adam optimizer. The whole training procedure took approximately 2 hours. The program was developed with PyTorch (version 1.2).
Results The model accuracy with the training dataset was 92.94% (95% CI [0.91, 0.93]). The micro-AUC was 0.98. The experimental results showed that the accuracy of the final algorithm in the diagnosis and staging of parotid cancer was 82.18% (95% CI [0.77, 0.86]). The micro-AUC was 0.93. Conclusion The proposed model may be used to assist clinicians in the diagnosis of parotid tumors. However, future larger-scale multicenter studies are required for full validation.
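The four-channel input construction described in Materials/Methods (three MR series plus the parotid-gland image, resized to 224x224) can be sketched with a nearest-neighbor resize. The helper below is a hypothetical stand-in for the paper's PyTorch preprocessing, using index sampling in place of a proper interpolation routine:

```python
import numpy as np

def build_input(t1, t2, ce_t1, gland, size=224):
    """Stack T1, T2, and CE-T1 tumor images plus the parotid-gland image
    into one 4-channel array, nearest-neighbor resized to size x size."""
    def resize(img):
        ys = np.arange(size) * img.shape[0] // size   # row indices to sample
        xs = np.arange(size) * img.shape[1] // size   # column indices to sample
        return img[np.ix_(ys, xs)]
    return np.stack([resize(c) for c in (t1, t2, ce_t1, gland)])

# Example: four 300x280 slices become one (4, 224, 224) network input
slices = [np.ones((300, 280)) * k for k in range(4)]
x = build_input(*slices)
```

Feeding all three contrasts and the gland context as separate channels lets a single 2D CNN weigh the series jointly instead of training one network per sequence.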
Affiliation(s)
- Xianwu Xia
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Bin Feng
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Qianjin Hua
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Yide Yang
- Department of Infectious Disease, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Liang Sheng
- Department of Radiology, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Yonghua Mou
- Department of Hepatobiliary Surgery, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
171
Odusami M, Maskeliūnas R, Damaševičius R, Krilavičius T. Analysis of Features of Alzheimer's Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network. Diagnostics (Basel) 2021; 11:1071. [PMID: 34200832 PMCID: PMC8230447 DOI: 10.3390/diagnostics11061071] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 06/04/2021] [Accepted: 06/08/2021] [Indexed: 12/12/2022] Open
Abstract
One of the first signs of Alzheimer's disease (AD) is mild cognitive impairment (MCI), in which subtle brain changes occur across the intermediate stages. Although research into the diagnosis of AD at its early stages of development has increased lately, the complexity of brain changes in functional magnetic resonance imaging (fMRI) makes early detection of AD difficult. This paper proposes a deep learning-based method that can predict MCI, early MCI (EMCI), late MCI (LMCI), and AD. The Alzheimer's Disease Neuroimaging Initiative (ADNI) fMRI dataset consisting of 138 subjects was used for evaluation. The finetuned ResNet18 network achieved a classification accuracy of 99.99%, 99.95%, and 99.95% on the EMCI vs. AD, LMCI vs. AD, and MCI vs. EMCI classification scenarios, respectively. The proposed model performed better than other known models in terms of accuracy, sensitivity, and specificity.
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania; (M.O.); (R.M.)
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania; (M.O.); (R.M.)
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44248 Kaunas, Lithuania
- Tomas Krilavičius
- Department of Applied Informatics, Vytautas Magnus University, 44248 Kaunas, Lithuania
172
Ning Z, Xiao Q, Feng Q, Chen W, Zhang Y. Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis. IEEE Trans Med Imaging 2021; 40:1632-1645. [PMID: 33651685 DOI: 10.1109/tmi.2021.3063150] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD) by providing complementary structural and functional information. However, most of the existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations which may provide more discriminative characteristics for AD identification. Meanwhile, overcoming the overfitting caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between original space and shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning underlying associations inherent in multi-modal data and alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of our proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2), and the experimental results demonstrate that our proposed method outperforms several state-of-the-art methods.
173
Xu X, Lian C, Wang S, Zhu T, Chen RC, Wang AZ, Royce TJ, Yap PT, Shen D, Lian J. Asymmetric multi-task attention network for prostate bed segmentation in computed tomography images. Med Image Anal 2021; 72:102116. [PMID: 34217953 DOI: 10.1016/j.media.2021.102116] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 05/18/2021] [Accepted: 05/21/2021] [Indexed: 10/21/2022]
Abstract
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after the operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians. This is because PB is almost a "virtual" target with non-contrast boundaries and highly variable shapes depending on neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of PB and surrounding OARs. Our AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (i.e., the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (or prerequisite) task of the OAR segmentation. Then, we build an attention sub-network upon the backbone U-Net with a series of cascaded attention modules, which can hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (or primary) task of the PB segmentation. We comprehensively evaluate the proposed AMTA-Net on a clinical dataset composed of 186 CT images. According to the experimental results, our AMTA-Net significantly outperforms the current clinical state of the art (i.e., atlas-based segmentation methods), indicating the value of our method in reducing time and labor in the clinical workflow. Our AMTA-Net also presents better performance than the technical state of the art (i.e., deep learning-based segmentation methods), especially for the most indistinguishable and clinically critical part of the PB boundaries. Source code is released at https://github.com/superxuang/amta-net.
Affiliation(s)
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Shuai Wang
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, Shandong 264209, China
- Tong Zhu
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ronald C Chen
- Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Andrew Z Wang
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Trevor J Royce
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jun Lian
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
174
Ocasio E, Duong TQ. Deep learning prediction of mild cognitive impairment conversion to Alzheimer's disease at 3 years after diagnosis using longitudinal and whole-brain 3D MRI. PeerJ Comput Sci 2021; 7:e560. [PMID: 34141888 PMCID: PMC8176545 DOI: 10.7717/peerj-cs.560] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 05/03/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND While there is no cure for Alzheimer's disease (AD), early diagnosis and accurate prognosis of AD may enable or encourage lifestyle changes, neurocognitive enrichment, and interventions to slow the rate of cognitive decline. The goal of our study was to develop and evaluate a novel deep learning algorithm to predict mild cognitive impairment (MCI) to AD conversion at three years after diagnosis using longitudinal and whole-brain 3D MRI. METHODS This retrospective study consisted of 320 normal cognition (NC), 554 MCI, and 237 AD patients. Longitudinal data included T1-weighted 3D MRI obtained at initial presentation with diagnosis of MCI and at 12-month follow-up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. MRIs of the AD and NC cohort were used to train a deep learning classification model to obtain weights to be applied via transfer learning for prediction of MCI patient conversion to AD at three years post-diagnosis. Two transfer learning methods (zero-shot and fine-tuning) were evaluated. Three different convolutional neural network (CNN) architectures (sequential, residual bottleneck, and wide residual) were compared. Data were split into 75% and 25% for training and testing, respectively, with 4-fold cross validation. Prediction accuracy was evaluated using balanced accuracy. Heatmaps were generated. RESULTS The sequential convolutional approach yielded slightly better performance than the residual-based architecture, the zero-shot transfer learning approach yielded better performance than fine-tuning, and the CNN using longitudinal data performed better than the CNN using a single-timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at three years after diagnosis yielded a balanced accuracy of 0.793.
Heatmaps showed the regions most relevant to the prediction, including the lateral ventricles, periventricular white matter, and cortical gray matter. CONCLUSIONS This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs without extracting regional brain volumes or cortical thicknesses to predict future MCI to AD conversion at 3 years after diagnosis. This approach could lead to early prediction of patients who are likely to progress to AD and thus may lead to better management of the disease.
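Balanced accuracy, the metric reported in this entry, is the mean of per-class recalls, which makes it robust to the class imbalance between converters and non-converters. A minimal implementation:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy = mean of per-class recalls."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]  # samples of class c
        correct = sum(1 for i in idx if y_pred[i] == c)    # correctly recalled
        recalls.append(correct / len(idx))
    return sum(recalls) / len(classes)
```

With a heavily skewed label distribution, a trivial majority-class predictor scores 0.5 here rather than the inflated plain accuracy it would otherwise receive.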
Affiliation(s)
- Ethan Ocasio
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY, United States of America
- Tim Q. Duong
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY, United States of America
175
Akramifard H, Balafar MA, Razavi SN, Ramli AR. Early Detection of Alzheimer's Disease Based on Clinical Trials, Three-Dimensional Imaging Data, and Personal Information Using Autoencoders. J Med Signals Sens 2021; 11:120-130. [PMID: 34268100 PMCID: PMC8253314 DOI: 10.4103/jmss.jmss_11_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 03/16/2019] [Accepted: 08/30/2020] [Indexed: 12/02/2022]
Abstract
Background: A timely diagnosis of Alzheimer's disease (AD) is crucial to obtain more practical treatments. In this article, a novel approach using autoencoder neural networks (AENN) for early detection of AD was proposed. Method: The proposed method mainly deals with the classification of multimodal data and the imputation of missing data. The data under study involve the Mini-Mental State Examination, magnetic resonance imaging, positron emission tomography, cerebrospinal fluid data, and personal information. The natural logarithm was used for normalizing the data. The AENN was used for imputing missing data, and a principal component analysis algorithm was used for reducing the dimensionality of the data. A support vector machine (SVM) was used as the classifier. The proposed method was evaluated using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and 10-fold cross-validation was used to assess the detection accuracy of the method. Results: The effectiveness of the proposed approach was studied under several scenarios considering 705 cases of the ADNI database. In three binary classification problems, that is, AD vs. normal controls (NCs), mild cognitive impairment (MCI) vs. NC, and MCI vs. AD, we obtained accuracies of 95.57%, 83.01%, and 78.67%, respectively. Conclusion: Experimental results revealed that the proposed method significantly outperformed most of the state-of-the-art methods.
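Two of the preprocessing steps named in this entry, natural-log normalization and PCA dimensionality reduction, can be sketched as below. The offset inside the logarithm and the SVD-based PCA are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

def log_normalize(X):
    """Natural-log normalization; shifts each feature so all
    log arguments are >= 1 (an assumption of this sketch)."""
    X = np.asarray(X, dtype=float)
    return np.log(X - X.min(axis=0) + 1.0)

def pca_reduce(X, n_components):
    """Project centered features onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # scores in the reduced space
```

The reduced features would then feed a classifier such as the SVM used in the paper.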
Affiliation(s)
- Hamid Akramifard
- Department of Software Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, East Azerbaijan, Tabriz, Iran
- Mohammad Ali Balafar
- Department of Software Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, East Azerbaijan, Tabriz, Iran
- Seyed Naser Razavi
- Department of Software Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, East Azerbaijan, Tabriz, Iran
- Abd Rahman Ramli
- Department of Software Engineering, Faculty of Engineering, University Putra Malaysia, Selangor, Malaysia
176
He K, Zhao W, Xie X, Ji W, Liu M, Tang Z, Shi Y, Shi F, Gao Y, Liu J, Zhang J, Shen D. Synergistic learning of lung lobe segmentation and hierarchical multi-instance classification for automated severity assessment of COVID-19 in CT images. Pattern Recognit 2021; 113:107828. [PMID: 33495661 PMCID: PMC7816595 DOI: 10.1016/j.patcog.2021.107828] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 12/10/2020] [Accepted: 12/22/2020] [Indexed: 05/03/2023]
Abstract
Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) will help detect infections early and assess the disease progression. In particular, automated severity assessment of COVID-19 in CT images plays an essential role in identifying cases that are in great need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease in CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images, by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to the severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (each cropped from a specific slice). A multi-task multi-instance deep network (called M2UNet) is then developed to assess the severity of COVID-19 patients and simultaneously segment the lung lobes. Our M2UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (with a unique hierarchical multi-instance learning strategy). Here, the context information provided by segmentation can be implicitly employed to improve the performance of severity assessment. Extensive experiments were performed on a real COVID-19 CT image dataset consisting of 666 chest CT images, with results suggesting the effectiveness of our proposed method compared to several state-of-the-art methods.
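The multi-instance intuition here, that a bag-level severity score is driven by a few suspicious patches, can be sketched with simple top-k pooling over per-patch scores. This is an illustrative simplification, not the paper's hierarchical multi-instance strategy.

```python
import numpy as np

def bag_severity_score(patch_scores, top_k=3):
    """Aggregate per-patch severity scores into one bag-level score
    by averaging the top-k most suspicious patches."""
    s = np.sort(np.asarray(patch_scores, dtype=float))[::-1]  # descending
    k = min(top_k, s.size)
    return float(s[:k].mean())
```

Top-k pooling lets a handful of clearly infected patches dominate the bag label, while many healthy patches cannot dilute it the way plain mean pooling would.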
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China
- National Institute of Healthcare Data Science at Nanjing University, China
- Wei Zhao
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Xingzhi Xie
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wen Ji
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Mingxia Liu
- Biomedical Research Imaging Center and the Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Zhenyu Tang
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China
- Yinghuan Shi
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Jun Liu
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Department of Radiology Quality Control Center, Changsha, China
- Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China
- National Institute of Healthcare Data Science at Nanjing University, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
177
Ding Y, Zhao K, Che T, Du K, Sun H, Liu S, Zheng Y, Li S, Liu B, Liu Y. Quantitative Radiomic Features as New Biomarkers for Alzheimer's Disease: An Amyloid PET Study. Cereb Cortex 2021; 31:3950-3961. [PMID: 33884402 DOI: 10.1093/cercor/bhab061] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Revised: 01/29/2021] [Accepted: 02/22/2021] [Indexed: 12/20/2022] Open
Abstract
Growing evidence indicates that amyloid-beta (Aβ) accumulation is one of the most common neurobiological biomarkers in Alzheimer's disease (AD). The primary aim of this study was to explore whether radiomic features of Aβ positron emission tomography (PET) images can serve as predictors and provide a neurobiological foundation for AD. The radiomic features of Aβ PET imaging of each brain region of the Brainnetome Atlas were computed for classification and prediction using a support vector machine model. The results showed that the area under the receiver operating characteristic curve (AUC) was 0.93 for distinguishing AD (N = 291) from normal control (NC; N = 334). Additionally, the AUC was 0.83 for the prediction of mild cognitive impairment (MCI) converting (N = 88) (vs. no conversion, N = 100) to AD. In the MCI and AD groups, the systemic analysis demonstrated that the classification outputs were significantly associated with clinical measures (apolipoprotein E genotype, polygenic risk scores, polygenic hazard scores, cerebrospinal fluid Aβ and Tau, cognitive ability score, the conversion time for progressive MCI subjects, and cognitive changes). These findings provide evidence that the radiomic features of Aβ PET images can serve as new biomarkers for clinical applications in AD/MCI, further providing evidence for predicting whether MCI subjects will convert to AD.
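Per-region radiomic feature extraction of the kind described in this entry can be sketched as below with simple first-order statistics over an atlas region; these three features are illustrative and not the paper's exact feature set.

```python
import numpy as np

def roi_radiomic_features(image, labels, roi_id, n_bins=32):
    """First-order radiomic features (mean, std, histogram entropy)
    of the PET intensities inside one atlas region."""
    vals = np.asarray(image, dtype=float)[np.asarray(labels) == roi_id]
    hist, _ = np.histogram(vals, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                                # drop empty bins
    entropy = float(-(p * np.log2(p)).sum())    # Shannon entropy in bits
    return {"mean": float(vals.mean()), "std": float(vals.std()), "entropy": entropy}
```

Concatenating such feature vectors across all atlas regions yields the per-subject representation that a classifier like an SVM would consume.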
Affiliation(s)
- Yanhui Ding
- School of Information Science and Engineering, Shandong Normal University, Ji'nan 250014, China
- Kun Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Tongtong Che
- School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Kai Du
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China
- Hongzan Sun
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang 110004, China
- Shu Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Ji'nan 250014, China
- Shuyu Li
- School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Bing Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China
- Yong Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China; Pazhou Lab, Guangzhou 510330, China; School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
178
Han K, Luo J, Xiao Q, Ning Z, Zhang Y. Light-weight cross-view hierarchical fusion network for joint localization and identification in Alzheimer's disease with adaptive instance-declined pruning. Phys Med Biol 2021; 66. [PMID: 33765665 DOI: 10.1088/1361-6560/abf200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 03/25/2021] [Indexed: 11/11/2022]
Abstract
Magnetic resonance imaging (MRI) has been widely used in assessing the development of Alzheimer's disease (AD) by providing structural information of disease-associated regions (e.g. atrophic regions). In this paper, we propose a light-weight cross-view hierarchical fusion network (CvHF-net), consisting of local patch and global subject subnets, for joint localization and identification of the discriminative local patches and regions in the whole brain MRI, upon which feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. First, based on the extracted class-discriminative 3D patches, we employ the local patch subnets, which use multiple 2D views to represent each 3D patch through an attention-aware hierarchical fusion structure in a divide-and-conquer manner. Since different local patches contribute differently to AD identification, the global subject subnet is developed to bias the allocation of available resources towards the most informative parts among these local patches to obtain global information for AD identification. Besides, an instance-declined pruning algorithm is embedded in the CvHF-net for adaptively selecting the most discriminative patches in a task-driven manner. The proposed method was evaluated on the AD Neuroimaging Initiative dataset, and the experimental results show that our proposed method can achieve good performance on AD diagnosis.
Affiliation(s)
- Kangfu Han
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, People's Republic of China
- Jiaxiu Luo
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, People's Republic of China
- Qing Xiao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, People's Republic of China
- Zhenyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, People's Republic of China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, People's Republic of China
179
Budd S, Robinson EC, Kainz B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med Image Anal 2021; 71:102062. [PMID: 33901992 DOI: 10.1016/j.media.2021.102062] [Citation(s) in RCA: 135] [Impact Index Per Article: 33.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 03/26/2021] [Accepted: 03/30/2021] [Indexed: 12/21/2022]
Abstract
Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in clinical practice: (1) Active learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future prospects and unanswered questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.
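The active-learning area surveyed in this entry is often introduced through uncertainty sampling: query the annotator on the unlabeled samples the model is least sure about. A minimal entropy-based sketch:

```python
import math

def entropy_query(probabilities, budget):
    """Return indices of the `budget` samples whose predicted class
    distributions have the highest entropy (classic uncertainty sampling)."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)

    scores = [(entropy(p), i) for i, p in enumerate(probabilities)]
    scores.sort(reverse=True)           # most uncertain first
    return [i for _, i in scores[:budget]]
```

The selected samples are sent to the human annotator, the model is retrained, and the loop repeats until the labeling budget is exhausted.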
Affiliation(s)
- Samuel Budd
- Department of Computing, Imperial College London, UK.
180
Yao D, Sui J, Wang M, Yang E, Jiaerken Y, Luo N, Yap PT, Liu M, Shen D. A Mutual Multi-Scale Triplet Graph Convolutional Network for Classification of Brain Disorders Using Functional or Structural Connectivity. IEEE Trans Med Imaging 2021; 40:1279-1289. [PMID: 33444133 PMCID: PMC8238125 DOI: 10.1109/tmi.2021.3051604] [Citation(s) in RCA: 69] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Brain connectivity alterations associated with mental disorders have been widely reported in both functional MRI (fMRI) and diffusion MRI (dMRI). However, extracting useful information from the vast amount of information afforded by brain networks remains a great challenge. By capturing network topology, graph convolutional networks (GCNs) have proven superior in learning network representations tailored for identifying specific brain disorders. Existing graph construction techniques generally rely on a specific brain parcellation to define regions-of-interest (ROIs) to construct networks, often limiting the analysis to a single spatial scale. In addition, most methods focus on the pairwise relationships between the ROIs and ignore high-order associations between subjects. In this letter, we propose a mutual multi-scale triplet graph convolutional network (MMTGCN) to analyze functional and structural connectivity for brain disorder diagnosis. We first employ several templates with different scales of ROI parcellation to construct coarse-to-fine brain connectivity networks for each subject. Then, a triplet GCN (TGCN) module is developed to learn functional/structural representations of brain connectivity networks at each scale, with the triplet relationship among subjects explicitly incorporated into the learning process. Finally, we propose a template mutual learning strategy to train the different-scale TGCNs collaboratively for disease classification. Experimental results on 1,160 subjects from three datasets with fMRI or dMRI data demonstrate that our MMTGCN outperforms several state-of-the-art methods in identifying three types of brain disorders.
181
Zhao X, Ang CKE, Acharya UR, Cheong KH. Application of Artificial Intelligence techniques for the detection of Alzheimer’s disease using structural MRI images. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.02.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Indexed: 01/18/2023]
182
He K, Lian C, Adeli E, Huo J, Gao Y, Zhang B, Zhang J, Shen D. MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling. Med Image Anal 2021; 71:102039. [PMID: 33831595 DOI: 10.1016/j.media.2021.102039] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Received: 06/11/2020] [Revised: 02/13/2021] [Accepted: 03/09/2021] [Indexed: 10/21/2022]
Abstract
Fully convolutional networks (FCNs), including UNet and VNet, are widely used architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with the cross-entropy or Dice loss, which only calculates the error between predictions and ground-truth labels for each pixel individually. This often results in non-smooth neighborhoods in the predicted segmentation. The problem becomes more serious in CT prostate segmentation, as CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we implement extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method can effectively learn more representative voxel-level features than conventional learning with the cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
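The online voxel-wise tuple sampling can be illustrated with a small numpy sketch. The index-based interface and the foreground/background split below are assumptions for illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_voxel_triplets(labels, n_triplets=4):
    """Draw (anchor, positive, negative) voxel *indices* online from the
    current label map: anchor and positive share a class, negative differs."""
    fg = np.flatnonzero(labels == 1)   # e.g. prostate voxels
    bg = np.flatnonzero(labels == 0)   # background voxels
    triplets = []
    for _ in range(n_triplets):
        a, p = rng.choice(fg, size=2, replace=False)
        n = rng.choice(bg)
        triplets.append((int(a), int(p), int(n)))
    return triplets

labels = np.array([1, 1, 1, 0, 0, 1, 0, 0])   # toy flattened voxel labels
triplets = sample_voxel_triplets(labels)
# Each sampled triplet would index the intermediate feature map during training,
# so tuples follow the evolving features rather than being fixed in advance.
```

Because the indices are drawn fresh at every step, the metric loss always sees tuples from the current feature space, which is the point of online sampling.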
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Chunfeng Lian
- School of Mathematics and Statistics, Xi'an Jiaotong University, Shanxi, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences and the Department of Computer Science, Stanford University, CA, USA
- Jing Huo
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing University Medical School, Nanjing, China
- Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
183
Qu Y, Wang P, Liu B, Song C, Wang D, Yang H, Zhang Z, Chen P, Kang X, Du K, Yao H, Zhou B, Han T, Zuo N, Han Y, Lu J, Yu C, Zhang X, Jiang T, Zhou Y, Liu Y. AI4AD: Artificial intelligence analysis for Alzheimer's disease classification based on a multisite DTI database. Brain Disord 2021. [DOI: 10.1016/j.dscb.2021.100005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Indexed: 12/12/2022] Open
184
Yang E, Liu M, Yao D, Cao B, Lian C, Yap PT, Shen D. Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval. IEEE Trans Med Imaging 2021; 40:503-513. [PMID: 33048672 PMCID: PMC7909752 DOI: 10.1109/tmi.2020.3030752] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Indexed: 05/29/2023]
Abstract
Multi-modal neuroimage retrieval has greatly improved the efficiency and accuracy of decision making in clinical practice by providing physicians with previous cases (with visually similar neuroimages) and the corresponding treatment records. However, existing image retrieval methods usually fail when applied directly to multi-modal neuroimage databases, since neuroimages generally have smaller inter-class variation and larger inter-modal discrepancy than natural images. To this end, we propose a deep Bayesian hash learning framework, called CenterHash, which can map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea for tackling the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Specifically, we measure the similarity between hash codes and their corresponding center representations and treat it as a center prior in the proposed Bayesian learning framework. A weighted contrastive likelihood loss function is also developed to facilitate hash learning from imbalanced neuroimage pairs. Comprehensive empirical evidence shows that our method can generate effective hash codes and yields state-of-the-art cross-modal retrieval performance on three multi-modal neuroimage datasets.
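In relaxed continuous form, the center-prior idea (hash codes drawn toward a per-class center shared across modalities) reduces to a squared-distance penalty. This numpy sketch uses made-up 2-bit codes and is not the paper's exact likelihood:

```python
import numpy as np

def center_prior_penalty(codes, labels, centers):
    """Mean squared distance between each relaxed hash code and the shared
    center of its class; codes of one class from different modalities
    (e.g. MRI- and PET-derived) are pulled onto the same center."""
    diffs = codes - centers[labels]                 # (N, bits) residuals
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

centers = np.array([[1.0, 1.0], [-1.0, -1.0]])      # one center per class
codes = np.array([[0.9, 1.1],                       # class-0 code, modality A
                  [1.0, 1.0],                       # class-0 code, modality B
                  [-1.0, -1.0]])                    # class-1 code
labels = np.array([0, 0, 1])
penalty = center_prior_penalty(codes, labels, centers)
```

Minimizing this term alongside a contrastive likelihood pushes same-class codes together regardless of modality, which is what shrinks the inter-modal gap.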
185
Buvaneswari PR, Gayathri R. Deep Learning-Based Segmentation in Classification of Alzheimer’s Disease. Arab J Sci Eng 2021. [DOI: 10.1007/s13369-020-05193-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Indexed: 12/24/2022]
186
Pan X, Phan TL, Adel M, Fossati C, Gaidon T, Wojak J, Guedj E. Multi-View Separable Pyramid Network for AD Prediction at MCI Stage by 18F-FDG Brain PET Imaging. IEEE Trans Med Imaging 2021; 40:81-92. [PMID: 32894711 DOI: 10.1109/tmi.2020.3022591] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Indexed: 06/11/2023]
Abstract
Alzheimer's Disease (AD), one of the main causes of death in elderly people, is characterized at its prodromal stage by Mild Cognitive Impairment (MCI). Nevertheless, only a fraction of MCI subjects progress to AD. The main objective of this paper is thus to identify, among MCI patients, those who will develop AD-type dementia. 18F-FluoroDeoxyGlucose Positron Emission Tomography (18F-FDG PET) serves as a neuroimaging modality for early diagnosis, as it reflects neural activity by measuring glucose uptake at resting state. In this paper, we design a deep network on the 18F-FDG PET modality to address the problem of AD identification at the early MCI stage. To this end, a Multi-view Separable Pyramid Network (MiSePyNet) is proposed, in which representations are learned from axial, coronal and sagittal views of PET scans so as to offer complementary information, and then combined to make a decision jointly. Different from the widely and naturally used 3D convolution operations for 3D images, the proposed architecture is deployed with separable convolution, from slice-wise to spatial-wise successively, which can retain spatial information and reduce the number of trainable parameters compared to 2D and 3D networks, respectively. Experiments on the ADNI dataset show that the proposed method yields better performance than both traditional and deep learning-based algorithms for predicting the progression of Mild Cognitive Impairment, with a classification accuracy of 83.05%.
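One way to see the benefit of a slice-wise/spatial-wise separable factorization is a simple parameter count. The k + k×k split below is an illustrative assumption about how a k×k×k kernel might be factorized, not the exact MiSePyNet configuration:

```python
def conv3d_params(c_in, c_out, k):
    """Weights (no bias) of one full 3D convolution layer."""
    return c_in * c_out * k ** 3

def separable_params(c_in, c_out, k):
    """Slice-wise 1D kernel (length k) followed by an in-plane k x k kernel,
    covering the same k x k x k neighborhood with two cheaper convolutions."""
    return c_in * c_out * k + c_in * c_out * k ** 2

full = conv3d_params(32, 32, 3)     # 27648 weights for one 3x3x3 layer
sep = separable_params(32, 32, 3)   # 12288 weights for the separable pair
```

The factorized pair needs fewer than half the weights here, which is the kind of saving that makes training on modest PET datasets more tractable.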
187
Yee E, Ma D, Popuri K, Wang L, Beg MF. Construction of MRI-Based Alzheimer's Disease Score Based on Efficient 3D Convolutional Neural Network: Comprehensive Validation on 7,902 Images from a Multi-Center Dataset. J Alzheimers Dis 2021; 79:47-58. [PMID: 33252079 PMCID: PMC9159475 DOI: 10.3233/jad-200830] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Indexed: 01/18/2023]
Abstract
BACKGROUND In recent years, many convolutional neural networks (CNNs) have been proposed for the classification of Alzheimer's disease. Due to memory constraints, many of the proposed CNNs work at the 2D slice level or 3D patch level. OBJECTIVE Here, we propose a subject-level 3D CNN that can extract the neurodegenerative patterns of whole-brain MRI and convert them into a probabilistic dementia score. METHODS We propose an efficient and lightweight subject-level 3D CNN featuring dilated convolutions. We trained our network on ADNI data to distinguish stable Dementia of the Alzheimer's type (sDAT) from stable normal controls (sNC). To comprehensively evaluate the generalizability of our proposed network, we performed four independent tests, which include testing on images from other ADNI individuals at various stages of dementia, images acquired from other sites (AIBL), images acquired using different protocols (OASIS), and longitudinal images acquired over a short period of time (MIRIAD). RESULTS We achieved a 5-fold cross-validated balanced accuracy of 88% in differentiating sDAT from sNC, and an overall specificity of 79.5% and sensitivity of 79.7% on the entire set of 7,902 independent test images. CONCLUSION Independent testing is essential for estimating the generalization ability of a network to unseen data, but is often lacking in studies using CNNs for DAT classification. This makes it difficult to compare the performances achieved with different architectures. Our comprehensive evaluation highlights the competitive performance of our network and its promise for generalization.
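The appeal of dilated convolutions for a subject-level 3D CNN is receptive-field growth without extra parameters; the standard stride-1 formula is sketched below, with illustrative layer settings rather than the authors' exact architecture:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field along one axis for stacked stride-1 convolutions:
    rf = 1 + sum over layers of (k - 1) * d."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3x3 layers with dilations 1, 2, 4 already see 15 voxels per axis,
# versus 7 for the same three layers without dilation -- same weight count.
wide = receptive_field([3, 3, 3], [1, 2, 4])
plain = receptive_field([3, 3, 3], [1, 1, 1])
```

Exponentially growing dilations are a common way to let a shallow, lightweight network cover whole-brain context, which is what makes subject-level training fit in memory.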
Affiliation(s)
- Evangeline Yee
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Da Ma
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Karteek Popuri
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Lei Wang
- Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
188
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is also increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
189
Ahmed S, Kim BC, Lee KH, Jung HY. Ensemble of ROI-based convolutional neural network classifiers for staging the Alzheimer disease spectrum from magnetic resonance imaging. PLoS One 2020; 15:e0242712. [PMID: 33290403 PMCID: PMC7723284 DOI: 10.1371/journal.pone.0242712] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Received: 07/10/2020] [Accepted: 11/07/2020] [Indexed: 11/26/2022] Open
Abstract
Patches from three orthogonal views of selected cerebral regions can be utilized to learn convolutional neural network (CNN) models for staging the Alzheimer disease (AD) spectrum, including preclinical AD, mild cognitive impairment due to AD, dementia due to AD, and normal controls. Hippocampi, amygdalae and insulae were selected from the volumetric analysis of structural magnetic resonance images (MRIs). Three-view patches (TVPs) from these regions were fed to the CNN for training. MRIs were classified with the SoftMax-normalized scores of individual model predictions on TVPs. The significance of each region of interest (ROI) for staging the AD spectrum was evaluated and reported. The results of the ensemble classifier are compared with state-of-the-art methods using the same evaluation metrics. Patch-based ROI ensembles provide comparable diagnostic performance for AD staging. In this work, TVP-based ROI analysis using a CNN provides informative landmarks in cerebral MRIs and may have significance in clinical studies and computer-aided diagnosis system design.
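Combining SoftMax-normalized scores from per-ROI models into one decision can be sketched with numpy. Averaging is an assumed combination rule and the toy logits are invented, so treat this as an illustration of score-level ensembling rather than the paper's exact procedure:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    """Average the SoftMax scores of each ROI model (hippocampus, amygdala,
    insula, ...) and return the winning stage plus the averaged scores."""
    probs = np.mean([softmax(z) for z in logits_per_model], axis=0)
    return int(np.argmax(probs)), probs

# Toy 4-class logits from three ROI models for one scan (hypothetical values).
logits = [np.array([0.2, 2.0, 0.1, 0.0]),
          np.array([0.0, 1.5, 0.5, 0.0]),
          np.array([0.3, 0.9, 0.8, 0.1])]
stage, probs = ensemble_predict(logits)
```

Averaging probabilities rather than raw logits keeps each ROI model's vote on a common scale, so no single region with over-confident logits dominates the ensemble.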
Affiliation(s)
- Samsuddin Ahmed
- Department of Computer Engineering, Chosun University, Gwangju, South Korea
- Byeong C. Kim
- Gwangju Alzheimer’s disease and Related Dementias Cohort Research Center, Chosun University, Gwangju, Korea
- Department of Neurology, Chonnam National University Medical School, Gwangju, South Korea
- Kun Ho Lee
- Gwangju Alzheimer’s disease and Related Dementias Cohort Research Center, Chosun University, Gwangju, Korea
- Department of Biomedical Science, Chosun University, Gwangju, South Korea
- Korea Brain Research Institute, Daegu, Korea
- Ho Yub Jung
- Department of Computer Engineering, Chosun University, Gwangju, South Korea
190
Aderghal K, Afdel K, Benois-Pineau J, Catheline G. Improving Alzheimer's stage categorization with Convolutional Neural Network using transfer learning and different magnetic resonance imaging modalities. Heliyon 2020; 6:e05652. [PMID: 33336093 PMCID: PMC7733012 DOI: 10.1016/j.heliyon.2020.e05652] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Received: 06/04/2020] [Revised: 07/04/2020] [Accepted: 11/30/2020] [Indexed: 01/05/2023] Open
Abstract
BACKGROUND Alzheimer's Disease (AD) is a neurodegenerative disease characterized by progressive loss of memory and a general decline in cognitive functions. Multi-modal imaging such as structural MRI and DTI provides useful information for classifying patients on the basis of brain biomarkers. Recently, CNN methods have emerged as powerful tools to improve classification using images. NEW METHOD In this paper, we propose a transfer learning scheme using Convolutional Neural Networks (CNNs) to automatically classify brain scans focusing only on a small ROI, e.g. a few slices of the hippocampal region. The network's architecture is similar to a LeNet-like CNN, upon which models are built and fused for AD stage classification. We evaluated various types of transfer learning through the following mechanisms: (i) cross-modal (sMRI and DTI), (ii) cross-domain transfer learning (using MNIST), and (iii) a hybrid of both. RESULTS Our method shows good performance even on small datasets and with a limited number of slices from a small brain region. It increases accuracy by more than 5 points for the most difficult classification tasks, i.e., AD/MCI and MCI/NC. COMPARISON WITH EXISTING METHODS Our methodology provides good accuracy scores for classification with a shallow convolutional network. Besides, we focus only on a small region, i.e., the hippocampal region, from which few slices are selected to feed the network, and we use cross-modal transfer learning. CONCLUSIONS Our proposed method is suitable for working with a shallow CNN network on low-resolution MRI and DTI scans. It yields significant results even when the model is trained on small datasets, which is often the case in medical image analysis.
Affiliation(s)
- Karim Aderghal
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400, Talence, France
- LabSIV, Faculty of Sciences, Department of Computer Science, Ibn Zohr University, Agadir, Morocco
- Karim Afdel
- LabSIV, Faculty of Sciences, Department of Computer Science, Ibn Zohr University, Agadir, Morocco
- Jenny Benois-Pineau
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400, Talence, France
- Gwénaëlle Catheline
- Univ. Bordeaux, CNRS, UMR 5287, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine (INCIA), Bordeaux, France
191
Raju M, Gopi VP, Anitha VS, Wahid KA. Multi-class diagnosis of Alzheimer's disease using cascaded three dimensional-convolutional neural network. Phys Eng Sci Med 2020; 43:1219-1228. [PMID: 32926392 DOI: 10.1007/s13246-020-00924-w] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Received: 03/06/2020] [Accepted: 09/03/2020] [Indexed: 12/13/2022]
Abstract
Dementia is a social problem in the aging societies of advanced countries. Presently, 46.8 million people are affected by dementia worldwide, and this number may increase to 130 million by 2050. Alzheimer's disease (AD) is the most common form of dementia. The cost of care for AD patients in 2015 was 818 billion US dollars and is expected to rise sharply as the number of patients grows with the aging of society. AD is not easy to cure, but early detection is crucial. This paper proposes a multi-class classification of AD, mild cognitive impairment (MCI), and normal control (NC) subjects using a three-dimensional convolutional neural network with a Support Vector Machine classifier. A cross-sectional study on structural MRI data of 465 subjects, including 132 AD patients, 181 MCI, and 152 NC, is performed in this paper. The highly complex and spatial atrophy patterns of the brain related to Alzheimer's disease and MCI are extracted from structural MRI images using cascaded layers of the three-dimensional convolutional neural network. The laborious process of segmentation and further extraction of handcrafted features is eliminated. The complete image is considered for processing, thus incorporating every region of the brain into the classification. The features extracted using four cascaded layers of the three-dimensional convolutional neural network are fed into the Support Vector Machine classifier. The proposed method achieved 97.77% accuracy, which outperforms the state of the art, and this algorithm is a promising indicator for the diagnosis of AD.
Affiliation(s)
- Manu Raju
- Department of Electronics and Communication Engineering, Government Engineering College Wayanad, APJ Abdul Kalam Technological University, Thiruvananthapuram, Kerala, India
- Varun P Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tiruchirappalli, India
- V S Anitha
- Department of Computer Science and Engineering, Government Engineering College Wayanad, APJ Abdul Kalam Technological University, Thiruvananthapuram, Kerala, India
- Khan A Wahid
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
192
Wang M, Hao X, Huang J, Wang K, Shen L, Xu X, Zhang D, Liu M. Hierarchical Structured Sparse Learning for Schizophrenia Identification. Neuroinformatics 2020; 18:43-57. [PMID: 31016571 DOI: 10.1007/s12021-019-09423-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Indexed: 01/01/2023]
Abstract
Fractional amplitude of low-frequency fluctuation (fALFF) has been widely used for resting-state functional magnetic resonance imaging (rs-fMRI) based schizophrenia (SZ) diagnosis. However, previous studies usually measure the fALFF within the low-frequency band (from 0.01 to 0.08 Hz), which cannot fully cover the complex neural activity patterns in the resting-state brain. In addition, existing studies usually ignore the fact that each specific frequency band can delineate unique spontaneous fluctuations of neural activities in the brain. Accordingly, in this paper, we propose a novel hierarchical structured sparse learning method to sufficiently utilize the specificity and complementary structure information across four different frequency bands (from 0.01 Hz to 0.25 Hz) for SZ diagnosis. The proposed method can help preserve the partial group structures among multiple frequency bands and the specific characteristics of each frequency band. We further develop an efficient optimization algorithm to solve the proposed objective function. We validate the efficacy of our proposed method on a real SZ dataset. Also, to demonstrate the generality of the method, we apply it to a subset of the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results on both datasets demonstrate that our proposed method achieves promising performance in brain disease classification, compared with several state-of-the-art methods.
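The structured-sparsity term underlying such methods is typically a group-lasso penalty with one group per frequency band. This minimal numpy sketch uses invented weights and grouping, not the paper's full hierarchical objective:

```python
import numpy as np

def group_lasso_penalty(w, groups, lam=1.0):
    """Sum of per-group L2 norms: whole frequency bands are encouraged
    to be selected or zeroed out together, rather than feature by feature."""
    return lam * sum(float(np.linalg.norm(w[g])) for g in groups)

w = np.array([3.0, 4.0, 0.0, 0.0])        # band 1 active, band 2 zeroed out
groups = [[0, 1], [2, 3]]                 # one index group per frequency band
penalty = group_lasso_penalty(w, groups)  # only the active band contributes
```

Because the L2 norm is applied per group (not per coefficient, as in plain lasso), the optimizer tends to drop entire uninformative bands while keeping the informative ones intact, which matches the band-level structure the method exploits.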
Affiliation(s)
- Mingliang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China; The State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, Shaanxi, China
- Xiaoke Hao
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Jiashuang Huang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Kangcheng Wang
- Department of Psychology, Southwest University, Chongqing, China
- Li Shen
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Xijia Xu
- Department of Psychiatry, Affiliated Nanjing Brain Hospital, Nanjing Medical University, Nanjing, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
193
Zhang L, Wang M, Liu M, Zhang D. A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis. Front Neurosci 2020; 14:779. [PMID: 33117114 PMCID: PMC7578242 DOI: 10.3389/fnins.2020.00779] [Citation(s) in RCA: 84] [Impact Index Per Article: 16.8] [Received: 05/10/2020] [Accepted: 06/02/2020] [Indexed: 12/12/2022] Open
Abstract
Deep learning has recently been used for the analysis of neuroimages, such as structural magnetic resonance imaging (MRI), functional MRI, and positron emission tomography (PET), and it has achieved significant performance improvements over traditional machine learning in computer-aided diagnosis of brain disorders. This paper reviews the applications of deep learning methods for neuroimaging-based brain disorder analysis. We first provide a comprehensive overview of deep learning techniques and popular network architectures by introducing various types of deep neural networks and recent developments. We then review deep learning methods for computer-aided analysis of four typical brain disorders, including Alzheimer's disease, Parkinson's disease, Autism spectrum disorder, and Schizophrenia, where the first two diseases are neurodegenerative disorders and the last two are neurodevelopmental and psychiatric disorders, respectively. More importantly, we discuss the limitations of existing studies and present possible future directions.
Affiliation(s)
- Li Zhang
- College of Computer Science and Technology, Nanjing Forestry University, Nanjing, China
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mingliang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
194
Tufail AB, Ma YK, Zhang QN. Binary Classification of Alzheimer's Disease Using sMRI Imaging Modality and Deep Learning. J Digit Imaging 2020; 33:1073-1090. [PMID: 32728983 PMCID: PMC7573078 DOI: 10.1007/s10278-019-00265-5] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Indexed: 12/28/2022] Open
Abstract
Alzheimer's disease (AD) is an irreversible, devastating neurodegenerative disorder associated with progressive impairment of memory and cognitive functions. Its early diagnosis is crucial for the development of possible future treatment options. Structural magnetic resonance images (sMRI) play an important role in helping to understand the anatomical changes related to AD, especially in its early stages. Conventional methods require the expertise of domain experts, extract hand-picked features such as gray matter substructures, and train a classifier to distinguish AD subjects from healthy subjects. Different from these methods, this paper proposes to construct multiple deep 2D convolutional neural networks (2D-CNNs) to learn various features from local brain images, which are then combined to make the final classification for AD diagnosis. The whole brain image was passed through two transfer learning architectures, Inception version 3 and Xception, as well as a custom Convolutional Neural Network (CNN) built with separable convolutional layers, which can automatically learn generic features from imaging data for classification. Our study is conducted using cross-sectional T1-weighted structural MRI brain images from the Open Access Series of Imaging Studies (OASIS) database to maintain consistent size and contrast over different MRI scans. Experimental results show that the transfer learning approaches exceed the performance of non-transfer-learning-based approaches, demonstrating the effectiveness of these approaches for the binary AD classification task.
Affiliation(s)
- Ahsan Bin Tufail
- Harbin Institute of Technology, Harbin, China
- COMSATS University Islamabad, Sahiwal Campus, Sahiwal, Pakistan
- Yong-Kui Ma
- Harbin Institute of Technology, Harbin, China
195
Xu X, Lian C, Wang S, Wang A, Royce T, Chen R, Lian J, Shen D. Asymmetrical Multi-task Attention U-Net for the Segmentation of Prostate Bed in CT Image. Med Image Comput Comput Assist Interv 2020; 12264:470-479. [PMID: 34179897 PMCID: PMC8221064 DOI: 10.1007/978-3-030-59719-1_46] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Indexed: 01/07/2023]
Abstract
Segmentation of the prostate bed, the residual tissue after removal of the prostate gland, is an essential prerequisite for post-prostatectomy radiotherapy, but also a challenging task due to its non-contrast boundaries and its highly variable shape, which depends on neighboring organs. In this work, we propose a novel deep learning-based method to automatically segment this "invisible target". The main idea of our design is to take reference from the surrounding normal structures (bladder and rectum) and exploit this information to facilitate prostate bed segmentation. To achieve this goal, we first use a U-Net as the backbone network to perform the bladder and rectum segmentation, which serves as a low-level task that can provide references to the high-level task of prostate bed segmentation. Based on the backbone network, we build a novel attention network with a series of cascaded attention modules to further extract discriminative features for the high-level prostate bed segmentation task. Since the attention network has a one-sided dependency on the backbone network, simulating the clinical workflow of using normal structures to guide segmentation of the radiotherapy target, we name the final composite model the asymmetrical multi-task attention U-Net. Extensive experiments on a clinical dataset of 186 CT images demonstrate the effectiveness of this design and the superior performance of the model compared to conventional atlas-based methods for prostate bed segmentation. The source code is publicly available at https://github.com/superxuang/amta-net.
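The core of an attention module that lets the low-level task (bladder and rectum) guide the high-level target can be sketched as a sigmoid gate over the high-level features. The shapes and the gating form below are illustrative assumptions, not the released amta-net code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(high_feat, low_task_feat):
    """Modulate high-level (prostate-bed) features, shape (C, H, W), with a
    spatial attention map derived from low-level task features, shape (H, W)."""
    attn = sigmoid(low_task_feat)           # attention values in (0, 1)
    return high_feat * attn[None, :, :]     # broadcast the map over channels

high = np.ones((2, 2, 2))                   # toy high-level feature maps
low = np.zeros((2, 2))                      # uninformative guidance -> gate 0.5
gated = attention_gate(high, low)
```

Regions the low-level branch marks as anatomically relevant pass through with weights near 1, while the rest are suppressed; cascading several such gates progressively sharpens the features for the boundary-less target.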
Collapse
Affiliation(s)
- Xuanang Xu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Shuai Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrew Wang
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Trevor Royce
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ronald Chen
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
196
Lian C, Wang F, Deng HH, Wang L, Xiao D, Kuang T, Lin HY, Gateno J, Shen SGF, Yap PT, Xia JJ, Shen D. Multi-task Dynamic Transformer Network for Concurrent Bone Segmentation and Large-Scale Landmark Localization with Dental CBCT. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12264:807-816. [PMID: 34935006 PMCID: PMC8687703 DOI: 10.1007/978-3-030-59719-1_78] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Accurate bone segmentation and anatomical landmark localization are essential tasks in computer-aided surgical simulation for patients with craniomaxillofacial (CMF) deformities. To leverage the complementarity between the two tasks, we propose an efficient end-to-end deep network, i.e., the multi-task dynamic transformer network (DTNet), to concurrently segment CMF bones and localize large-scale landmarks in one pass from large volumes of cone-beam computed tomography (CBCT) data. Our DTNet was evaluated quantitatively on CBCT scans of patients with CMF deformities. The results demonstrate that our method outperforms other state-of-the-art methods in both bone segmentation and landmark digitization. Our DTNet features three main technical contributions. First, a collaborative two-branch architecture is designed to efficiently capture both fine-grained image details and complete global context for high-resolution volume-to-volume prediction. Second, leveraging anatomical dependencies between landmarks, regionalized dynamic learners (RDLs) are designed in the spirit of "learning to learn" to jointly regress large-scale 3D heatmaps of all landmarks at limited computational cost. Third, adaptive transformer modules (ATMs) are designed for the flexible learning of task-specific feature embeddings from common feature bases.
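The heatmap regression mentioned in this abstract is a standard technique: each landmark becomes a Gaussian blob rendered in a volume, and the network learns to reproduce that volume. The sketch below (a generic illustration, not the DTNet code; the shape and landmark coordinates are made up) shows how such a target heatmap is generated and how the landmark is read back with an argmax.

```python
import numpy as np

def landmark_heatmap(shape, center, sigma=2.0):
    """Render a 3D Gaussian heatmap peaked at a landmark voxel, the usual
    regression target for heatmap-based landmark localization."""
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    sq_dist = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Toy 16^3 volume with a landmark at voxel (4, 8, 12).
hm = landmark_heatmap((16, 16, 16), (4, 8, 12))
peak = np.unravel_index(np.argmax(hm), hm.shape)  # recovers the landmark
```

At inference time the same argmax (or a soft-argmax) over the predicted heatmap yields the landmark coordinate; regressing blobs rather than raw coordinates makes the loss dense and spatially informative.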
Affiliation(s)
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Fan Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Deqiang Xiao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, USA
- Steve G F Shen
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University, Shanghai, China
- Shanghai University of Medicine and Health Science, Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
197
Zhao X, Zhao XM. Deep learning of brain magnetic resonance images: A brief review. Methods 2020; 192:131-140. [PMID: 32931932 DOI: 10.1016/j.ymeth.2020.09.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 08/22/2020] [Accepted: 09/09/2020] [Indexed: 01/24/2023] Open
Abstract
Magnetic resonance imaging (MRI) is one of the most popular techniques in brain science and is important for understanding brain function and neuropsychiatric disorders. However, the processing and analysis of MRI data is a nontrivial task that poses many challenges. Recently, deep learning has shown superior performance over traditional machine learning approaches in image analysis. In this survey, we give a brief review of recent popular deep learning approaches and their applications in brain MRI analysis. Furthermore, popular brain MRI databases and deep learning tools are introduced. The strengths and weaknesses of different approaches are addressed, and challenges as well as future directions are discussed.
Affiliation(s)
- Xingzhong Zhao
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, China
- Xing-Ming Zhao
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, China; Research Institute of Intelligent Complex Systems, Fudan University, Shanghai 200433, China
198
Jin D, Wang P, Zalesky A, Liu B, Song C, Wang D, Xu K, Yang H, Zhang Z, Yao H, Zhou B, Han T, Zuo N, Han Y, Lu J, Wang Q, Yu C, Zhang X, Zhang X, Jiang T, Zhou Y, Liu Y. Grab-AD: Generalizability and reproducibility of altered brain activity and diagnostic classification in Alzheimer's Disease. Hum Brain Mapp 2020; 41:3379-3391. [PMID: 32364666 PMCID: PMC7375114 DOI: 10.1002/hbm.25023] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2020] [Revised: 03/26/2020] [Accepted: 04/14/2020] [Indexed: 02/06/2023] Open
Abstract
Alzheimer's disease (AD) is associated with disruptions in brain activity and networks. However, there is substantial inconsistency among studies that have investigated functional brain alterations in AD; such contradictions have hindered efforts to elucidate the core disease mechanisms. In this study, we aim to comprehensively characterize AD-associated functional brain alterations using one of the world's largest resting-state functional MRI (fMRI) biobanks for the disorder. The biobank includes fMRI data from six neuroimaging centers, with a total of 252 AD patients, 221 mild cognitive impairment (MCI) patients and 215 healthy comparison individuals. Meta-analytic techniques were used to unveil reliable differences in brain function among the three groups. Relative to the healthy comparison group, AD was associated with significantly reduced functional connectivity and local activity in the default-mode network, basal ganglia and cingulate gyrus, along with increased connectivity or local activity in the prefrontal lobe and hippocampus (p < .05, Bonferroni corrected). Moreover, these functional alterations were significantly correlated with the degree of cognitive impairment (AD and MCI groups) and amyloid-β burden. Machine learning models were trained to recognize key fMRI features to predict individual diagnostic status and clinical score. Leave-one-site-out cross-validation established that diagnostic status (mean area under the receiver operating characteristic curve: 0.85) and clinical score (mean correlation coefficient between predicted and actual Mini-Mental State Examination scores: 0.56, p < .0001) could be predicted with high accuracy. Collectively, our findings highlight the potential for a reproducible and generalizable functional brain imaging biomarker to aid the early diagnosis of AD and track its progression.
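The leave-one-site-out cross-validation used in this study has a simple mechanical core: every fold holds out all subjects from one acquisition site and trains on the rest, so the reported accuracy reflects cross-site generalizability. A minimal sketch of that split logic (the site labels are invented for illustration; scikit-learn's `LeaveOneGroupOut` offers an equivalent, off-the-shelf splitter):

```python
import numpy as np

def leave_one_site_out(sites):
    """Yield (site, train_idx, test_idx) triples, holding out one
    acquisition site per fold to test cross-site generalizability."""
    sites = np.asarray(sites)
    for site in np.unique(sites):
        test = np.flatnonzero(sites == site)   # all subjects from this site
        train = np.flatnonzero(sites != site)  # everyone else
        yield site, train, test

# Six toy subjects scanned at three sites.
sites = ["A", "A", "B", "C", "C", "C"]
folds = list(leave_one_site_out(sites))  # one fold per site
```

Averaging a model's test metric over these folds gives the mean cross-site performance that the abstract reports (e.g., mean AUC 0.85).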
Affiliation(s)
- Dan Jin
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Pan Wang
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin, China
- Andrew Zalesky
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, University of Melbourne and Melbourne Health, Melbourne, Victoria, Australia
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
- Bing Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chengyuan Song
- Department of Neurology, Qilu Hospital of Shandong University, Ji'nan, China
- Dawei Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji'nan, China
- Kaibin Xu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Hongwei Yang
- Department of Radiology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Hongxiang Yao
- Department of Radiology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
- Bo Zhou
- Department of Neurology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
- Tong Han
- Department of Radiology, Tianjin Huanhu Hospital, Tianjin, China
- Nianming Zuo
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Beijing Institute of Geriatrics, Beijing, China
- National Clinical Research Center for Geriatric Disorders, Beijing, China
- Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China
- Jie Lu
- Department of Radiology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Qing Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji'nan, China
- Chunshui Yu
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, China
- Xinqing Zhang
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Xi Zhang
- Department of Neurology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
- Tianzi Jiang
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yuying Zhou
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin, China
- Yong Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
199
Wang M, Lian C, Yao D, Zhang D, Liu M, Shen D. Spatial-Temporal Dependency Modeling and Network Hub Detection for Functional MRI Analysis via Convolutional-Recurrent Network. IEEE Trans Biomed Eng 2020; 67:2241-2252. [PMID: 31825859 PMCID: PMC7439279 DOI: 10.1109/tbme.2019.2957921] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Early identification of dementia at the stage of mild cognitive impairment (MCI) is crucial for timely diagnosis and intervention of Alzheimer's disease (AD). Although several pioneering studies have been devoted to automated AD diagnosis based on resting-state functional magnetic resonance imaging (rs-fMRI), their performance is somewhat limited due to ineffective mining of spatial-temporal dependencies. Moreover, few existing approaches consider the explicit detection and modeling of discriminative brain regions (i.e., network hubs) that are sensitive to AD progression. In this paper, we propose a unique Spatial-Temporal convolutional-recurrent neural Network (STNet) for automated prediction of AD progression and network hub detection from rs-fMRI time series. Our STNet incorporates spatial-temporal information mining and AD-related hub detection into an end-to-end deep learning model. Specifically, we first partition rs-fMRI time series into a sequence of overlapping sliding windows. A sequence of convolutional components is then designed to capture the local-to-global spatially dependent patterns within each sliding window, based on which we can identify discriminative hubs and characterize their unique contributions to disease diagnosis. A recurrent component with long short-term memory (LSTM) units is further employed to model whole-brain temporal dependency across the spatially dependent pattern sequences, thus capturing the temporal dynamics over time. We evaluate the proposed method on 174 subjects with 563 rs-fMRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, with results suggesting the effectiveness of our method in both tasks of disease progression prediction and AD-related hub detection.
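The first preprocessing step the abstract describes, partitioning an rs-fMRI time series into overlapping sliding windows, is easy to make concrete. The sketch below is a generic illustration (not the STNet code; the window length, stride, and toy dimensions are invented):

```python
import numpy as np

def sliding_windows(ts, win_len, stride):
    """Partition a (time, regions) series into overlapping windows,
    producing the sequence fed to the convolutional components."""
    n_windows = (ts.shape[0] - win_len) // stride + 1
    return np.stack([ts[i * stride : i * stride + win_len]
                     for i in range(n_windows)])

# Toy series: 130 fMRI volumes over 90 brain regions.
ts = np.random.default_rng(1).standard_normal((130, 90))
w = sliding_windows(ts, win_len=40, stride=10)  # (windows, time, regions)
```

Each window is then processed by the convolutional components for spatial patterns, and the resulting per-window embeddings form the sequence consumed by the LSTM for temporal modeling.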
Affiliation(s)
- Mingliang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA
- Dongren Yao
- Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
200
Zhao K, Ding Y, Han Y, Fan Y, Alexander-Bloch AF, Han T, Jin D, Liu B, Lu J, Song C, Wang P, Wang D, Wang Q, Xu K, Yang H, Yao H, Zheng Y, Yu C, Zhou B, Zhang X, Zhou Y, Jiang T, Zhang X, Liu Y. Independent and reproducible hippocampal radiomic biomarkers for multisite Alzheimer's disease: diagnosis, longitudinal progress and biological basis. Sci Bull (Beijing) 2020; 65:1103-1113. [PMID: 36659162 DOI: 10.1016/j.scib.2020.04.003] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Revised: 01/31/2020] [Accepted: 03/17/2020] [Indexed: 01/21/2023]
Abstract
Hippocampal morphological change is one of the main hallmarks of Alzheimer's disease (AD). However, whether hippocampal radiomic features are robust as predictors of progression from mild cognitive impairment (MCI) to AD dementia, and whether these features have any neurobiological foundation, remains unclear. The primary aim of this study was to verify whether hippocampal radiomic features can serve as robust magnetic resonance imaging (MRI) markers for AD. Multivariate classifier-based support vector machine (SVM) analysis provided individual-level predictions for distinguishing AD patients (n = 261) from normal controls (NCs; n = 231) with an accuracy of 88.21% under intersite cross-validation. Further analyses of a large, independent Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (n = 1228) reinforced these findings. In the MCI groups, a systematic analysis demonstrated that the identified features were significantly associated with clinical features (e.g., apolipoprotein E (APOE) genotype, polygenic risk scores, cerebrospinal fluid (CSF) Aβ, CSF Tau) and with longitudinal changes in cognitive ability; more importantly, the radiomic features showed a pattern of alteration consistent with changes in Mini-Mental State Examination (MMSE) scores over 5 years of follow-up. These comprehensive results suggest that hippocampal radiomic features can serve as robust biomarkers for clinical application in AD/MCI, and further provide evidence for predicting whether an MCI subject will convert to AD based on the radiomics of the hippocampus. The results of this study are expected to have a substantial impact on the early diagnosis of AD/MCI.
Affiliation(s)
- Kun Zhao
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; School of Information Science and Engineering, Shandong Normal University, Ji'nan 250358, China
- Yanhui Ding
- School of Information Science and Engineering, Shandong Normal University, Ji'nan 250358, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing 100053, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing 100069, China; Beijing Institute of Geriatrics, Beijing 100053, China; National Clinical Research Center for Geriatric Disorders, Beijing 100053, China
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Tong Han
- Department of Radiology, Tianjin Huanhu Hospital, Tianjin 300350, China
- Dan Jin
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Bing Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jie Lu
- Department of Radiology, Xuanwu Hospital of Capital Medical University, Beijing 100053, China
- Chengyuan Song
- Department of Neurology, Qilu Hospital of Shandong University, Ji'nan 250012, China
- Pan Wang
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin 300350, China; Department of Neurology, The Secondary Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing 100853, China
- Dawei Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji'nan 250012, China
- Qing Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji'nan 250012, China
- Kaibin Xu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Hongwei Yang
- Department of Radiology, Xuanwu Hospital of Capital Medical University, Beijing 100053, China
- Hongxiang Yao
- Department of Radiology, The Secondary Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing 100853, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Ji'nan 250358, China
- Chunshui Yu
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052, China
- Bo Zhou
- Department of Neurology, The Secondary Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing 100853, China
- Xinqing Zhang
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing 100053, China
- Yuying Zhou
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin 300350, China
- Tianzi Jiang
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Xi Zhang
- Department of Neurology, The Secondary Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing 100853, China
- Yong Liu
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China