1
Zhong T, Wu X, Liang S, Ning Z, Wang L, Niu Y, Yang S, Kang Z, Feng Q, Li G, Zhang Y. nBEST: Deep-learning-based non-human primates Brain Extraction and Segmentation Toolbox across ages, sites and species. Neuroimage 2024; 295:120652. [PMID: 38797384 DOI: 10.1016/j.neuroimage.2024.120652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Revised: 05/21/2024] [Accepted: 05/22/2024] [Indexed: 05/29/2024] Open
Abstract
Accurate processing and analysis of non-human primate (NHP) brain magnetic resonance imaging (MRI) serves an indispensable role in understanding brain evolution, development, aging, and diseases. Despite the accumulation of diverse NHP brain MRI datasets at various developmental stages and from various imaging sites/scanners, existing computational tools designed for human MRI typically perform poorly on NHP data, due to large differences in brain sizes, morphologies, and imaging appearances across species, sites, and ages, highlighting the need for NHP-specialized MRI processing tools. To address this issue, in this paper, we present a robust, generic, and fully automated computational pipeline, called the non-human primates Brain Extraction and Segmentation Toolbox (nBEST), whose main functionality includes brain extraction, non-cerebrum removal, and tissue segmentation. Building on cutting-edge deep learning techniques, employing lifelong learning to flexibly integrate data from diverse NHP populations, and innovatively constructing a 3D U-NeXt architecture, nBEST can handle structural NHP brain MR images from multiple species, sites, and developmental stages (from neonates to the elderly). We extensively validated nBEST on, to our knowledge, the largest assembled dataset in NHP brain studies, encompassing 1,469 scans of 11 species (e.g., rhesus macaques, cynomolgus macaques, chimpanzees, marmosets, squirrel monkeys, etc.) from 23 independent datasets. Compared to alternative tools, nBEST performs better in precision, applicability, robustness, comprehensiveness, and generalizability, greatly benefiting downstream longitudinal, cross-sectional, and cross-species quantitative analyses. We have made nBEST an open-source toolbox (https://github.com/TaoZhong11/nBEST) and are committed to its continual refinement through lifelong learning with incoming data to contribute to the research field.
Affiliation(s)
- Tao Zhong
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Xueyang Wu
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Shujun Liang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Zhenyuan Ning
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Yuyu Niu
- Yunnan Key Laboratory of Primate Biomedical Research, Institute of Primate Translational Medicine, Kunming University of Science and Technology, Kunming, China
- Shihua Yang
- College of Veterinary Medicine, South China Agricultural University, Guangzhou, China
- Zhuang Kang
- Department of Radiology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA.
- Yu Zhang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China.
2
Liu M, Yu W, Xu D, Wang M, Peng B, Jiang H, Dai Y. Diagnosis for autism spectrum disorder children using T1-based gray matter and arterial spin labeling-based cerebral blood flow network metrics. Front Neurosci 2024; 18:1356241. [PMID: 38694903 PMCID: PMC11061487 DOI: 10.3389/fnins.2024.1356241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Accepted: 03/14/2024] [Indexed: 05/04/2024] Open
Abstract
Introduction Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition characterized by impairments in motor skills, communication, emotional expression, and social interaction. Accurate diagnosis of ASD remains challenging due to the reliance on subjective behavioral observations and assessment scales, lacking objective diagnostic indicators. Methods In this study, we introduced a novel approach for diagnosing ASD, leveraging T1-based gray matter and ASL-based cerebral blood flow network metrics. Thirty preschool-aged patients with ASD and twenty-two typically developing (TD) individuals were enrolled. Brain network features, including gray matter and cerebral blood flow metrics, were extracted from both T1-weighted magnetic resonance imaging (MRI) and ASL images. Feature selection was performed using statistical t-tests and Minimum Redundancy Maximum Relevance (mRMR). A machine learning model based on random vector functional link network was constructed for diagnosis. Results The proposed approach demonstrated a classification accuracy of 84.91% in distinguishing ASD from TD. Key discriminating network features were identified in the inferior frontal gyrus and superior occipital gyrus, regions critical for social and executive functions in ASD patients. Discussion Our study presents an objective and effective approach to the clinical diagnosis of ASD, overcoming the limitations of subjective behavioral observations. The identified brain network features provide insights into the neurobiological mechanisms underlying ASD, potentially leading to more targeted interventions.
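The pipeline above (t-test screening, mRMR selection, then a random vector functional link classifier) is only summarized here, and the authors' code is not part of this listing. The sketch below is a rough, hypothetical illustration of the same family of techniques on synthetic data: features are ranked by a two-sample t-test (standing in for the t-test plus mRMR step) and fed to a minimal RVFL classifier; all names, sizes, and parameters are invented.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

class SimpleRVFL(BaseEstimator, ClassifierMixin):
    """Minimal random vector functional link net: fixed random hidden layer
    plus a direct input link, with a ridge-regularized linear readout."""
    def __init__(self, n_hidden=100, alpha=1.0, random_state=0):
        self.n_hidden, self.alpha, self.random_state = n_hidden, alpha, random_state

    def _features(self, X):
        return np.hstack([X, np.tanh(X @ self.W_ + self.b_)])

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        self.W_ = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b_ = rng.normal(size=self.n_hidden)
        self.readout_ = RidgeClassifier(alpha=self.alpha).fit(self._features(X), y)
        return self

    def predict(self, X):
        return self.readout_.predict(self._features(X))

# Synthetic stand-ins for gray matter / CBF network metrics (30 ASD vs 22 TD).
rng = np.random.default_rng(42)
X = rng.normal(size=(52, 200))
y = np.array([1] * 30 + [0] * 22)
X[y == 1, :10] += 0.8                      # a few weakly discriminative features

# Univariate t-test screening (mRMR would additionally penalize redundancy);
# done on the full sample here for brevity, which would leak information in real use.
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_sel = X[:, np.argsort(p)[:20]]
print("CV accuracy: %.2f" % cross_val_score(SimpleRVFL(), X_sel, y, cv=5).mean())
```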
Affiliation(s)
- Mingyang Liu
- School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun, China
- Weibo Yu
- School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun, China
- Dandan Xu
- Department of Radiology, Affiliated Children’s Hospital of Jiangnan University, Wuxi, China
- Miaoyan Wang
- Department of Radiology, Affiliated Children’s Hospital of Jiangnan University, Wuxi, China
- Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Haoxiang Jiang
- Department of Radiology, Affiliated Children’s Hospital of Jiangnan University, Wuxi, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
3
Pang C, Zhang Y, Xue Z, Bao J, Keong Li B, Liu Y, Liu Y, Sheng M, Peng B, Dai Y. Improving model robustness via enhanced feature representation and sample distribution based on cascaded classifiers for computer-aided diagnosis of brain disease. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
4
Liu S, Zhang Y, Peng B, Pang C, Li M, Zhu J, Liu CF, Hu H. Correlation between parameters related to sarcopenia and gray matter volume in patients with mild to moderate Alzheimer's disease. Aging Clin Exp Res 2022; 34:3041-3053. [PMID: 36121640 DOI: 10.1007/s40520-022-02244-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Accepted: 08/29/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND Alzheimer's disease (AD) is a neurodegenerative disease characterized by brain atrophy and closely correlated with sarcopenia. Mounting studies indicate that parameters related to sarcopenia are associated with AD, but some results are inconsistent. Furthermore, the association between the parameters related to sarcopenia and gray matter volume (GMV) has rarely been explored. AIM To investigate the correlation between parameters related to sarcopenia and cerebral GMV in AD. METHODS Demographics, neuropsychological tests, parameters related to sarcopenia, and magnetic resonance imaging (MRI) scans were collected from 42 patients with AD and 40 normal controls (NC). Parameters related to sarcopenia include appendicular skeletal muscle mass index (ASMI), grip strength, 5-times sit-to-stand (5-STS) time and 6-m gait speed. The GMV of each cerebral region of interest (ROI) and the intracranial volume were calculated by counting the number of voxels in the specific region based on the MRI data. Partial correlation and multivariate stepwise linear regression analyses were used to explore the correlation between different inter-group GMV ratios in ROIs and parameters related to sarcopenia, adjusting for covariates. RESULTS The 82 participants included 40 NC aged 70.13 ± 5.94 years, 24 mild AD patients aged 73.54 ± 8.27 years and 18 moderate AD patients aged 71.67 ± 9.39 years. Multivariate stepwise linear regression showed that 5-STS time and gait speed were correlated with bilateral hippocampus volume ratios in total AD. Grip strength was associated with the GMV ratio of the left middle frontal gyrus in mild AD and the GMV ratios of the right superior temporal gyrus and right hippocampus in moderate AD. However, ASMI was not related to any cerebral GMV ratio. CONCLUSIONS Among the parameters related to sarcopenia, 5-STS time and gait speed were associated with bilateral hippocampus volume ratios at different clinical stages of patients with AD. The 5-STS time provides an objective basis for early screening and can help diagnose patients with AD.
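The regression analysis above is reported only at a high level; a minimal, hypothetical sketch of the core idea, relating a hippocampal GMV ratio to 5-STS time while adjusting for covariates with ordinary least squares, might look like the following. All column names and values are simulated placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 82
# Simulated stand-ins for the study variables (all names are illustrative).
df = pd.DataFrame({
    "hippocampus_gmv_ratio": rng.normal(0.50, 0.05, n),
    "five_sts_time_s": rng.normal(12.0, 3.0, n),
    "gait_speed_mps": rng.normal(1.0, 0.2, n),
    "age": rng.normal(72.0, 8.0, n),
    "sex": rng.integers(0, 2, n),
    "education_years": rng.normal(10.0, 3.0, n),
})

# GMV ratio regressed on 5-STS time with covariates in the model,
# conceptually similar to a partial correlation controlling for them.
X = sm.add_constant(df[["five_sts_time_s", "gait_speed_mps",
                        "age", "sex", "education_years"]])
fit = sm.OLS(df["hippocampus_gmv_ratio"], X).fit()
print(fit.summary())
```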
Affiliation(s)
- Shanwen Liu
- Department of Neurology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Yu Zhang
- School of Life Sciences and Technology, Changchun University of Science and Technology, Changchun, 130012, China
- Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chunying Pang
- School of Life Sciences and Technology, Changchun University of Science and Technology, Changchun, 130012, China
- Meng Li
- Department of Imaging, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Jiangtao Zhu
- Department of Imaging, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Chun-Feng Liu
- Department of Neurology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Hua Hu
- Department of Neurology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China.
5
Ko W, Jung W, Jeon E, Suk HI. A Deep Generative-Discriminative Learning for Multimodal Representation in Imaging Genetics. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2348-2359. [PMID: 35344489 DOI: 10.1109/tmi.2022.3162870] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Imaging genetics, one of the foremost emerging topics in the medical imaging field, analyzes the inherent relations between neuroimaging and genetic data. As deep learning has gained widespread acceptance in many applications, pioneering studies employed deep learning frameworks for imaging genetics. However, existing approaches suffer from some limitations. First, they often adopt a simple strategy for joint learning of phenotypic and genotypic features. Second, their findings have not been extended to biomedical applications, e.g., degenerative brain disease diagnosis and cognitive score prediction. Finally, existing studies perform insufficient and inappropriate analyses from the perspective of data science and neuroscience. In this work, we propose a novel deep learning framework to simultaneously tackle the aforementioned issues. Our proposed framework learns to effectively represent the neuroimaging and the genetic data jointly, and achieves state-of-the-art performance when used for Alzheimer's disease and mild cognitive impairment identification. Furthermore, unlike the existing methods, the framework enables learning the relation between imaging phenotypes and genotypes in a nonlinear way without any prior neuroscientific knowledge. To demonstrate the validity of our proposed framework, we conducted experiments on a publicly available dataset and analyzed the results from diverse perspectives. Based on our experimental results, we believe that the proposed framework has immense potential to provide new insights and perspectives in deep learning-based imaging genetics studies.
6
Han X, Fei X, Wang J, Zhou T, Ying S, Shi J, Shen D. Doubly Supervised Transfer Classifier for Computer-Aided Diagnosis With Imbalanced Modalities. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2009-2020. [PMID: 35171766 DOI: 10.1109/tmi.2022.3152157] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Transfer learning (TL) can effectively improve diagnosis accuracy of single-modal-imaging-based computer-aided diagnosis (CAD) by transferring knowledge from other related imaging modalities, which offers a way to alleviate the small-sample-size problem. However, medical imaging data generally have the following characteristics for the TL-based CAD: 1) The source domain generally has limited data, which increases the difficulty to explore transferable information for the target domain; 2) Samples in both domains often have been labeled for training the CAD model, but the existing TL methods cannot make full use of label information to improve knowledge transfer. In this work, we propose a novel doubly supervised transfer classifier (DSTC) algorithm. In particular, DSTC integrates the support vector machine plus (SVM+) classifier and the low-rank representation (LRR) into a unified framework. The former makes full use of the shared labels to guide the knowledge transfer between the paired data, while the latter adopts the block-diagonal low-rank (BLR) to perform supervised TL between the unpaired data. Furthermore, we introduce the Schatten-p norm for BLR to obtain a tighter approximation to the rank function. The proposed DSTC algorithm is evaluated on the Alzheimer's disease neuroimaging initiative (ADNI) dataset and the bimodal breast ultrasound image (BBUI) dataset. The experimental results verify the effectiveness of the proposed DSTC algorithm.
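For reference, the Schatten-p norm invoked above as a tighter surrogate of the rank function is a standard quantity. For a matrix X with singular values sigma_i(X) it is defined as below; p = 1 recovers the nuclear norm, and the p-th power approaches the rank as p tends to 0 from above. The paper's full block-diagonal low-rank objective is not reproduced here.

```latex
\|X\|_{S_p} = \Big( \sum_{i} \sigma_i(X)^{p} \Big)^{1/p},
\qquad
\|X\|_{S_p}^{p} = \sum_{i} \sigma_i(X)^{p} \;\longrightarrow\; \operatorname{rank}(X)
\quad \text{as } p \to 0^{+}.
```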
7
Vass L, Moore MJ, Hanayik T, Mair G, Pendlebury ST, Demeyere N, Jenkinson M. A Comparison of Cranial Cavity Extraction Tools for Non-contrast Enhanced CT Scans in Acute Stroke Patients. Neuroinformatics 2022; 20:587-598. [PMID: 34490589 PMCID: PMC9547790 DOI: 10.1007/s12021-021-09534-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/14/2021] [Indexed: 12/31/2022]
Abstract
Cranial cavity extraction is often the first step in quantitative neuroimaging analyses. However, few automated, validated extraction tools have been developed for non-contrast enhanced CT scans (NECT). The purpose of this study was to compare and contrast freely available tools in an unseen dataset of real-world clinical NECT head scans in order to assess the performance and generalisability of these tools. This study included data from a demographically representative sample of 428 patients who had completed NECT scans following hospitalisation for stroke. In a subset of the scans (n = 20), the intracranial spaces were segmented using automated tools and compared to the gold standard of manual delineation to calculate accuracy, precision, recall, and Dice similarity coefficient (DSC) values. Further, three readers independently performed regional visual comparisons of the quality of the results in a larger dataset (n = 428). Three tools were found; one of these had unreliable performance, so its evaluation was discontinued. The remaining tools included one adapted from the FMRIB Software Library (fBET) and a convolutional neural network-based tool (rBET). Quantitative comparison showed comparable accuracy, precision, recall and DSC values (fBET: 0.984 ± 0.002; rBET: 0.984 ± 0.003; p = 0.99) between the tools; however, intracranial volume was overestimated. Visual comparisons identified characteristic regional differences in the resulting cranial cavity segmentations. Overall, fBET had the highest visual quality ratings and was preferred by the readers for the majority of subject results (84%). However, both tools produced high quality extractions of the intracranial space and our findings should improve confidence in these automated CT tools. Pre- and post-processing techniques may further improve these results.
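The overlap metrics used in this comparison are standard; a minimal sketch of how Dice, precision and recall can be computed for a pair of binary cranial-cavity masks is shown below. The toy volumes are synthetic placeholders, not the study data.

```python
import numpy as np

def overlap_metrics(pred, ref):
    """Dice, precision and recall for two boolean 3-D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    return 2 * tp / (2 * tp + fp + fn), tp / (tp + fp), tp / (tp + fn)

# Toy stand-ins for an automated and a manual intracranial segmentation.
rng = np.random.default_rng(1)
ref = rng.random((64, 64, 64)) > 0.5
pred = ref.copy()
pred[:2] = ~pred[:2]                       # perturb a couple of slices
print("Dice=%.3f precision=%.3f recall=%.3f" % overlap_metrics(pred, ref))
```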
Affiliation(s)
- L Vass
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK.
- M J Moore
- Department of Experimental Psychology, Radcliffe Observatory Quarter, Oxford, UK
- T Hanayik
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- G Mair
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Neuroradiology, Department of Clinical Neurosciences, NHS Lothian, Edinburgh, UK
- S T Pendlebury
- Wolfson Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Departments of Medicine and Geratology and the NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- N Demeyere
- Department of Experimental Psychology, Radcliffe Observatory Quarter, Oxford, UK
- M Jenkinson
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
8
A survey on artificial intelligence techniques for chronic diseases: open issues and challenges. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10084-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
9
Wen C, Ba H, Pan W, Huang M. Co-sparse reduced-rank regression for association analysis between imaging phenotypes and genetic variants. Bioinformatics 2021; 36:5214-5222. [PMID: 32683450 DOI: 10.1093/bioinformatics/btaa650] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 05/22/2020] [Accepted: 07/14/2020] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION The association analysis between genetic variants and imaging phenotypes must be carried out to understand the inherited neuropsychiatric disorders via imaging genetic studies. Given the high dimensionality in imaging and genetic data, traditional methods based on massive univariate regression entail large computational cost and disregard many-to-many correlations between phenotypes and genetic variants. Several multivariate imaging genetic methods have been proposed to alleviate the above problems. However, most of these methods are based on the l1 penalty, which might cause the over-selection of variables and thus mislead scientists in analyzing data from the field of neuroimaging genetics. RESULTS To address these challenges in both statistics and computation, we propose a novel co-sparse reduced-rank regression model that identifies complex correlations in a dimensional reduction manner. We developed an iterative algorithm based on a group primal dual-active set formulation to detect simultaneously important genetic variants and imaging phenotypes efficiently and precisely via non-convex penalty. The simulation studies showed that our method achieved accurate and stable performance in parameter estimation and variable selection. In real application, the proposed approach successfully detected several novel Alzheimer's disease-related genetic variants and regions of interest, which indicate that our method may be a valuable statistical toolbox for imaging genetic studies. AVAILABILITY AND IMPLEMENTATION The R package csrrr, and the code for experiments in this article is available in Github: https://github.com/hailongba/csrrr. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Canhong Wen
- International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
- Hailong Ba
- International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
- Wenliang Pan
- Department of Statistical Science, School of Mathematics, Sun Yat-Sen University, Guangzhou 510275, China
- Meiyan Huang
- School of Biomedical Engineering, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
10
Huang M, Lai H, Yu Y, Chen X, Wang T, Feng Q. Deep-gated recurrent unit and diet network-based genome-wide association analysis for detecting the biomarkers of Alzheimer's disease. Med Image Anal 2021; 73:102189. [PMID: 34343841 DOI: 10.1016/j.media.2021.102189] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 05/30/2021] [Accepted: 07/16/2021] [Indexed: 01/01/2023]
Abstract
Genome-wide association analysis (GWAS) is a commonly used method to detect the potential biomarkers of Alzheimer's disease (AD). Most existing GWAS methods entail a high computational cost, disregard correlations among imaging data and correlations among genetic data, and ignore various associations between longitudinal imaging and genetic data. A novel GWAS method was proposed to identify potential AD biomarkers and address these problems. A network based on a gated recurrent unit was applied without imputing incomplete longitudinal imaging data to integrate the longitudinal data of variable lengths and extract an image representation. In this study, a modified diet network that can considerably reduce the number of parameters in the genetic network was proposed to perform GWAS between image representation and genetic data. Genetic representation can be extracted in this way. A link between genetic representation and AD was established to detect potential AD biomarkers. The proposed method was tested on a set of simulated data and a real AD dataset. Results of the simulated data showed that the proposed method can accurately detect relevant biomarkers. Moreover, the results of real AD dataset showed that the proposed method can detect some new risk-related genes of AD. Based on previous reports, no research has incorporated a deep-learning model into a GWAS framework to investigate the potential information on super-high-dimensional genetic data and longitudinal imaging data and create a link between imaging genetics and AD for detecting potential AD biomarkers. Therefore, the proposed method may provide new insights into the underlying pathological mechanism of AD.
Affiliation(s)
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Yuwei Yu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
11
Alzheimer's disease diagnosis framework from incomplete multimodal data using convolutional neural networks. J Biomed Inform 2021; 121:103863. [PMID: 34229061 DOI: 10.1016/j.jbi.2021.103863] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 06/09/2021] [Accepted: 07/01/2021] [Indexed: 11/23/2022]
Abstract
Alzheimer's disease (AD) is a severe irreversible neurodegenerative disease that causes great suffering for patients and eventually leads to death. Early detection of AD and its prodromal stage, mild cognitive impairment (MCI), which can be either stable (sMCI) or progressive (pMCI), is highly desirable for effective treatment planning and tailoring therapy. Recent studies recommended using multimodal data fusion of genetic (single nucleotide polymorphisms, SNPs) and neuroimaging data (magnetic resonance imaging (MRI) and positron emission tomography (PET)) to discriminate AD/MCI from normal control (NC) subjects. However, missing multimodal data in the cohort under study is inevitable. In addition, data heterogeneity between phenotype and genotype biomarkers makes the learning capability of the models more challenging. Also, current studies mainly focus on brain disease classification and ignore the regression task. Furthermore, they use multistage approaches to predict brain disease progression. To address these issues, we propose a novel multimodal neuroimaging and genetic data fusion method for joint classification and clinical score regression that uses the maximum number of available samples in one unified convolutional neural network (CNN) framework. Specifically, we initially perform a technique based on linear interpolation to fill the missing features for each incomplete sample. Then, we learn the features from MRI, PET, and SNPs using the CNN to alleviate the heterogeneity between genotype and phenotype data. Meanwhile, the high-level learned features from each modality are combined for jointly identifying brain diseases and predicting clinical scores. To validate the performance of the proposed method, we test our method on 805 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Also, we verify the similarity between the synthetic and real data using statistical analysis. Moreover, the experimental results demonstrate that the proposed method can yield better performance in both classification and regression tasks. Specifically, our proposed method achieves accuracy of 98.22%, 93.11%, and 97.35% for NC vs. AD, NC vs. sMCI, and NC vs. pMCI, respectively. On the other hand, our method attains the lowest root mean square error and the highest correlation coefficient for different clinical score regression tasks compared with the state-of-the-art methods.
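The imputation step is described above only as "linear interpolation". One simple reading of that idea, filling each missing value by interpolating between the valid neighbours along a feature column, is sketched below; this is a generic illustration under that assumption, not the authors' exact procedure, and all arrays are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))                       # subjects x features (synthetic)
X[np.unravel_index(rng.choice(X.size, 15, replace=False), X.shape)] = np.nan

def interpolate_column(col):
    """Linear interpolation of NaNs between valid entries; edge gaps
    take the nearest valid value (np.interp's boundary behaviour)."""
    idx = np.arange(col.size)
    ok = ~np.isnan(col)
    return np.interp(idx, idx[ok], col[ok])

X_filled = np.apply_along_axis(interpolate_column, 0, X)
print(np.isnan(X).sum(), "missing before,", np.isnan(X_filled).sum(), "after")
```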
12
Huang M, Chen X, Yu Y, Lai H, Feng Q. Imaging Genetics Study Based on a Temporal Group Sparse Regression and Additive Model for Biomarker Detection of Alzheimer's Disease. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1461-1473. [PMID: 33556003 DOI: 10.1109/tmi.2021.3057660] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Imaging genetics is an effective tool used to detect potential biomarkers of Alzheimer's disease (AD) in imaging and genetic data. Most existing imaging genetics methods analyze the association between brain imaging quantitative traits (QTs) and genetic data [e.g., single nucleotide polymorphism (SNP)] by using a linear model, ignoring correlations between a set of QTs and SNP groups, and disregarding the varied associations between longitudinal imaging QTs and SNPs. To solve these problems, we propose a novel temporal group sparsity regression and additive model (T-GSRAM) to identify associations between longitudinal imaging QTs and SNPs for detection of potential AD biomarkers. We first construct a nonparametric regression model to analyze the nonlinear association between QTs and SNPs, which can accurately model the complex influence of SNPs on QTs. We then use longitudinal QTs to identify the trajectory of imaging genetic patterns over time. Moreover, the SNP information of group and individual levels are incorporated into the proposed method to boost the power of biomarker detection. Finally, we propose an efficient algorithm to solve the whole T-GSRAM model. We evaluated our method using simulation data and real data obtained from AD neuroimaging initiative. Experimental results show that our proposed method outperforms several state-of-the-art methods in terms of the receiver operating characteristic curves and area under the curve. Moreover, the detection of AD-related genes and QTs has been confirmed in previous studies, thereby further verifying the effectiveness of our approach and helping understand the genetic basis over time during disease progression.
13
Hedayati R, Khedmati M, Taghipour-Gorjikolaie M. Deep feature extraction method based on ensemble of convolutional auto encoders: Application to Alzheimer’s disease diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102397] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
14
15
He B, Yang Z, Fan L, Gao B, Li H, Ye C, You B, Jiang T. MonkeyCBP: A Toolbox for Connectivity-Based Parcellation of Monkey Brain. Front Neuroinform 2020; 14:14. [PMID: 32410977 PMCID: PMC7198896 DOI: 10.3389/fninf.2020.00014] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Accepted: 03/10/2020] [Indexed: 01/24/2023] Open
Abstract
Non-human primate models are widely used in studying the brain mechanisms underlying brain development, cognitive functions, and psychiatric disorders. Neuroimaging techniques, such as magnetic resonance imaging, play an important role in the examination of brain structure and function. As an indispensable tool for brain imaging data analysis, brain atlases have been extensively investigated, and a variety of versions constructed. These atlases diverge in the criteria on which they are based. The criteria range from cytoarchitectonic features, neurotransmitter receptor distributions, myelination fingerprints, and transcriptomic patterns to structural and functional connectomic profiles. Among them, the brainnetome atlas is tightly related to brain connectome information and built by parcellating the brain on the basis of the anatomical connectivity profiles derived from structural neuroimaging data. The pipeline for building the brainnetome atlas has been published as a toolbox named ATPP (A Pipeline for Automatic Tractography-Based Brain Parcellation). In this paper, we present a variation of ATPP, which is dedicated to monkey brain parcellation, to address the significant differences in the process between the two species. The new toolbox, MonkeyCBP, has major alterations in three aspects: brain extraction, image registration, and validity indices. By parcellating two different brain regions of the rhesus monkey (the posterior cingulate cortex and the frontal pole), we demonstrate the efficacy of these alterations. The toolbox has been made public (https://github.com/bheAI/MonkeyCBP_CLI, https://github.com/bheAI/MonkeyCBP_GUI). It is expected that the toolbox can benefit the non-human primate neuroimaging community with high-throughput computation and low labor involvement.
Affiliation(s)
- Bin He
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, China
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Zhengyi Yang
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Lingzhong Fan
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Bin Gao
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Hai Li
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chuyang Ye
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Bo You
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, China
- Tianzi Jiang
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Key Laboratory for NeuroInformation of the Ministry of Education, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD, Australia
- Chinese Institute for Brain Research, Beijing, China
16
Abstract
The extraction of brain tissue from brain MRI images is an important preprocessing step for neuroimaging analyses. The brain is bilaterally symmetric in both the coronal plane and the transverse plane, but is usually asymmetric in the sagittal plane. To address the over-smoothness, boundary leakage, local convergence and asymmetry problems in many popular methods, we developed a brain extraction method using an active contour neighborhood-based graph cuts model. The method defined a new asymmetric assignment of edge weights in graph cuts for brain MRI images. The new graph cuts model was performed iteratively in the neighborhood of the brain boundary, named the active contour neighborhood (ACN), and was effective in eliminating boundary leakage and avoiding local convergence. The method was compared with other popular methods on the Internet Brain Segmentation Repository (IBSR) and OASIS data sets. In tests across the IBSR data set (18 scans with 1.5 mm slice thickness), the IBSR data set (20 scans with 3.1 mm slice thickness) and the OASIS data set (77 scans with 1 mm slice thickness), the mean Dice similarity coefficients obtained by the proposed method were 0.957 ± 0.013, 0.960 ± 0.009 and 0.936 ± 0.018, respectively. The results obtained by the proposed method are very similar to manual segmentation, and the method achieved the best mean Dice similarity coefficient on the IBSR data. Our experiments indicate that the proposed method can provide competitively accurate results and may obtain brain tissue with a sharp brain boundary from brain MRI images.
17
Zhou T, Thung KH, Liu M, Shi F, Zhang C, Shen D. Multi-modal latent space inducing ensemble SVM classifier for early dementia diagnosis with neuroimaging data. Med Image Anal 2020; 60:101630. [PMID: 31927474 PMCID: PMC8260095 DOI: 10.1016/j.media.2019.101630] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 12/15/2019] [Accepted: 12/19/2019] [Indexed: 12/21/2022]
Abstract
Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities can provide complementary perspectives on complex neurodegenerative diseases. However, there are at least four common issues associated with the existing fusion methods. First, many existing fusion methods simply concatenate features from each modality without considering the correlations among different modalities. Second, most existing methods often make predictions based on a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many existing methods often employ feature selection (or reduction) and classifier training in two independent steps, without considering the fact that the two pipelined steps are highly related to each other. Fourth, there are missing neuroimaging data for some of the participants (e.g., missing PET data), due to the participants' "no-show" or dropout. In this paper, to address the above issues, we propose an early AD diagnosis framework via a novel multi-modality latent space inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and also an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset have demonstrated that our proposed models outperform other state-of-the-art methods.
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates.
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Feng Shi
- United Imaging Intelligence, Shanghai, China.
- Changqing Zhang
- School of Computer Science and Technology, Tianjin University, Tianjin 300072, China.
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
18
Yu X, Peng B, Xue Z, Rad HS, Cai Z, Shi J, Zhu J, Dai Y. Analyzing brain structural differences associated with categories of blood pressure in adults using empirical kernel mapping-based kernel ELM. Biomed Eng Online 2019; 18:124. [PMID: 31881897 PMCID: PMC6935092 DOI: 10.1186/s12938-019-0740-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2019] [Accepted: 12/06/2019] [Indexed: 01/23/2023] Open
Abstract
BACKGROUND Hypertension increases the risk of angiocardiopathy and cognitive disorders. Blood pressure has four categories: normal, elevated, hypertension stage 1 and hypertension stage 2. The quantitative analysis of hypertension helps determine disease status and supports prognosis assessment, guidance and management, but has not been well studied in a machine learning framework. METHODS We proposed an empirical kernel mapping-based kernel extreme learning machine plus (EKM-KELM+) classifier to discriminate different blood pressure grades in adults from structural brain MR images. ELM+ is an extended version of ELM that integrates additional privileged information about the training samples to help train a more effective classifier. In this work, we extracted gray matter volume (GMV), white matter volume, cerebrospinal fluid volume, cortical surface area, and cortical thickness from structural brain MR images, and constructed brain network features based on thickness. After feature selection and EKM, the enhanced features are obtained. Then, we select one feature type as the main feature to feed into KELM+, and the remaining feature types serve as privileged information (PI) to assist the main feature in training 5 KELM+ classifiers. Finally, the 5 KELM+ classifiers are ensembled to predict the classification result in the test stage, while PI is not used during testing. RESULTS We evaluated the performance of the proposed EKM-KELM+ method using the four grades of hypertension data (73 samples for each grade). The experimental results show that the GMV performs observably better than any other feature type, with comparatively higher classification accuracies of 77.37% (Grade 1 vs. Grade 2), 93.19% (Grade 1 vs. Grade 3), and 95.15% (Grade 1 vs. Grade 4). The most discriminative brain regions found using our method are olfactory, orbitofrontal cortex (inferior), supplementary motor area, etc. CONCLUSIONS Using region of interest features and brain network features, EKM-KELM+ is proposed to study the most discriminative regions that show obvious structural changes across different blood pressure grades. The discriminative features selected using our method are consistent with existing neuroimaging studies. Moreover, our study provides a potential approach for taking effective interventions in the early period, when blood pressure has only a minor impact on brain structure and function.
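KELM+ extends a kernel extreme learning machine with privileged information, which is more than a short snippet can show. As a hedged reference point for the classifier family, the sketch below implements only a plain kernel ELM with the standard closed-form readout beta = (K + I/C)^-1 T, on synthetic two-grade data; the class, parameters and data are all illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

class KernelELM:
    """Minimal kernel extreme learning machine for binary classification."""
    def __init__(self, C=1.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_ = X
        T = np.eye(2)[y]                                   # one-hot targets
        K = rbf_kernel(X, X, gamma=self.gamma)
        self.beta_ = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        return (rbf_kernel(X, self.X_, gamma=self.gamma) @ self.beta_).argmax(axis=1)

# Synthetic stand-ins for features of two blood-pressure grades (73 each).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (73, 30)), rng.normal(0.7, 1.0, (73, 30))])
y = np.array([0] * 73 + [1] * 73)
idx = rng.permutation(len(y))
train, test = idx[:100], idx[100:]
clf = KernelELM(C=10.0, gamma=0.05).fit(X[train], y[train])
print("test accuracy:", (clf.predict(X[test]) == y[test]).mean())
```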
Affiliation(s)
- Xinying Yu
- Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, Jiangsu, China
- Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, Jiangsu, China
- Zeyu Xue
- Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, Jiangsu, China
- Hamidreza Saligheh Rad
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, Jiangsu, China
- Quantitative Medical Imaging Systems Group, Research Center for Molecular and Cellular Imaging, Institute for Advanced Medical Technologies and Devices, Tehran University of Medical Sciences, Tehran, Iran
- Zhenlin Cai
- The Affiliated Suzhou Science & Technology Town Hospital of Nanjing Medical University, Suzhou, Jiangsu, China
- Suzhou Science & Technology Town Hospital, Suzhou, 215153, Jiangsu, China
- Jun Shi
- Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jianbing Zhu
- The Affiliated Suzhou Science & Technology Town Hospital of Nanjing Medical University, Suzhou, Jiangsu, China.
- Suzhou Science & Technology Town Hospital, Suzhou, 215153, Jiangsu, China.
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, Jiangsu, China.
- Suzhou Key Laboratory of Medical and Health Information Technology, Suzhou, China.
- Nanjing Guoke Medical Engineering Technology Development Co., Ltd, Nanjing, China.
- Jinan Guoke Medical Engineering Technology Development Co., Ltd, Jinan, China.
19
Huang M, Yu Y, Yang W, Feng Q. Incorporating spatial-anatomical similarity into the VGWAS framework for AD biomarker detection. Bioinformatics 2019; 35:5271-5280. [PMID: 31095298 PMCID: PMC6954655 DOI: 10.1093/bioinformatics/btz401] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2018] [Revised: 04/03/2019] [Accepted: 05/07/2019] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION The detection of potential biomarkers of Alzheimer's disease (AD) is crucial for its early prediction, diagnosis and treatment. Voxel-wise genome-wide association study (VGWAS) is a commonly used method in imaging genomics and usually applied to detect AD biomarkers in imaging and genetic data. However, existing VGWAS methods entail large computational cost and disregard spatial correlations within imaging data. A novel method is proposed to solve these issues. RESULTS We introduce a novel method to incorporate spatial correlations into a VGWAS framework for the detection of potential AD biomarkers. To consider the characteristics of AD, we first present a modification of a simple linear iterative clustering method for spatial grouping in an anatomically meaningful manner. Second, we propose a spatial-anatomical similarity matrix to incorporate correlations among voxels. Finally, we detect the potential AD biomarkers from imaging and genetic data by using a fast VGWAS method and test our method on 708 subjects obtained from an Alzheimer's Disease Neuroimaging Initiative dataset. Results show that our method can successfully detect some new risk genes and clusters of AD. The detected imaging and genetic biomarkers are used as predictors to classify AD/normal control subjects, and a high accuracy of AD/normal control classification is achieved. To the best of our knowledge, the association between imaging and genetic data has yet to be systematically investigated while building statistical models for classifying AD subjects to create a link between imaging genetics and AD. Therefore, our method may provide a new way to gain insights into the underlying pathological mechanism of AD. AVAILABILITY AND IMPLEMENTATION https://github.com/Meiyan88/SASM-VGWAS.
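The fast VGWAS with a spatial-anatomical similarity matrix is specific to this work. As a baseline illustration of the underlying mass-univariate idea, the sketch below runs a vectorized per-voxel linear regression of simulated quantitative traits on a single SNP coded as minor-allele counts; all data and names are simulated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subj, n_voxel = 708, 5000
snp = rng.integers(0, 3, n_subj).astype(float)      # minor-allele counts 0/1/2
qts = rng.normal(size=(n_subj, n_voxel))            # voxel-wise quantitative traits
qts[:, :50] += 0.2 * snp[:, None]                   # plant a weak true association

# Vectorized simple linear regression of every voxel QT on the SNP.
x = snp - snp.mean()
y = qts - qts.mean(axis=0)
beta = (x @ y) / (x @ x)                            # per-voxel slope
resid = y - np.outer(x, beta)
se = np.sqrt((resid ** 2).sum(axis=0) / (n_subj - 2) / (x @ x))
pvals = 2 * stats.t.sf(np.abs(beta / se), df=n_subj - 2)
print("voxels passing Bonferroni:", (pvals < 0.05 / n_voxel).sum())
```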
Affiliation(s)
- Meiyan Huang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yuwei Yu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Wei Yang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
20
Zhou T, Liu M, Thung KH, Shen D. Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2411-2422. [PMID: 31021792 PMCID: PMC8034601 DOI: 10.1109/tmi.2019.2913158] [Citation(s) in RCA: 82] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality based AD diagnostic models are often hindered by the missing data, i.e., not all the subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain region), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
21
Shi J, Xue Z, Dai Y, Peng B, Dong Y, Zhang Q, Zhang Y. Cascaded Multi-Column RVFL+ Classifier for Single-Modal Neuroimaging-Based Diagnosis of Parkinson's Disease. IEEE Trans Biomed Eng 2019; 66:2362-2371. [DOI: 10.1109/tbme.2018.2889398] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
22
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. Alongside this, we briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA.
23
Li Y, Meng F, Shi J. Learning using privileged information improves neuroimaging-based CAD of Alzheimer's disease: a comparative study. Med Biol Eng Comput 2019; 57:1605-1616. [PMID: 31028606 DOI: 10.1007/s11517-019-01974-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2018] [Accepted: 03/19/2019] [Indexed: 12/26/2022]
Abstract
The neuroimaging-based computer-aided diagnosis (CAD) for Alzheimer's disease (AD) has shown its effectiveness in recent years. In general, the multimodal neuroimaging-based CAD always outperforms the approaches based on a single modality. However, single-modal neuroimaging is more favored in clinical practice for diagnosis due to the limitations of imaging devices, especially in rural hospitals. Learning using privileged information (LUPI) is a new learning paradigm that adopts additional privileged information (PI) modality to help to train a more effective learning model during the training stage, but PI itself is not available in the testing stage. Since PI is generally related to the training samples, it is then transferred to the learned model. In this work, a LUPI-based CAD framework for AD is proposed. It can flexibly perform a classifier- or feature-level LUPI, in which the information is transferred from the additional PI modality to the diagnosis modality. A thorough comparison has been made among three classifier-level algorithms and five feature-level LUPI algorithms. The experimental results on the ADNI dataset show that all classifier-level and deep learning based feature-level LUPI algorithms can improve the performance of a single-modal neuroimaging-based CAD for AD by transferring PI. Graphical abstract Graphical abstract for the framework of the LUPI-based CAD for AD.
Affiliation(s)
- Yan Li
- Shenzhen City Key Laboratory of Embedded System Design, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- Fanqing Meng
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Road, Shanghai, 200444, People's Republic of China
| | - Jun Shi
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Road, Shanghai, 200444, People's Republic of China.
| |
Collapse
|
24
|
Zhou T, Thung KH, Liu M, Shen D. Brain-Wide Genome-Wide Association Study for Alzheimer's Disease via Joint Projection Learning and Sparse Regression Model. IEEE Trans Biomed Eng 2019; 66:165-175. [PMID: 29993426 PMCID: PMC6342004 DOI: 10.1109/tbme.2018.2824725] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
A brain-wide and genome-wide association (BW-GWA) study is presented in this paper to identify the associations between the brain imaging phenotypes (i.e., regional volumetric measures) and the genetic variants [i.e., single nucleotide polymorphisms (SNPs)] in Alzheimer's disease (AD). The main challenges of this study include the data heterogeneity, complex phenotype-genotype associations, high-dimensional data (e.g., thousands of SNPs), and the existence of phenotype outliers. Previous BW-GWA studies, while addressing some of these challenges, did not consider the diagnostic label information in their formulations, thus limiting their clinical applicability. To address these issues, we present a novel joint projection and sparse regression model to discover the associations between the phenotypes and genotypes. Specifically, to alleviate the negative influence of data heterogeneity, we first map the genotypes into an intermediate imaging-phenotype-like space. Then, to better reveal the complex phenotype-genotype associations, we project both the mapped genotypes and the original imaging phenotypes into a diagnostic-label-guided joint feature space, where the intraclass projected points are constrained to be close to each other. In addition, we use ℓ2,1-norm minimization on both the regression loss function and the transformation coefficient matrices, to reduce the effect of phenotype outliers and also to encourage sparse feature selection of both the genotypes and phenotypes. We evaluate our method using the AD neuroimaging initiative (ADNI) dataset, and the results show that our proposed method outperforms several state-of-the-art methods in terms of the average root-mean-square error of genome-to-phenotype predictions. Moreover, the associated SNPs and brain regions identified in this study have also been reported in previous AD-related studies, thus verifying the effectiveness and potential of our proposed method for the study of AD pathogenesis.
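The ℓ2,1-norm penalty mentioned above can be minimized with a simple proximal-gradient (ISTA) loop whose proximal step is row-wise soft-thresholding. The sketch below shows only that ingredient on synthetic genotype/phenotype matrices; the paper's joint projection into a label-guided space is not reproduced, and all dimensions and the regularization weight are placeholder assumptions.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: shrink each row toward zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * W

def l21_regression(X, Y, lam=0.5, n_iter=500):
    """Minimize (1/2n)||XW - Y||_F^2 + lam * ||W||_{2,1} by proximal gradient."""
    n = len(X)
    L = np.linalg.norm(X, 2) ** 2 / n                  # Lipschitz constant of the gradient
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n
        W = prox_l21(W - grad / L, lam / L)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 300))                        # 150 subjects x 300 SNPs
W_true = np.zeros((300, 20))
W_true[:5] = rng.normal(size=(5, 20))                  # only 5 SNPs truly associated
Y = X @ W_true + 0.1 * rng.normal(size=(150, 20))      # 20 regional imaging phenotypes

W_hat = l21_regression(X, Y)
print("selected SNP rows:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))
```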
Collapse
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
| | - Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
| | - Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| |
Collapse
|
25
|
Huang M, Deng C, Yu Y, Lian T, Yang W, Feng Q. Spatial correlations exploitation based on nonlocal voxel-wise GWAS for biomarker detection of AD. Neuroimage Clin 2018; 21:101642. [PMID: 30584014 PMCID: PMC6413305 DOI: 10.1016/j.nicl.2018.101642] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 11/19/2018] [Accepted: 12/10/2018] [Indexed: 02/05/2023]
Abstract
Potential biomarker detection is a crucial area of study for the prediction, diagnosis, and monitoring of Alzheimer's disease (AD). The voxelwise genome-wide association study (vGWAS) is widely used in imaging genomics and is usually applied to the detection of AD biomarkers in both imaging and genetic data. However, performing vGWAS remains a challenge because of the computational complexity of the technique and the neglect of spatial correlations within the imaging data. In this paper, we propose a novel method based on the exploitation of spatial correlations that may help to detect potential AD biomarkers using a fast vGWAS. To incorporate spatial correlations, we applied a nonlocal method that assumes a given voxel can be represented as a weighted sum of the other voxels. Three commonly used weighting methods were adopted to calculate the weights among different voxels in this study. Then, a fast vGWAS approach was used to assess the association between the image and the genetic data. The proposed method was evaluated using both simulated and real data. In the simulation studies, we designed a set of experiments to evaluate the effectiveness of the nonlocal method for incorporating spatial correlations in vGWAS. The experiments showed that incorporating spatial correlations by the nonlocal method could improve the detection accuracy of AD biomarkers. For real data, we successfully identified three genes, namely, ANK3, MEIS2, and TLR4, which have significant associations with mental retardation, learning disabilities and age according to previous research. These genes have profound impacts on AD or other neurodegenerative diseases. Our results indicated that our method might be an effective and valuable tool for detecting potential biomarkers of AD.
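A rough sketch of the two ingredients described above, on synthetic data: each voxel's phenotype is replaced by a similarity-weighted sum over the other voxels (a nonlocal weighting), and a fast per-SNP association scan is then run via Pearson correlation. The Gaussian kernel and all array sizes are illustrative assumptions, not the paper's three weighting schemes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_vox, n_snp = 100, 50, 200
img = rng.normal(size=(n_subj, n_vox))                 # voxel-wise imaging phenotypes
snps = rng.integers(0, 3, size=(n_subj, n_snp)).astype(float)

# Nonlocal weights from voxel-profile similarity (Gaussian kernel, self-weight zero).
d = np.linalg.norm(img.T[:, None, :] - img.T[None, :, :], axis=2)
W = np.exp(-d ** 2 / (2 * d.std() ** 2))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
img_nl = img @ W.T                                     # spatially regularized phenotypes

# Fast vGWAS: correlation between every SNP and every (regularized) voxel.
Xs = (snps - snps.mean(0)) / snps.std(0)
Ys = (img_nl - img_nl.mean(0)) / img_nl.std(0)
r = Xs.T @ Ys / n_subj
t = r * np.sqrt((n_subj - 2) / (1.0 - r ** 2))
p = 2 * stats.t.sf(np.abs(t), df=n_subj - 2)
print("strongest SNP-voxel pair (indices):", np.unravel_index(p.argmin(), p.shape))
```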
Collapse
Affiliation(s)
- Meiyan Huang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Chunyan Deng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
| | - Yuwei Yu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Tao Lian
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Wei Yang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China.
| |
Collapse
|
26
|
Zhou T, Thung KH, Zhu X, Shen D. Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Hum Brain Mapp 2018; 40:1001-1016. [PMID: 30381863 DOI: 10.1002/hbm.24428] [Citation(s) in RCA: 111] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Revised: 09/04/2018] [Accepted: 10/03/2018] [Indexed: 12/13/2022] Open
Abstract
In this article, the authors aim to maximally utilize multimodality neuroimaging and genetic data for identifying Alzheimer's disease (AD) and its prodromal status, Mild Cognitive Impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., with different data distributions) and have different numbers of samples (e.g., far fewer PET samples than MRI or SNP samples). Thus, learning an effective model using these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, where a deep neural network is trained stage-wise. Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed, and high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pairwise modality combination by using the high-level features learned from the first stage. In the third stage, we learn the diagnostic labels by fusing the learned joint latent features from the second stage. To further increase the number of samples during training, we also use data at multiple scanning time points for each training subject in the dataset. We evaluate the proposed framework using the Alzheimer's disease neuroimaging initiative (ADNI) dataset for AD diagnosis, and the experimental results show that the proposed framework outperforms other state-of-the-art methods.
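The staged training strategy (per-modality feature learning on all available samples of each modality, then fusion on the overlapping subset) can be illustrated with a compact PyTorch sketch. Layer sizes, epochs, the linear-probe stand-in for latent feature learning, and the two-modality setting are placeholder assumptions; the paper's three-stage architecture over MRI, PET and SNPs is not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_mri, n_pet = 300, 120                                # PET exists for fewer subjects
y = torch.randint(0, 2, (n_mri,))
mri = y[:, None].float() + torch.randn(n_mri, 90)
pet = y[:n_pet, None].float() + torch.randn(n_pet, 90)

def train(model, x, target, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()
        opt.step()

# Stage 1: one encoder per modality, each trained on all of its own samples.
enc_mri = nn.Sequential(nn.Linear(90, 32), nn.ReLU())
enc_pet = nn.Sequential(nn.Linear(90, 32), nn.ReLU())
train(nn.Sequential(enc_mri, nn.Linear(32, 2)), mri, y)
train(nn.Sequential(enc_pet, nn.Linear(32, 2)), pet, y[:n_pet])

# Stage 2/3: freeze the high-level features, fuse them for subjects having
# both modalities, and train a joint classifier on top.
with torch.no_grad():
    joint = torch.cat([enc_mri(mri[:n_pet]), enc_pet(pet)], dim=1)
clf = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
train(clf, joint, y[:n_pet])
print("fused logits shape:", clf(joint).shape)
```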
Collapse
Affiliation(s)
- Tao Zhou
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
| | - Kim-Han Thung
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
| | - Xiaofeng Zhu
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
| | - Dinggang Shen
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina.,Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
| |
Collapse
|
27
|
Wang X, Zhen X, Li Q, Shen D, Huang H. Cognitive Assessment Prediction in Alzheimer's Disease by Multi-Layer Multi-Target Regression. Neuroinformatics 2018; 16:285-294. [PMID: 29802511 PMCID: PMC6378694 DOI: 10.1007/s12021-018-9381-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
Abstract
Accurate and automatic prediction of cognitive assessment from multiple neuroimaging biomarkers is crucial for early detection of Alzheimer's disease. The major challenges arise from the nonlinear relationship between biomarkers and assessment scores and the inter-correlation among them, which have not yet been well addressed. In this paper, we propose multi-layer multi-target regression (MMR), which enables simultaneous modeling of intrinsic inter-target correlations and nonlinear input-output relationships in a general compositional framework. Specifically, by kernelized dictionary learning, the MMR can effectively handle the highly nonlinear relationship between biomarkers and assessment scores; by robust low-rank linear learning via matrix elastic nets, the MMR can explicitly encode inter-correlations among multiple assessment scores; moreover, the MMR is flexible and can work with the non-smooth ℓ2,1-norm loss function, which enables calibration of multiple targets with disparate noise levels for more robust parameter estimation. The MMR can be efficiently solved by an alternating optimization algorithm via gradient descent with guaranteed convergence. The MMR has been evaluated by extensive experiments on the ADNI database with MRI data, and produced high accuracy surpassing previous regression models, which demonstrates its great effectiveness as a new multi-target regression model for clinical multivariate prediction.
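One piece of the model above, the low-rank coupling of multiple assessment scores, can be illustrated with classical reduced-rank regression, where the coefficient matrix linking imaging biomarkers to several scores is constrained to a chosen rank. This sketch omits the kernelized dictionary learning and the ℓ2,1 loss; the synthetic data and the chosen rank are placeholder assumptions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least squares followed by projection of the fit onto its top components."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]
    return B_ols @ P

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))                         # neuroimaging biomarkers
B_true = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 5))   # rank-2 ground truth
Y = X @ B_true + 0.1 * rng.normal(size=(120, 5))       # five cognitive assessment scores

B_hat = reduced_rank_regression(X, Y, rank=2)
print("rank of estimated coefficient matrix:", np.linalg.matrix_rank(B_hat, tol=1e-8))
```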
Collapse
Affiliation(s)
- Xiaoqian Wang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15263, USA
| | - Xiantong Zhen
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15263, USA
| | - Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
| | - Dinggang Shen
- Radiology and BRIC, UNC-CH School of Medicine, 130 Mason Farm Road, Chapel Hill, NC 27599, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15263, USA
| |
Collapse
|
28
|
Ou Y, Zöllei L, Da X, Retzepi K, Murphy SN, Gerstner ER, Rosen BR, Grant PE, Kalpathy-Cramer J, Gollub RL. Field of View Normalization in Multi-Site Brain MRI. Neuroinformatics 2018; 16:431-444. [PMID: 29353341 PMCID: PMC7334884 DOI: 10.1007/s12021-018-9359-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Multi-site brain MRI analysis is needed in big data neuroimaging studies, but it is challenging. The challenges lie in almost every analysis step, including skull stripping. The diversity of multi-site brain MR images makes it difficult to tune parameters specific to subjects or imaging protocols. Alternatively, using constant parameter settings often leads to inaccurate, inconsistent and even failed skull stripping results. One reason is that images scanned at different sites, under different scanners or protocols, and/or by different technicians often have very different fields of view (FOVs). Normalizing FOV is currently done manually or using ad hoc pre-processing steps, which do not always generalize well to multi-site diverse images. In this paper, we show that (a) a generic FOV normalization approach is possible in multi-site diverse images; we show experiments on images acquired from Philips, GE, and Siemens scanners, at 1.0T, 1.5T, and 3.0T field strengths, and from subjects 0-90 years of age; and (b) generic FOV normalization improves skull stripping accuracy and consistency for multiple skull stripping algorithms; we show this effect for 5 skull stripping algorithms including FSL's BET, AFNI's 3dSkullStrip, FreeSurfer's HWA, BrainSuite's BSE, and MASS. We have released our FOV normalization software at http://www.nitrc.org/projects/normalizefov .
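As a toy illustration only (the released tool performs registration-based normalization), the sketch below recentres a volume on its intensity centre of mass and crops or pads it to a fixed field of view; the target shape and the synthetic volume are arbitrary assumptions.

```python
import numpy as np

def normalize_fov(vol, target_shape=(96, 96, 96)):
    """Recentre on the intensity centre of mass, then crop/pad to target_shape."""
    coords = np.indices(vol.shape).reshape(3, -1).astype(float)
    w = np.clip(vol, 0, None).ravel()
    centre = (coords * w).sum(axis=1) / w.sum()
    out = np.zeros(target_shape, dtype=vol.dtype)
    src, dst = [], []
    for ax in range(3):
        lo = int(round(centre[ax])) - target_shape[ax] // 2
        s0, s1 = max(lo, 0), min(lo + target_shape[ax], vol.shape[ax])
        src.append(slice(s0, s1))
        dst.append(slice(s0 - lo, s0 - lo + (s1 - s0)))
    out[tuple(dst)] = vol[tuple(src)]
    return out

rng = np.random.default_rng(0)
vol = rng.random((120, 140, 110))                      # stand-in for a head MRI volume
print("normalized FOV shape:", normalize_fov(vol).shape)
```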
Collapse
Affiliation(s)
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
| | - Lilla Zöllei
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Xiao Da
- Functional Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Kallirroi Retzepi
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Shawn N Murphy
- Research Computing, Partners Healthcare, Boston, MA, USA
| | - Elizabeth R Gerstner
- Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Bruce R Rosen
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - P Ellen Grant
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | | | - Randy L Gollub
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
29
|
Zhao G, Liu F, Oler JA, Meyerand ME, Kalin NH, Birn RM. Bayesian convolutional neural network based MRI brain extraction on nonhuman primates. Neuroimage 2018; 175:32-44. [PMID: 29604454 PMCID: PMC6095475 DOI: 10.1016/j.neuroimage.2018.03.065] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 03/26/2018] [Accepted: 03/27/2018] [Indexed: 11/17/2022] Open
Abstract
Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10⁻⁴, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases.
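The uncertainty mechanism described above, Monte Carlo sampling with dropout at test time, can be sketched in a few lines of PyTorch. The tiny untrained 3D network and the patch size below are placeholders, not Bayesian SegNet or the CRF refinement step.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Dropout3d(p=0.5),                               # kept active at test time
    nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid(),       # brain-probability map
)

vol = torch.randn(1, 1, 32, 32, 32)                    # toy T1w patch
net.train()                                            # keep dropout stochastic
with torch.no_grad():
    samples = torch.stack([net(vol) for _ in range(20)])

mean_prob = samples.mean(dim=0)                        # segmentation estimate
uncertainty = samples.var(dim=0)                       # per-voxel model uncertainty
mask = mean_prob > 0.5
print("brain voxels:", int(mask.sum()), "| max uncertainty:", float(uncertainty.max()))
```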
Collapse
Affiliation(s)
- Gengyan Zhao
- Department of Medical Physics, University of Wisconsin - Madison, USA.
| | - Fang Liu
- Department of Radiology, University of Wisconsin - Madison, USA
| | - Jonathan A Oler
- Department of Psychiatry, University of Wisconsin - Madison, USA
| | - Mary E Meyerand
- Department of Medical Physics, University of Wisconsin - Madison, USA; Department of Biomedical Engineering, University of Wisconsin - Madison, USA
| | - Ned H Kalin
- Department of Psychiatry, University of Wisconsin - Madison, USA
| | - Rasmus M Birn
- Department of Medical Physics, University of Wisconsin - Madison, USA; Department of Psychiatry, University of Wisconsin - Madison, USA
| |
Collapse
|
30
|
Navarrete AF, Blezer ELA, Pagnotta M, de Viet ESM, Todorov OS, Lindenfors P, Laland KN, Reader SM. Primate Brain Anatomy: New Volumetric MRI Measurements for Neuroanatomical Studies. BRAIN, BEHAVIOR AND EVOLUTION 2018; 91:109-117. [PMID: 29894995 DOI: 10.1159/000488136] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Accepted: 03/05/2018] [Indexed: 12/20/2022]
Abstract
Since the publication of the primate brain volumetric dataset of Stephan and colleagues in the early 1980s, no major new comparative datasets covering multiple brain regions and a large number of primate species have become available. However, technological and other advances in the last two decades, particularly magnetic resonance imaging (MRI) and the creation of institutions devoted to the collection and preservation of rare brain specimens, provide opportunities to rectify this situation. Here, we present a new dataset including brain region volumetric measurements of 39 species, including 20 species not previously available in the literature, with measurements of 16 brain areas. These volumes were extracted from MRI of 46 brains of 38 species from the Netherlands Institute of Neuroscience Primate Brain Bank, scanned at high resolution with a 9.4-T scanner, plus a further 7 donated MRI of 4 primate species. Partial measurements were made on an additional 8 brains of 5 species. We make the dataset and MRI scans available online in the hope that they will be of value to researchers conducting comparative studies of primate evolution.
Collapse
Affiliation(s)
- Ana F Navarrete
- Centre for Social Learning and Cognitive Evolution, School of Biology, University of St. Andrews, St. Andrews, United Kingdom.,Department of Biology and Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
| | - Erwin L A Blezer
- Biomedical MR Imaging and Spectroscopy Group, Center for Image Sciences, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Murillo Pagnotta
- Centre for Social Learning and Cognitive Evolution, School of Biology, University of St. Andrews, St. Andrews, United Kingdom
| | - Elizabeth S M de Viet
- Department of Biology and Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
| | - Orlin S Todorov
- Department of Biology and Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
| | - Patrik Lindenfors
- Institute for Future Studies, Stockholm, Sweden.,Centre for Cultural Evolution & Department of Zoology, Stockholm University, Stockholm, Sweden
| | - Kevin N Laland
- Centre for Social Learning and Cognitive Evolution, School of Biology, University of St. Andrews, St. Andrews, United Kingdom
| | - Simon M Reader
- Department of Biology and Helmholtz Institute, Utrecht University, Utrecht, the Netherlands.,Department of Biology, McGill University, Montreal, Québec, Canada
| |
Collapse
|
31
|
Shi J, Zheng X, Li Y, Zhang Q, Ying S. Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease. IEEE J Biomed Health Inform 2018; 22:173-183. [DOI: 10.1109/jbhi.2017.2655720] [Citation(s) in RCA: 222] [Impact Index Per Article: 31.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
32
|
Zhu X, Suk HI, Huang H, Shen D. Low-Rank Graph-Regularized Structured Sparse Regression for Identifying Genetic Biomarkers. IEEE TRANSACTIONS ON BIG DATA 2017; 3:405-414. [PMID: 29725610 PMCID: PMC5929142 DOI: 10.1109/tbdata.2017.2735991] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
In this paper, we propose a novel sparse regression method for Brain-Wide and Genome-Wide association study. Specifically, we impose a low-rank constraint on the weight coefficient matrix and then decompose it into two low-rank matrices, which find relationships in genetic features and in brain imaging features, respectively. We also introduce a sparse acyclic digraph with a sparsity-inducing penalty to further take into account the correlations among the genetic variables, which makes it possible to identify the representative SNPs that are highly associated with the brain imaging features. We optimize our objective function by jointly tackling low-rank regression and variable selection in a single framework. In our method, the low-rank constraint allows us to conduct variable selection with the low-rank representations of the data; the learned low-sparsity weight coefficients allow us to discard unimportant variables at the end. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset showed that the proposed method could select the important SNPs to more accurately estimate the brain imaging features than the state-of-the-art methods.
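The decomposition of the coefficient matrix into a genetic factor and an imaging factor can be sketched as follows: the imaging phenotypes are first summarized by a low-rank basis, and a sparse map from SNPs to that basis is then fitted by iterative soft-thresholding. This is a deliberate simplification of the paper's objective (no acyclic-digraph penalty, no joint optimization), and all sizes and weights are assumptions.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_sparse_regression(X, Y, rank=3, lam=0.05, n_iter=300):
    """Fit W = A @ B with B a low-rank phenotype basis and A sparse (ISTA)."""
    n = len(X)
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    B = Vt[:rank]                                      # (rank, n_phenotypes), orthonormal rows
    T = Y @ B.T                                        # phenotypes projected onto the basis
    A = np.zeros((X.shape[1], rank))
    step = n / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        A = soft(A - step * X.T @ (X @ A - T) / n, lam * step)
    return A, B

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 200))                        # SNP genotypes
Y = X[:, :4] @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(100, 30))
A, B = low_rank_sparse_regression(X, Y)
print("SNPs retained:", np.flatnonzero(np.abs(A).sum(axis=1) > 1e-6)[:10])
```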
Collapse
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, and also with the Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin, Guangxi 541000, China
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 03760, Republic of Korea
| | - Heng Huang
- Electrical and Computer Engineering, University of Pittsburgh, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 03760, Republic of Korea
| |
Collapse
|
33
|
Wang Z, Zhu X, Adeli E, Zhu Y, Nie F, Munsell B, Wu G. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning. Med Image Anal 2017; 39:218-230. [PMID: 28551556 PMCID: PMC5901767 DOI: 10.1016/j.media.2017.05.003] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2016] [Revised: 01/27/2017] [Accepted: 05/09/2017] [Indexed: 01/12/2023]
Abstract
Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer's disease and Parkinson's disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets.
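Plain graph-based transductive learning, the starting point that pGTL refines, can be demonstrated with scikit-learn's LabelSpreading: a subject-similarity graph built from imaging features propagates the known labels to unlabeled (testing) subjects. The synthetic features, kernel width and clamping factor are placeholder assumptions; the iterative intrinsic-representation refinement of pGTL is not shown.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
n = 200
y_true = rng.integers(0, 2, n)
X = 1.5 * y_true[:, None] + rng.normal(0, 1.0, (n, 30))   # imaging features

y_obs = y_true.copy()
y_obs[60:] = -1                                        # -1 marks unlabeled (testing) subjects

model = LabelSpreading(kernel="rbf", gamma=0.05, alpha=0.2)
model.fit(X, y_obs)
acc = (model.transduction_[60:] == y_true[60:]).mean()
print(f"transductive accuracy on unlabeled subjects: {acc:.2f}")
```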
Collapse
Affiliation(s)
- Zhengxia Wang
- Department of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, 400074, PR China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Automation, Chongqing University, Chongqing, 400044, PR China.
| | - Xiaofeng Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; Department Computer Science and Information Engineering, Guangxi Normal University, Guilin, 541004, PR China
| | - Ehsan Adeli
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
| | - Yingying Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
| | - Feiping Nie
- School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China
| | - Brent Munsell
- Department of Computer Science, College of Charleston, Charleston, SC 29424, USA
| | - Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
| |
Collapse
|
34
|
Peng B, Lu J, Saxena A, Zhou Z, Zhang T, Wang S, Dai Y. Examining Brain Morphometry Associated with Self-Esteem in Young Adults Using Multilevel-ROI-Features-Based Classification Method. Front Comput Neurosci 2017; 11:37. [PMID: 28588470 PMCID: PMC5438414 DOI: 10.3389/fncom.2017.00037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2016] [Accepted: 05/04/2017] [Indexed: 11/29/2022] Open
Abstract
Purpose: This study examines self-esteem-related brain morphometry on brain magnetic resonance (MR) images using a multilevel-features-based classification method. Method: The multilevel region of interest (ROI) features consist of two types of features: (i) ROI features, which include gray matter volume, white matter volume, cerebrospinal fluid volume, cortical thickness, and cortical surface area, and (ii) similarity features, which are based on similarity calculations of cortical thickness between ROIs. For each feature type, a hybrid feature selection method, comprising filter-based and wrapper-based algorithms, is used to select the most discriminating features. ROI features and similarity features are integrated by using multi-kernel support vector machines (SVMs) with an appropriate weighting factor. Results: The classification performance is improved by using multilevel ROI features with an accuracy of 96.66%, a specificity of 96.62%, and a sensitivity of 95.67%. The most discriminating ROI features that are related to self-esteem spread over the occipital lobe, frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region, mainly involving white matter and cortical thickness. The most discriminating similarity features are distributed in both the right and left hemispheres, including the frontal lobe, occipital lobe, limbic lobe, parietal lobe, and central region, which conveys information about structural connections between different brain regions. Conclusion: By using ROI features and similarity features to examine self-esteem-related brain morphometry, this paper provides pilot evidence that self-esteem is linked to specific ROIs and to structural connections between different brain regions.
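The multi-kernel fusion step can be sketched with a precomputed-kernel SVM: one RBF kernel per feature type (ROI features, similarity features), combined with a weighting factor before classification. The synthetic features, the weight beta and the fixed train/test split are illustrative assumptions; the hybrid feature selection is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n = 150
y = rng.integers(0, 2, n)
roi_feats = y[:, None] + rng.normal(0, 1.0, (n, 40))   # volumes, thickness, surface area, ...
sim_feats = y[:, None] + rng.normal(0, 1.5, (n, 25))   # inter-ROI thickness similarities

beta = 0.6                                             # weighting factor between the two kernels
K = beta * rbf_kernel(roi_feats) + (1 - beta) * rbf_kernel(sim_feats)

train, test = np.arange(0, 100), np.arange(100, n)
clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
pred = clf.predict(K[np.ix_(test, train)])
print(f"test accuracy: {(pred == y[test]).mean():.2f}")
```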
Collapse
Affiliation(s)
- Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.,University of Chinese Academy of Sciences, Beijing, China.,Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
| | - Jieru Lu
- School of Information Science and Engineering, Changzhou University, Changzhou, China
| | - Aditya Saxena
- Trauma Center, Khandwa District Hospital, Khandwa, India
| | - Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Tao Zhang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
| | - Suhong Wang
- Department of Neuroscience, The Third Affiliated Hospital of Soochow University, Changzhou, China
| | - Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| |
Collapse
|
35
|
Peng B, Wang S, Zhou Z, Liu Y, Tong B, Zhang T, Dai Y. A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease. Neurosci Lett 2017; 651:88-94. [PMID: 28435046 DOI: 10.1016/j.neulet.2017.04.034] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2017] [Revised: 04/07/2017] [Accepted: 04/18/2017] [Indexed: 11/16/2022]
Abstract
Machine learning methods have been widely used in recent years for the detection of neuroimaging biomarkers in regions of interest (ROIs) and for assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection methods and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved compared with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming the potential of our method in assisting diagnosis of the disease.
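The hybrid feature-selection idea, a univariate filter followed by a wrapper, maps naturally onto a scikit-learn pipeline: SelectKBest as the filter stage and recursive feature elimination with a linear SVM as the wrapper stage. The feature counts, synthetic data and cross-validation setup below are assumptions; the multi-kernel fusion of low- and high-level features is not included.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pd, n_nc, n_feat = 69, 103, 300                      # cohort sizes quoted in the abstract
y = np.r_[np.ones(n_pd, dtype=int), np.zeros(n_nc, dtype=int)]
X = rng.normal(size=(n_pd + n_nc, n_feat))
X[:, :20] += 0.8 * y[:, None]                          # 20 informative ROI features

pipe = Pipeline([
    ("filter", SelectKBest(f_classif, k=60)),                          # filter stage
    ("wrapper", RFE(SVC(kernel="linear"), n_features_to_select=15)),   # wrapper stage
    ("clf", SVC(kernel="linear")),
])
print("cross-validated accuracy: %.2f" % cross_val_score(pipe, X, y, cv=5).mean())
```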
Collapse
Affiliation(s)
- Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; University of Chinese Academy of Sciences, Beijing 100049, China; Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin 130033, China
| | - Suhong Wang
- Department of Neuroscience, The Third Affiliated Hospital of Soochow University, Changzhou, Jiangsu 213003, China
| | - Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Yan Liu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Baotong Tong
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Tao Zhang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin 130033, China
| | - Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China.
| |
Collapse
|
36
|
Weiner MW, Veitch DP, Aisen PS, Beckett LA, Cairns NJ, Green RC, Harvey D, Jack CR, Jagust W, Morris JC, Petersen RC, Saykin AJ, Shaw LM, Toga AW, Trojanowski JQ. Recent publications from the Alzheimer's Disease Neuroimaging Initiative: Reviewing progress toward improved AD clinical trials. Alzheimers Dement 2017; 13:e1-e85. [PMID: 28342697 PMCID: PMC6818723 DOI: 10.1016/j.jalz.2016.11.007] [Citation(s) in RCA: 179] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2016] [Revised: 11/21/2016] [Accepted: 11/28/2016] [Indexed: 01/31/2023]
Abstract
INTRODUCTION The Alzheimer's Disease Neuroimaging Initiative (ADNI) has continued development and standardization of methodologies for biomarkers and has provided an increased depth and breadth of data available to qualified researchers. This review summarizes the over 400 publications using ADNI data during 2014 and 2015. METHODS We used standard searches to find publications using ADNI data. RESULTS (1) Structural and functional changes, including subtle changes to hippocampal shape and texture, atrophy in areas outside of hippocampus, and disruption to functional networks, are detectable in presymptomatic subjects before hippocampal atrophy; (2) In subjects with abnormal β-amyloid deposition (Aβ+), biomarkers become abnormal in the order predicted by the amyloid cascade hypothesis; (3) Cognitive decline is more closely linked to tau than Aβ deposition; (4) Cerebrovascular risk factors may interact with Aβ to increase white-matter (WM) abnormalities which may accelerate Alzheimer's disease (AD) progression in conjunction with tau abnormalities; (5) Different patterns of atrophy are associated with impairment of memory and executive function and may underlie psychiatric symptoms; (6) Structural, functional, and metabolic network connectivities are disrupted as AD progresses. Models of prion-like spreading of Aβ pathology along WM tracts predict known patterns of cortical Aβ deposition and declines in glucose metabolism; (7) New AD risk and protective gene loci have been identified using biologically informed approaches; (8) Cognitively normal and mild cognitive impairment (MCI) subjects are heterogeneous and include groups typified not only by "classic" AD pathology but also by normal biomarkers, accelerated decline, and suspected non-Alzheimer's pathology; (9) Selection of subjects at risk of imminent decline on the basis of one or more pathologies improves the power of clinical trials; (10) Sensitivity of cognitive outcome measures to early changes in cognition has been improved and surrogate outcome measures using longitudinal structural magnetic resonance imaging may further reduce clinical trial cost and duration; (11) Advances in machine learning techniques such as neural networks have improved diagnostic and prognostic accuracy especially in challenges involving MCI subjects; and (12) Network connectivity measures and genetic variants show promise in multimodal classification and some classifiers using single modalities are rivaling multimodal classifiers. DISCUSSION Taken together, these studies fundamentally deepen our understanding of AD progression and its underlying genetic basis, which in turn informs and improves clinical trial design.
Collapse
Affiliation(s)
- Michael W Weiner
- Department of Veterans Affairs Medical Center, Center for Imaging of Neurodegenerative Diseases, San Francisco, CA, USA; Department of Radiology, University of California, San Francisco, CA, USA; Department of Medicine, University of California, San Francisco, CA, USA; Department of Psychiatry, University of California, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, CA, USA.
| | - Dallas P Veitch
- Department of Veterans Affairs Medical Center, Center for Imaging of Neurodegenerative Diseases, San Francisco, CA, USA
| | - Paul S Aisen
- Alzheimer's Therapeutic Research Institute, University of Southern California, San Diego, CA, USA
| | - Laurel A Beckett
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
| | - Nigel J Cairns
- Knight Alzheimer's Disease Research Center, Washington University School of Medicine, Saint Louis, MO, USA; Department of Neurology, Washington University School of Medicine, Saint Louis, MO, USA
| | - Robert C Green
- Division of Genetics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
| | - Danielle Harvey
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
| | | | - William Jagust
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
| | - John C Morris
- Alzheimer's Therapeutic Research Institute, University of Southern California, San Diego, CA, USA
| | | | - Andrew J Saykin
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN, USA
| | - Leslie M Shaw
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Arthur W Toga
- Laboratory of Neuroimaging, Institute of Neuroimaging and Informatics, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
| | - John Q Trojanowski
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute on Aging, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Alzheimer's Disease Core Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Udall Parkinson's Research Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
37
|
Suk HI, Lee SW, Shen D. Deep ensemble learning of sparse regression models for brain disease diagnosis. Med Image Anal 2017; 37:101-113. [PMID: 28167394 PMCID: PMC5808465 DOI: 10.1016/j.media.2017.01.008] [Citation(s) in RCA: 123] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2016] [Revised: 01/14/2017] [Accepted: 01/23/2017] [Indexed: 01/18/2023]
Abstract
Recent studies on brain imaging analysis have witnessed the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data with a small number of training samples, especially in medical problems. Meanwhile, deep learning methods have achieved great success by outperforming state-of-the-art methods in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with a different value of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set and thereby have different power to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we thus call the 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature.
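The two-step construction, several sparse regression models whose target-level outputs become the representation for a downstream learner, can be sketched with scikit-learn; a logistic regression stands in for the paper's convolutional network, and the synthetic clinical score, regularization grid and train/test split are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 300, 200
y = rng.integers(0, 2, n)                              # diagnostic label
score = 20 + 5 * y + rng.normal(0, 1, n)               # a clinical score
X = rng.normal(size=(n, p))
X[:, :30] += 0.6 * y[:, None]                          # informative imaging features

Xtr, Xte, ytr, yte, str_, _ = train_test_split(X, y, score, random_state=0)

# Multiple sparse regressors, each with a different regularization strength.
alphas = [0.01, 0.05, 0.1, 0.5, 1.0]
models = [Lasso(alpha=a, max_iter=5000).fit(Xtr, str_) for a in alphas]
rep_tr = np.column_stack([m.predict(Xtr) for m in models])   # target-level representation
rep_te = np.column_stack([m.predict(Xte) for m in models])

clf = LogisticRegression().fit(rep_tr, ytr)
print(f"ensemble test accuracy: {clf.score(rep_te, yte):.2f}")
```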
Collapse
Affiliation(s)
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
| | - Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| | - Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea; Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, NC 27599, USA
| |
Collapse
|
38
|
Lei B, Jiang F, Chen S, Ni D, Wang T. Longitudinal Analysis for Disease Progression via Simultaneous Multi-Relational Temporal-Fused Learning. Front Aging Neurosci 2017; 9:6. [PMID: 28316569 PMCID: PMC5335657 DOI: 10.3389/fnagi.2017.00006] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Accepted: 01/11/2017] [Indexed: 01/21/2023] Open
Abstract
It is highly desirable to predict the progression of Alzheimer's disease (AD) in patients [e.g., to predict conversion of mild cognitive impairment (MCI) to AD]; in particular, longitudinal prediction of AD is important for its early diagnosis. Currently, most existing methods predict different clinical scores using different models, or separately predict multiple scores at different future time points. Such approaches prevent coordinated learning of multiple predictions that can be used to jointly predict clinical scores at multiple future time points. In this paper, we propose a joint learning method for predicting clinical scores of patients using multiple longitudinal prediction models for various future time points. Three important relationships among training samples, features, and clinical scores are explored. The relationship among different longitudinal prediction models is captured using a common feature set among the multiple prediction models at different time points. Our experimental results based on the Alzheimer's disease neuroimaging initiative (ADNI) database show that our method achieves considerable improvement over competing methods in predicting multiple clinical scores.
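The common-feature-set constraint across the longitudinal prediction models can be illustrated with scikit-learn's MultiTaskLasso, which forces the same features to be selected for every future time point. The synthetic baseline features, visit count and regularization strength are placeholder assumptions; the sample- and score-level relations of the paper are omitted.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, n_visits = 200, 120, 4
W = np.zeros((p, n_visits))
W[:8] = rng.normal(size=(8, n_visits))                 # 8 features drive all future scores
X = rng.normal(size=(n, p))                            # baseline imaging features
Y = X @ W + 0.2 * rng.normal(size=(n, n_visits))       # clinical scores at 4 future visits

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
shared = np.flatnonzero(np.abs(model.coef_).sum(axis=0) > 1e-8)
print("features shared across all time points:", shared)
```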
Collapse
Affiliation(s)
- Baiying Lei
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, China
| | - Feng Jiang
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
| | - Siping Chen
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
| | - Dong Ni
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
| | - Tianfu Wang
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
| |
Collapse
|
40
|
Puccio B, Pooley JP, Pellman JS, Taverna EC, Craddock RC. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data. Gigascience 2016; 5:45. [PMID: 27782853 PMCID: PMC5080782 DOI: 10.1186/s13742-016-0150-5] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Accepted: 09/22/2016] [Indexed: 01/18/2023] Open
Abstract
BACKGROUND Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. FINDINGS Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. CONCLUSION Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.
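A typical use of such gold-standard masks is to score an automatic skull-stripping result against the manually corrected one with the Dice coefficient; the sketch below does this for two synthetic spherical masks, so the shapes and offsets are arbitrary assumptions rather than real data.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

zz, yy, xx = np.mgrid[:64, :64, :64]
gold = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2   # manually corrected mask
auto = (zz - 30) ** 2 + (yy - 33) ** 2 + (xx - 32) ** 2 < 21 ** 2   # automatic mask
print(f"Dice overlap: {dice(auto, gold):.3f}")
```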
Collapse
Affiliation(s)
- Benjamin Puccio
- Computational Neuroimaging Lab, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, 10962 NY USA
| | - James P. Pooley
- Center for the Developing Brain, Child Mind Institute, 445 Park Ave, New York, 10022 NY USA
| | - John S. Pellman
- Center for the Developing Brain, Child Mind Institute, 445 Park Ave, New York, 10022 NY USA
| | - Elise C. Taverna
- Computational Neuroimaging Lab, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, 10962 NY USA
| | - R. Cameron Craddock
- Computational Neuroimaging Lab, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, 10962 NY USA
- Center for the Developing Brain, Child Mind Institute, 445 Park Ave, New York, 10022 NY USA
| |
Collapse
|
41
|
Wang X, Shen D, Huang H. Prediction of Memory Impairment with MRI Data: A Longitudinal Study of Alzheimer's Disease. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2016; 9900:273-281. [PMID: 28149965 PMCID: PMC5278819 DOI: 10.1007/978-3-319-46720-7_32] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Alzheimer's Disease (AD), a severe type of neurodegenerative disorder with progressive impairment of learning and memory, has threatened the health of millions of people. Recognizing AD at an early stage is therefore crucial. Multiple models have been presented to predict cognitive impairments by means of neuroimaging data. However, traditional models did not employ the valuable longitudinal information along the progression of the disease. In this paper, we propose a novel longitudinal feature learning model to simultaneously uncover the interrelations among different cognitive measures at different time points and utilize such interrelated structures to enhance the learning of associations between imaging features and prediction tasks. Moreover, we adopt the Schatten p-norm to identify the interrelation structures existing in the low-rank subspace. Empirical results on the ADNI cohort demonstrate the promising performance of our model.
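The Schatten p-norm used above is simply the ℓp norm of the singular values of the coefficient matrix (p = 1 recovers the nuclear norm, which favors low-rank, interrelated structure across cognitive measures and time points). A small numeric illustration on arbitrary matrices:

```python
import numpy as np

def schatten_norm(W, p):
    """Schatten p-norm: the l_p norm of the singular values of W."""
    s = np.linalg.svd(W, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(0)
W_lowrank = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 6))   # rank-2 coefficient matrix
W_fullrank = rng.normal(size=(50, 6))
for name, W in [("low-rank", W_lowrank), ("full-rank", W_fullrank)]:
    print(f"{name}: Schatten-1 norm = {schatten_norm(W, 1.0):.2f}")
```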
Collapse
Affiliation(s)
- Xiaoqian Wang
- Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
| | - Heng Huang
- Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
| |
Collapse
|
42
|
Huo Z, Shen D, Huang H. New Multi-task Learning Model to Predict Alzheimer's Disease Cognitive Assessment. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2016; 9900:317-325. [PMID: 28149966 PMCID: PMC5278836 DOI: 10.1007/978-3-319-46720-7_37] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
As a neurodegenerative disorder, Alzheimer's disease (AD) can be characterized by the progressive impairment of memory and other cognitive functions. Thus, using neuroimaging measures to predict cognitive performance and track the progression of AD is an important topic. Many existing cognitive performance prediction methods employ regression models to associate cognitive scores with neuroimaging measures, but these methods do not take into account the interconnected structures within imaging data and those among cognitive scores. To address this problem, we propose a novel multi-task learning model that minimizes the k smallest singular values to uncover the underlying low-rank common subspace and jointly analyze all the imaging and clinical data. The effectiveness of our method is demonstrated by clearly improved prediction performance in all empirical AD cognitive score prediction cases.
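The penalty named above, the sum of the k smallest singular values, is zero exactly when the coefficient matrix has rank at most min(its dimensions) minus k, which is what drives the solution toward a low-rank common subspace. A worked numeric check on arbitrary matrices:

```python
import numpy as np

def k_smallest_singular_sum(W, k):
    s = np.sort(np.linalg.svd(W, compute_uv=False))
    return s[:k].sum()

rng = np.random.default_rng(0)
W_rank2 = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 10))    # exactly rank 2
W_noisy = W_rank2 + 0.01 * rng.normal(size=(40, 10))
for name, W in [("exact rank 2", W_rank2), ("slightly perturbed", W_noisy)]:
    print(f"{name}: sum of 8 smallest singular values = {k_smallest_singular_sum(W, 8):.4f}")
```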
Collapse
Affiliation(s)
- Zhouyuan Huo
- Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
| | - Heng Huang
- Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
| |
Collapse
|
43
|
Li Z, Suk HI, Shen D, Li L. Sparse Multi-Response Tensor Regression for Alzheimer's Disease Study With Multivariate Clinical Assessments. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1927-1936. [PMID: 26960221 PMCID: PMC5154176 DOI: 10.1109/tmi.2016.2538289] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder that has recently seen a serious increase in the number of affected subjects. In the last decade, neuroimaging has been shown to be a useful tool to understand AD and its prodromal stage, amnestic mild cognitive impairment (MCI). The majority of AD/MCI studies have focused on disease diagnosis, by formulating the problem as classification with a binary outcome of AD/MCI or healthy controls. Studies have recently emerged that associate image scans with continuous clinical scores, which are expected to contain richer information than a binary outcome. However, very few studies aim at modeling multiple clinical scores simultaneously, even though it is commonly conceived that multivariate outcomes provide correlated and complementary information about the disease pathology. In this article, we propose a sparse multi-response tensor regression method to model multiple outcomes jointly as well as to model multiple voxels of an image jointly. The proposed method is particularly useful both for inferring clinical scores, and thus disease diagnosis, and for identifying brain subregions that are highly relevant to the disease outcomes. We conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and showed that the proposed method enhances the performance and clearly outperforms the competing solutions.
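The simplest instance of a low-rank tensor coefficient, a rank-1 matrix coefficient B = u vᵀ acting on 2-D image predictors, can be fitted by alternating least squares, as sketched below for a single response; the image size, noise level and ground-truth pattern are assumptions, and the paper's multi-response and sparsity structure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 200, 16, 16
u_true, v_true = np.zeros(d1), np.zeros(d2)
u_true[4:8], v_true[6:10] = 1.0, 1.0                   # a small active patch
X = rng.normal(size=(n, d1, d2))                       # n 2-D image predictors
y = np.einsum("nij,i,j->n", X, u_true, v_true) + 0.1 * rng.normal(size=n)

# Alternating least squares for the rank-1 coefficient B = outer(u, v).
u, v = np.ones(d1), np.ones(d2)
for _ in range(50):
    Zu = np.einsum("nij,j->ni", X, v)                  # design for u with v fixed
    u, *_ = np.linalg.lstsq(Zu, y, rcond=None)
    Zv = np.einsum("nij,i->nj", X, u)                  # design for v with u fixed
    v, *_ = np.linalg.lstsq(Zv, y, rcond=None)

corr = np.corrcoef(np.outer(u, v).ravel(), np.outer(u_true, v_true).ravel())[0, 1]
print(f"correlation with true coefficient image: {corr:.2f}")
```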
Collapse
Affiliation(s)
- Zhou Li
- Department of Statistics, North Carolina State University, Raleigh, NC 27695 USA
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
| | - Dinggang Shen
- Biomedical Research Imaging Center (BRIC) and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
| | - Lexin Li
- Division of Biostatistics, University of California at Berkeley, Berkeley, CA 94720 USA
| |
Collapse
|
44
|
Liu M, Zhang D, Adeli-Mosabbeb E, Shen D. Inherent Structure-Based Multiview Learning With Multitemplate Feature Representation for Alzheimer's Disease Diagnosis. IEEE Trans Biomed Eng 2016; 63:1473-82. [PMID: 26540666 PMCID: PMC4851920 DOI: 10.1109/tbme.2015.2496233] [Citation(s) in RCA: 71] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Multitemplate-based brain morphometric pattern analysis using magnetic resonance imaging has recently been proposed for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment, or MCI). In such methods, multiview morphological patterns generated from multiple templates are used as the feature representation of brain images. However, existing multitemplate-based methods often simply assume that each class follows a specific type of data distribution (i.e., a single cluster), whereas in reality the underlying data distribution is not known a priori. In this paper, we propose an inherent structure-based multiview learning method using multiple templates for AD/MCI classification. Specifically, we first extract multiview feature representations for subjects using multiple selected templates and then cluster the subjects within a specific class into several subclasses (i.e., clusters) in each view space. We then encode those subclasses with unique codes by considering both their original class information and their own distribution information, followed by a multitask feature selection model. Finally, we learn an ensemble of view-specific support vector machine classifiers based on the features selected in each respective view and fuse their results to reach the final decision. Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate that our method achieves promising results for AD/MCI classification compared to state-of-the-art multitemplate-based methods.
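The sketch below (our own simplification with synthetic data, not the authors' code) illustrates the overall flow: subjects in each diagnostic class are clustered into subclasses per view, and one classifier per view is trained and fused by averaging; the paper's multitask feature selection driven by the subclass codes is replaced here by simple univariate selection.

```python
# Ensemble of view-specific classifiers over multi-template (multi-view) features.
# Synthetic data; the subclass codes computed below would drive the multitask
# feature selection step in the full method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d, n_views = 120, 200, 3                       # subjects, features per view, templates
views = [rng.normal(size=(n, d)) for _ in range(n_views)]
y = rng.integers(0, 2, size=n)                    # 0 = NC, 1 = AD/MCI (synthetic labels)

models, subclass_codes = [], []
for X in views:
    # cluster each diagnostic class into subclasses within this view space
    sub = np.empty(n, dtype=int)
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[idx])
        sub[idx] = 2 * c + km.labels_             # unique code per (class, subclass)
    subclass_codes.append(sub)
    clf = make_pipeline(SelectKBest(f_classif, k=50),
                        SVC(kernel="linear", probability=True))
    models.append(clf.fit(X, y))                  # view-specific classifier

# fuse view-specific posterior probabilities by averaging
probs = np.mean([m.predict_proba(X)[:, 1] for m, X in zip(models, views)], axis=0)
final_prediction = (probs > 0.5).astype(int)
```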
Collapse
Affiliation(s)
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Daoqiang Zhang
- School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
| | - Ehsan Adeli-Mosabbeb
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| |
Collapse
|
45
|
Adeli E, Shi F, An L, Wee CY, Wu G, Wang T, Shen D. Joint feature-sample selection and robust diagnosis of Parkinson's disease from MRI data. Neuroimage 2016; 141:206-219. [PMID: 27296013 DOI: 10.1016/j.neuroimage.2016.05.054] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2015] [Revised: 03/31/2016] [Accepted: 05/22/2016] [Indexed: 01/27/2023] Open
Abstract
Parkinson's disease (PD) is a debilitating neurodegenerative disorder caused by the progressive loss of the neurotransmitter dopamine. Lack of this chemical messenger impairs several brain regions and yields various motor and non-motor symptoms. The incidence of PD is predicted to double in the next two decades, which calls for more research on its early diagnosis and treatment. In this paper, we propose an approach to diagnose PD using magnetic resonance imaging (MRI) data. Specifically, we first introduce a joint feature-sample selection (JFSS) method for selecting an optimal subset of samples and features, in order to learn a reliable diagnosis model. The proposed JFSS model effectively discards poor samples and irrelevant features. As a result, the selected features play an important role in PD characterization and help identify the most relevant and critical imaging biomarkers for PD. Then, a robust classification framework is proposed to simultaneously de-noise the selected subset of features and samples and learn a classification model. Our model can also de-noise testing samples based on the cleaned training data. Unlike many previous works that perform de-noising in an unsupervised manner, we perform supervised de-noising for both training and testing data, thus boosting the diagnostic accuracy. Experimental results on both synthetic and publicly available PD datasets show promising results. To evaluate the proposed method, we use the popular Parkinson's Progression Markers Initiative (PPMI) database. Our results indicate that the proposed method can differentiate between PD and normal control (NC) subjects and outperforms the competing methods by a relatively large margin. Notably, our proposed framework can also be used for diagnosis of other brain disorders. To show this, we have also conducted experiments on the widely used ADNI database. The obtained results indicate that our proposed method can identify the imaging biomarkers and diagnose the disease with favorable accuracy compared to the baseline methods.
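As a rough, hypothetical sketch of the joint feature-sample selection idea (not the authors' formulation), one can alternate between selecting features with an L1-penalized model and discarding the training samples the current model fits worst; the function and variable names below are our own.

```python
# Hypothetical JFSS-like loop: alternate sparse feature selection with
# removal of the least-consistent samples, then reuse the cleaned data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def jfss_like(X, y, n_rounds=3, drop_frac=0.05, C=0.1):
    keep = np.arange(len(y))
    for _ in range(n_rounds):
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[keep], y[keep])
        # probability assigned to the true class of each kept sample
        proba = clf.predict_proba(X[keep])[np.arange(len(keep)), y[keep]]
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[np.argsort(proba)[n_drop:]]   # discard least-consistent samples
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X[keep], y[keep])
    selected_features = np.flatnonzero(clf.coef_.ravel() != 0)
    return keep, selected_features, clf

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 300))                   # synthetic MRI-derived features
y = rng.integers(0, 2, size=150)                  # synthetic PD / NC labels
kept_samples, selected, model = jfss_like(X, y)
```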
Collapse
Affiliation(s)
- Ehsan Adeli
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA
| | - Feng Shi
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA
| | - Le An
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA
| | - Chong-Yaw Wee
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA; Department of Biomedical Engineering, National University of Singapore, Singapore
| | - Guorong Wu
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA
| | - Tao Wang
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA; Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Alzheimer's Disease and Related Disorders Center, Shanghai Jiao Tong University, Shanghai, China
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina-Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
| |
Collapse
|
46
|
Liu M, Zhang D, Shen D. Relationship Induced Multi-Template Learning for Diagnosis of Alzheimer's Disease and Mild Cognitive Impairment. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1463-74. [PMID: 26742127 PMCID: PMC5572669 DOI: 10.1109/tmi.2016.2515021] [Citation(s) in RCA: 117] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
As shown in the literature, methods based on multiple templates usually achieve better performance than those using only a single template for processing medical images. However, most existing multi-template based methods simply average or concatenate the multiple sets of features extracted from different templates, which potentially ignores important structural information contained in the multi-template data. Accordingly, in this paper, we propose a novel relationship induced multi-template learning method for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI), by explicitly modeling structural information in the multi-template data. Specifically, we first nonlinearly register each brain's magnetic resonance (MR) image separately onto multiple pre-selected templates and then extract multiple sets of features for this MR image. Next, we develop a novel feature selection algorithm that introduces two regularization terms to model the relationships among templates and among individual subjects. Using the selected features corresponding to multiple templates, we then construct multiple support vector machine (SVM) classifiers. Finally, ensemble classification is used to combine the outputs of all SVM classifiers to obtain the final result. We evaluate our proposed method on 459 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including 97 AD patients, 128 normal controls (NC), 117 progressive MCI (pMCI) patients, and 117 stable MCI (sMCI) patients. The experimental results demonstrate promising classification performance compared with several state-of-the-art methods for multi-template based AD/MCI classification.
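To make the two relationship regularizers concrete, a generic objective of this flavor could look as follows (our own notation; the paper's exact terms may differ), with X_m the features extracted using template m and y the diagnostic labels or scores:

```latex
\min_{\{w_m\}_{m=1}^{M}} \;
  \sum_{m=1}^{M} \| y - X_m w_m \|_2^2
  + \lambda_1 \sum_{m=1}^{M} \| w_m \|_1
  + \lambda_2 \sum_{m < m'} \| X_m w_m - X_{m'} w_{m'} \|_2^2
  + \lambda_3 \sum_{m=1}^{M} (X_m w_m)^\top L \, (X_m w_m)
```

The λ2 term encourages predictions derived from different templates to agree (the template-template relation), while the λ3 term, through a subject-similarity graph Laplacian L, encourages similar subjects to receive similar predicted values (the subject-subject relation); the features selected per template then feed the template-specific SVMs that are combined by ensemble classification.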
Collapse
|
47
|
Alansary A, Ismail M, Soliman A, Khalifa F, Nitzken M, Elnakib A, Mostapha M, Black A, Stinebruner K, Casanova MF, Zurada JM, El-Baz A. Infant Brain Extraction in T1-Weighted MR Images Using BET and Refinement Using LCDG and MGRF Models. IEEE J Biomed Health Inform 2016; 20:925-935. [DOI: 10.1109/jbhi.2015.2415477] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
48
|
Zhu X, Suk HI, Lee SW, Shen D. Subspace Regularized Sparse Multitask Learning for Multiclass Neurodegenerative Disease Identification. IEEE Trans Biomed Eng 2016; 63:607-18. [PMID: 26276982 PMCID: PMC4751062 DOI: 10.1109/tbme.2015.2466616] [Citation(s) in RCA: 104] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
The high-feature-dimension, low-sample-size problem is one of the major challenges in computer-aided Alzheimer's disease (AD) diagnosis. To circumvent this problem, feature selection and subspace learning have played core roles in the literature. Generally, feature selection methods are preferable in clinical applications because of their ease of interpretation, whereas subspace learning methods can usually achieve more promising results. In this paper, we combine these two methodological approaches to discriminative feature selection in a unified framework. Specifically, we utilize two subspace learning methods, namely linear discriminant analysis and locality preserving projection, which have proven effective in a variety of fields, to select class-discriminative and noise-resistant features. Unlike previous methods in neuroimaging studies that mostly focused on binary classification, the proposed feature selection method is also applicable to multiclass classification in AD diagnosis. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative dataset showed the effectiveness of the proposed method over other state-of-the-art methods.
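One way to picture such a subspace-regularized sparse multitask feature selection, in our own generic notation (the paper's exact formulation may differ), is

```latex
\min_{W} \;
  \| Y - X W \|_F^2
  + \alpha \, \mathrm{tr}\!\left( W^\top S_w W \right)
  - \beta  \, \mathrm{tr}\!\left( W^\top S_b W \right)
  + \gamma \, \mathrm{tr}\!\left( W^\top X^\top L X W \right)
  + \lambda \, \| W \|_{2,1}
```

where Y encodes the (possibly multiclass) labels, S_w and S_b are the within- and between-class scatter matrices (the LDA-flavored term), L is the graph Laplacian of a sample-similarity graph (the LPP-flavored term), and the ℓ2,1 norm zeroes out whole rows of W so that a common set of class-discriminative, locality-preserving features is selected across all classes.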
Collapse
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
| | - Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
| |
Collapse
|
49
|
Thung KH, Wee CY, Yap PT, Shen D. Identification of progressive mild cognitive impairment patients using incomplete longitudinal MRI scans. Brain Struct Funct 2015; 221:3979-3995. [PMID: 26603378 DOI: 10.1007/s00429-015-1140-6] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2014] [Accepted: 10/26/2015] [Indexed: 11/26/2022]
Abstract
Distinguishing progressive mild cognitive impairment (pMCI) from stable mild cognitive impairment (sMCI) is critical for identifying patients who are at risk for Alzheimer's disease (AD), so that early treatment can be administered. In this paper, we propose a pMCI/sMCI classification framework that harnesses the information available in longitudinal magnetic resonance imaging (MRI) data, which may be incomplete, to improve diagnostic accuracy. Volumetric features were first extracted from the baseline MRI scan and subsequent scans acquired after 6, 12, and 18 months. Dynamic features were then obtained by using the 18-month scan as the reference and computing the ratios of feature differences for the earlier scans. Features that are linearly or non-linearly correlated with the diagnostic labels are then selected using two elastic net sparse learning algorithms. Missing feature values due to the incomplete longitudinal data are imputed using a low-rank matrix completion method. Finally, based on the completed feature matrix, we build a multi-kernel support vector machine (mkSVM) to predict the diagnostic labels of samples with unknown diagnostic status. Our evaluation indicates that a diagnostic accuracy as high as 78.2 % can be achieved when information from the longitudinal scans is used, 6.6 % higher than when only the reference time point image is used. In other words, information provided by the longitudinal history of the disease improves diagnostic accuracy.
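The snippet below sketches the two stages most specific to this pipeline, low-rank completion of the partially observed longitudinal feature matrix followed by an SVM, using a simple soft-impute style solver; it is our own simplification with synthetic data (the real pipeline exploits label information during completion and uses a multi-kernel SVM).

```python
# Minimal sketch: (1) fill missing longitudinal feature values by iterative
# singular-value soft-thresholding, (2) classify with an SVM. Synthetic data.
import numpy as np
from sklearn.svm import SVC

def soft_impute(M, mask, tau=5.0, n_iter=100):
    """Low-rank completion: mask is True where an entry of M is observed."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
        X = np.where(mask, M, Z)                   # keep observed entries fixed
    return X

rng = np.random.default_rng(2)
F = rng.normal(size=(100, 40))                     # subjects x longitudinal features
mask = rng.random(F.shape) > 0.3                   # ~30% of entries missing
y = rng.integers(0, 2, size=100)                   # synthetic pMCI / sMCI labels

F_complete = soft_impute(F, mask)
clf = SVC(kernel="rbf").fit(F_complete, y)         # stand-in for the multi-kernel SVM
```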
Collapse
Affiliation(s)
- Kim-Han Thung
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Chong-Yaw Wee
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore, Singapore
| | - Pew-Thian Yap
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea.
| |
Collapse
|
50
|
Zhu X, Suk HI, Wang L, Lee SW, Shen D. A novel relational regularization feature selection method for joint regression and classification in AD diagnosis. Med Image Anal 2015; 38:205-214. [PMID: 26674971 DOI: 10.1016/j.media.2015.10.008] [Citation(s) in RCA: 109] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2014] [Revised: 06/10/2015] [Accepted: 10/21/2015] [Indexed: 01/18/2023]
Abstract
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method that embeds the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature, response-response, and sample-sample), which preserve the similarities among the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and we further propose a computationally efficient algorithm to optimize this objective. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, as well as a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results show the efficacy of the proposed method in improving both clinical score prediction and disease status identification, compared to state-of-the-art methods.
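A generic way to write such a relationally regularized objective (our notation, which may differ from the paper's) is

```latex
\min_{W} \;
  \| Y - X W \|_F^2
  + \alpha \, \mathrm{tr}\!\left( W^\top L_f W \right)
  + \beta  \, \mathrm{tr}\!\left( W L_r W^\top \right)
  + \gamma \, \mathrm{tr}\!\left( (X W)^\top L_s (X W) \right)
  + \lambda \, \| W \|_{2,1}
```

where the columns of Y stack the regression targets (ADAS-Cog, MMSE) and the class label, and L_f, L_r, and L_s are graph Laplacians built from feature-feature, response-response, and sample-sample similarities, respectively; the ℓ2,1 norm selects a common subset of features (rows of W) for all responses, and the dimension-reduced data are then fed to the support vector regression and classification models.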
Collapse
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, USA
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
| | - Li Wang
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, USA
| | - Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea.
| | - Dinggang Shen
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Republic of Korea.
| |
Collapse
|