1
Sarveswaran T, Rajangam V. An ensemble approach using multidimensional convolutional neural networks in wavelet domain for schizophrenia classification from sMRI data. Sci Rep 2025; 15:10257. PMID: 40133457; PMCID: PMC11937485; DOI: 10.1038/s41598-025-93912-7.
Abstract
Schizophrenia is a complicated mental condition marked by disruptions in thought processes, perceptions, and emotional responses, which can cause severe impairment in everyday functioning. Structural MRI (sMRI) is a non-invasive neuroimaging technology that visualizes the brain's structure, providing precise information on its anatomy and potential abnormalities. This paper investigates the role of multidimensional Convolutional Neural Network (CNN) architectures (1D-CNN, 2D-CNN, and 3D-CNN) applied to the discrete wavelet transform (DWT) subbands of sMRI data. The 1D-CNN uses energy features extracted from the CD subband of the sMRI data; the energy feature, computed as the sum of gradient magnitudes of the CD subband, highlights diagonal high-frequency elements associated with schizophrenia. The 2D-CNN uses the CH subband produced by the DWT, enabling feature extraction from horizontal high-frequency coefficients of the sMRI data. The 3D-CNN uses the CV subband, leading to volumetric feature extraction from vertical high-frequency coefficients. Feature extraction in the DWT domain captures textural changes, edges, and coarse and fine details present in sMRI data, from which multidimensional features are extracted for classification. Through a maximum-voting technique, the proposed model combines the multidimensional CNN models to optimize schizophrenia classification. The proposed model generalizes convincingly across the two datasets, improving classification accuracy. The multidimensional CNN architectures achieve average accuracies of 93.2%, 95.8%, and 98.0%, respectively, while the proposed model achieves an average accuracy of 98.9%.
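As an editorial illustration of the workflow described above, the minimal Python sketch below shows how a 2-D DWT splits a slice into approximation (CA) and detail (CH, CV, CD) subbands, how a CD-subband energy feature can be computed as a sum of gradient magnitudes, and how a maximum-voting rule combines the three CNN branches' predictions. All names, dimensions, and data are placeholders; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): 2-D DWT subband extraction with PyWavelets
# and a majority-vote ensemble over per-branch binary predictions.
import numpy as np
import pywt

def dwt_subbands(slice_2d: np.ndarray, wavelet: str = "haar"):
    """Decompose one sMRI slice into approximation and detail subbands."""
    ca, (ch, cv, cd) = pywt.dwt2(slice_2d, wavelet)
    return ca, ch, cv, cd

def energy_feature(cd: np.ndarray) -> float:
    """Energy of the diagonal (CD) subband as the sum of gradient magnitudes."""
    gy, gx = np.gradient(cd)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

def majority_vote(pred_1d: np.ndarray, pred_2d: np.ndarray, pred_3d: np.ndarray) -> np.ndarray:
    """Combine binary predictions (0 = control, 1 = schizophrenia) from the three branches."""
    votes = np.stack([pred_1d, pred_2d, pred_3d], axis=0)
    return (votes.sum(axis=0) >= 2).astype(int)

if __name__ == "__main__":
    slice_2d = np.random.rand(128, 128)          # stand-in for one sMRI slice
    ca, ch, cv, cd = dwt_subbands(slice_2d)
    print("CD energy feature:", energy_feature(cd))
    # Hypothetical per-subject predictions from the 1D-, 2D-, and 3D-CNN branches.
    p1, p2, p3 = np.array([1, 0, 1]), np.array([1, 1, 0]), np.array([0, 1, 1])
    print("ensemble labels:", majority_vote(p1, p2, p3))
```

With an odd number of branches, the vote cannot tie, which is one practical reason to ensemble exactly three models.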
Affiliation(s)
- Vijayarajan Rajangam
- Centre for Healthcare Advancement, Innovation and Research, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
2
Bit S, Dey P, Maji A, for the Alzheimer's Disease Neuroimaging Initiative, Khan TK. MRI-based mild cognitive impairment and Alzheimer's disease classification using an algorithm of combination of variational autoencoder and other machine learning classifiers. J Alzheimers Dis Rep 2024; 8:1434-1452. PMID: 40034356; PMCID: PMC11863754; DOI: 10.1177/25424823241290694.
Abstract
Background: Correctly diagnosing mild cognitive impairment (MCI) and Alzheimer's disease (AD) is important for patient selection in drug discovery. Stage diagnosis using neuroimaging combined with cerebrospinal fluid and genetic biomarkers is expensive and time-consuming. Here, only structural magnetic resonance imaging (sMRI) scans from two internationally recognized datasets are used as input, for testing, and for independent validation of machine learning classification of dementia. Objective: We extract a reduced-dimensional latent feature vector from sMRI scans using a variational autoencoder (VAE). The objective is to classify AD, MCI, and control (CN) subjects using MRI alone, without any other information. Methods: The feature vectors extracted from MRI scans by the VAE are used as inputs to different advanced machine learning classifiers. Classification of AD/CN/MCI is conducted using the VAE output from MRI images and different artificial intelligence/machine learning classifier models in two cohorts. Results: Using only MRI scans, the primary goal of the study is to test the ability to classify AD from CN and MCI cases. The study achieved classification accuracies of 75.45% for AD versus CN (F1-score = 79.52%), 81.41% for AD versus MCI (F1-score = 87.06%), and 92.75% for autopsy-confirmed AD versus MCI (F1-score = 95.52%) in the test sets, and 86.16% for AD versus CN (F1-score = 92.03%) and 70.03% for AD versus MCI (F1-score = 82.1%) in the validation dataset. Conclusions: By avoiding data leakage, the autopsy-confirmed machine learning classification model was tested in two independent cohorts. External validation in an independent cohort improved the quality and novelty of the classification algorithm.
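To make the pipeline concrete, the hedged sketch below pairs a small variational autoencoder (whose latent mean is reused as the reduced-dimensional feature vector) with a downstream scikit-learn classifier. The architecture, dimensions, and loss weighting are illustrative assumptions, not the published model.

```python
# Minimal sketch (illustrative only): a small VAE whose latent mean is reused as
# a feature vector for a downstream scikit-learn classifier.
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    def __init__(self, in_dim: int = 4096, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_term = nn.functional.mse_loss(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

# After training, mu (the latent mean) serves as the low-dimensional feature vector:
#   feats = vae(x_flattened)[1].detach().numpy()
#   clf = sklearn.svm.SVC().fit(feats_train, labels_train)   # or any other classifier
```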
Affiliation(s)
- Arnab Maji
- Department of Chemistry, Indian Institute of Technology, Kanpur, UP, India
3
Yamaguchi H, Sugihara G, Shimizu M, Yamashita Y. Generative artificial intelligence model for simulating structural brain changes in schizophrenia. Front Psychiatry 2024; 15:1437075. PMID: 39429522; PMCID: PMC11486638; DOI: 10.3389/fpsyt.2024.1437075.
Abstract
Background: Recent advancements in generative artificial intelligence (AI) for image generation have presented significant opportunities for medical imaging, offering a promising way to generate realistic virtual medical images while ensuring patient privacy. Generating large numbers of virtual medical images with AI can augment training datasets for discriminative AI models, particularly in fields with limited data availability, such as neuroimaging. Current studies on generative AI in neuroimaging have mainly focused on disease discrimination; its potential for simulating complex phenomena in psychiatric disorders remains unknown. In this study, as an example of such simulation, we present a novel generative AI model that transforms magnetic resonance imaging (MRI) images of healthy individuals into images that resemble those of patients with schizophrenia (SZ) and explore its applications. Methods: We used anonymized public datasets from the Center for Biomedical Research Excellence (SZ, 71 patients; healthy subjects [HSs], 71 subjects) and the Autism Brain Imaging Data Exchange (autism spectrum disorder [ASD], 79 subjects; HSs, 105 subjects). We developed a model that transforms MRI images of HSs into MRI images of SZ patients using cycle generative adversarial networks. The efficacy of the transformation was evaluated using voxel-based morphometry to assess differences in brain region volumes and the accuracy of age prediction pre- and post-transformation. In addition, the model was examined for its applicability to simulating disease comorbidities and disease progression. Results: The model successfully transformed HS images into SZ images and identified brain volume changes consistent with existing case-control studies. We also applied this model to ASD MRI images, where simulations comparing SZ with and without an ASD background highlighted differences in brain structure due to comorbidity. Furthermore, simulating disease progression while preserving individual characteristics showcased the model's ability to reflect realistic disease trajectories. Discussion: These results suggest that our generative AI model can capture subtle changes in brain structures associated with SZ, providing a novel tool for visualizing brain changes in different diseases. The potential of this model extends beyond clinical diagnosis to advances in the simulation of disease mechanisms, which may ultimately contribute to the refinement of therapeutic strategies.
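The sketch below illustrates only the cycle-consistency term that drives a CycleGAN-style HS-to-SZ transformation; the toy 3-D generators and the omitted adversarial losses are assumptions for illustration, not the authors' network.

```python
# Illustrative sketch of the CycleGAN cycle-consistency idea with toy 3-D generators.
import torch
import torch.nn as nn

G_hs2sz = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1))   # healthy -> schizophrenia-like
G_sz2hs = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1))   # schizophrenia-like -> healthy

l1 = nn.L1Loss()

def cycle_consistency_loss(real_hs: torch.Tensor, real_sz: torch.Tensor) -> torch.Tensor:
    """||G_sz2hs(G_hs2sz(x)) - x|| + ||G_hs2sz(G_sz2hs(y)) - y||; in a full CycleGAN this is
    added to the adversarial losses (omitted here) so translations preserve subject anatomy."""
    return l1(G_sz2hs(G_hs2sz(real_hs)), real_hs) + l1(G_hs2sz(G_sz2hs(real_sz)), real_sz)

x = torch.randn(1, 1, 32, 32, 32)   # toy "healthy" volume
y = torch.randn(1, 1, 32, 32, 32)   # toy "patient" volume
print(cycle_consistency_loss(x, y).item())
```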
Affiliation(s)
- Hiroyuki Yamaguchi
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
- Department of Psychiatry, Yokohama City University School of Medicine, Yokohama, Japan
- Genichi Sugihara
- Department of Psychiatry and Behavioral Sciences, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Masaaki Shimizu
- Department of Psychiatry and Behavioral Sciences, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Yuichi Yamashita
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
4
Zhao C, Jiang R, Bustillo J, Kochunov P, Turner JA, Liang C, Fu Z, Zhang D, Qi S, Calhoun VD. Cross-cohort replicable resting-state functional connectivity in predicting symptoms and cognition of schizophrenia. Hum Brain Mapp 2024; 45:e26694. PMID: 38727014; PMCID: PMC11083889; DOI: 10.1002/hbm.26694.
Abstract
Schizophrenia (SZ) is a debilitating mental illness characterized by onset of psychosis in adolescence or early adulthood, positive and negative symptoms, and cognitive impairments. Despite a plethora of studies leveraging functional connectivity (FC) from functional magnetic resonance imaging (fMRI) to predict symptoms and cognitive impairments of SZ, the findings have exhibited great heterogeneity. We aimed to identify congruous and replicable connectivity patterns capable of predicting positive and negative symptoms as well as cognitive impairments in SZ. Predictable functional connections (FCs) were identified by employing an individualized prediction model, whose replicability was further evaluated across three independent cohorts (BSNIP, SZ = 174; COBRE, SZ = 100; FBIRN, SZ = 161). Across cohorts, we observed that altered FCs in the frontal-temporal-cingulate-thalamic network were replicable in predicting positive symptoms, while the sensorimotor network was predictive of negative symptoms. The temporal-parahippocampal network was consistently associated with reduced cognitive function. These 23 replicable FCs effectively distinguished SZ from healthy controls (HC) across the three cohorts (82.7%, 90.2%, and 86.1%). Furthermore, models built using these replicable FCs showed accuracies comparable to those of models built using whole-brain features in predicting symptoms/cognition of SZ across the three cohorts (r = .17-.33, p < .05). Overall, our findings provide new insights into the neural underpinnings of SZ symptoms/cognition and offer potential targets for further research and possible clinical interventions.
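For orientation, the following sketch shows a generic connectivity-based prediction setup: upper-triangle Pearson correlations between ROI time series as features and a cross-validated ridge regression of a symptom score. Data, ROI count, and the regression model are synthetic placeholders rather than the study's individualized prediction model.

```python
# Minimal sketch: functional connectivity features from ROI time series feeding a
# cross-validated ridge regression of a symptom score (synthetic data throughout).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_rois, n_timepoints = 100, 50, 150

def fc_features(ts: np.ndarray) -> np.ndarray:
    """Upper triangle of the ROI-by-ROI Pearson correlation matrix."""
    corr = np.corrcoef(ts.T)                      # ts: (timepoints, ROIs)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]

X = np.stack([fc_features(rng.standard_normal((n_timepoints, n_rois)))
              for _ in range(n_subjects)])
y = rng.standard_normal(n_subjects)              # stand-in symptom score

scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```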
Affiliation(s)
- Chunzhi Zhao
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Rongtao Jiang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut, USA
- Juan Bustillo
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, New Mexico, USA
- Peter Kochunov
- Department of Psychiatry and Behavioral Sciences, University of Texas Health Science Center Houston, Houston, Texas, USA
- Jessica A. Turner
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA
- Chuang Liang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Zening Fu
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Shile Qi
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Vince D. Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA
5
Patel K, Xie Z, Yuan H, Islam SMS, Xie Y, He W, Zhang W, Gottlieb A, Chen H, Giancardo L, Knaack A, Fletcher E, Fornage M, Ji S, Zhi D. Unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging. Commun Biol 2024; 7:414. PMID: 38580839; PMCID: PMC10997628; DOI: 10.1038/s42003-024-06096-7.
Abstract
Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) consisted of expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3-D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants' T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci, of which 60 were replicated. Twenty-six loci were not reported in earlier T1- and T2-IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results establish that unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes.
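The sketch below is a toy 3-D convolutional autoencoder in which the bottleneck vector plays the role of an unsupervised imaging phenotype; layer sizes, volume resolution, and training details are placeholders, and the GWAS stage is not shown.

```python
# Illustrative 3-D convolutional autoencoder: the 128-d bottleneck plays the role of
# an unsupervised imaging phenotype (layer sizes are arbitrary, not the paper's).
import torch
import torch.nn as nn

class Conv3dAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),              # 16 -> 32
        )

    def forward(self, x):
        z = self.encoder(x)        # z: the deep imaging phenotype vector
        return self.decoder(z), z

model = Conv3dAutoencoder()
volume = torch.randn(2, 1, 32, 32, 32)         # toy T1/T2 volumes
recon, z = model(volume)
loss = nn.functional.mse_loss(recon, volume)   # reconstruction loss used for training
print(recon.shape, z.shape, loss.item())
```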
Affiliation(s)
- Khush Patel
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Ziqian Xie
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Hao Yuan
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Yaochen Xie
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Wei He
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Wanheng Zhang
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Assaf Gottlieb
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Han Chen
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Luca Giancardo
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Alexander Knaack
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Evan Fletcher
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Myriam Fornage
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- McGovern Medical School, University of Texas Health Science Center, Houston, TX, 77030, USA
- Shuiwang Ji
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Degui Zhi
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
6
Zelger P, Brunner A, Zelger B, Willenbacher E, Unterberger SH, Stalder R, Huck CW, Willenbacher W, Pallua JD. Deep learning analysis of mid-infrared microscopic imaging data for the diagnosis and classification of human lymphomas. J Biophotonics 2023; 16:e202300015. PMID: 37578837; DOI: 10.1002/jbio.202300015.
Abstract
The present study presents an alternative analytical workflow that combines mid-infrared (MIR) microscopic imaging and deep learning to diagnose human lymphoma and differentiate between small and large cell lymphoma. We show that applying a deep learning approach to MIR hyperspectral data obtained from benign and malignant lymph node pathology yields high classification accuracy over the spectral region of 3900 to 850 cm-1. The accuracy is above 95% for every pairwise classification of malignant lymphoid tissues and still above 90% for the binary distinction between benign and malignant lymphoid tissue. These results demonstrate that preliminary diagnosis and subtyping of human lymphoma could be streamlined by applying a deep learning approach to MIR spectroscopic data.
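As a generic illustration of spectrum-level classification, the sketch below applies a small 1-D CNN to per-pixel MIR spectra; spectrum length, class count, and architecture are assumptions, not the study's configuration.

```python
# Illustrative 1-D CNN for per-pixel mid-infrared spectra (spectrum length and
# class count are placeholders, not the study's configuration).
import torch
import torch.nn as nn

n_wavenumbers, n_classes = 800, 3   # e.g., a spectrum sampled over 3900-850 cm^-1

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(32 * (n_wavenumbers // 4), n_classes),   # tissue-class logits
)

spectra = torch.randn(8, 1, n_wavenumbers)    # batch of 8 pixel spectra
logits = model(spectra)
print(logits.shape)                            # torch.Size([8, 3])
```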
Affiliation(s)
- P Zelger
- University Hospital of Hearing, Voice and Speech Disorders, Medical University of Innsbruck, Innsbruck, Austria
- A Brunner
- Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- B Zelger
- Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- E Willenbacher
- University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
- S H Unterberger
- Institute of Material-Technology, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- R Stalder
- Institute of Mineralogy and Petrography, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- C W Huck
- Institute of Analytical Chemistry and Radiochemistry, Innsbruck, Austria
- W Willenbacher
- University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
- Oncotyrol, Centre for Personalized Cancer Medicine, Innsbruck, Austria
- J D Pallua
- University Hospital for Orthopedics and Traumatology, Medical University of Innsbruck, Innsbruck, Austria
7
Steyaert S, Pizurica M, Nagaraj D, Khandelwal P, Hernandez-Boussard T, Gentles AJ, Gevaert O. Multimodal data fusion for cancer biomarker discovery with deep learning. Nat Mach Intell 2023; 5:351-362. PMID: 37693852; PMCID: PMC10484010; DOI: 10.1038/s42256-023-00633-5.
Abstract
Technological advances now make it possible to study a patient from multiple angles with high-dimensional, high-throughput, multi-scale biomedical data. In oncology, massive amounts of data are being generated, ranging from molecular and histopathology data to radiology and clinical records. The introduction of deep learning has significantly advanced the analysis of biomedical data. However, most approaches focus on single data modalities, leading to slow progress in methods for integrating complementary data types. Development of effective multimodal fusion approaches is becoming increasingly important, as a single modality might not be consistent and sufficient to capture the heterogeneity of complex diseases, tailor medical care, and improve personalised medicine. Many initiatives now focus on integrating these disparate modalities to unravel the biological processes involved in multifactorial diseases such as cancer. However, many obstacles remain, including a lack of usable data as well as methods for clinical validation and interpretation. Here, we cover these current challenges and reflect on opportunities through deep learning to tackle data sparsity and scarcity, multimodal interpretability, and standardisation of datasets.
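One fusion strategy such reviews discuss, intermediate (feature-level) fusion, can be illustrated with a toy model that encodes each modality separately and concatenates the embeddings before a shared prediction head; all dimensions and modality names below are placeholders, not a method from the paper.

```python
# Toy intermediate-fusion sketch: per-modality encoders whose embeddings are
# concatenated before a shared prediction head (dimensions are arbitrary).
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, dim_omics=200, dim_imaging=512, dim_clinical=20, emb=32, n_classes=2):
        super().__init__()
        self.enc_omics = nn.Sequential(nn.Linear(dim_omics, emb), nn.ReLU())
        self.enc_imaging = nn.Sequential(nn.Linear(dim_imaging, emb), nn.ReLU())
        self.enc_clinical = nn.Sequential(nn.Linear(dim_clinical, emb), nn.ReLU())
        self.head = nn.Linear(3 * emb, n_classes)

    def forward(self, omics, imaging, clinical):
        z = torch.cat([self.enc_omics(omics),
                       self.enc_imaging(imaging),
                       self.enc_clinical(clinical)], dim=1)
        return self.head(z)

model = FusionModel()
logits = model(torch.randn(4, 200), torch.randn(4, 512), torch.randn(4, 20))
print(logits.shape)   # torch.Size([4, 2])
```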
Affiliation(s)
- Sandra Steyaert
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Marija Pizurica
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Tina Hernandez-Boussard
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Department of Biomedical Data Science, Stanford University
- Andrew J Gentles
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Department of Biomedical Data Science, Stanford University
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Department of Biomedical Data Science, Stanford University
8
Sun H, Luo G, Lui S, Huang X, Sweeney J, Gong Q. Morphological fingerprinting: Identifying patients with first-episode schizophrenia using auto-encoded morphological patterns. Hum Brain Mapp 2023; 44:779-789. PMID: 36206321; PMCID: PMC9842922; DOI: 10.1002/hbm.26098.
Abstract
Although a large number of case-control statistical and machine learning studies have been conducted to investigate structural brain changes in schizophrenia, how best to measure and characterize structural abnormalities for use in classification algorithms remains an open question. In the current study, a convolutional 3D autoencoder specifically designed for discretized volumes was constructed and trained with segmented brains from 477 healthy individuals. A cohort containing 158 first-episode schizophrenia patients and 166 matched controls was fed into the trained autoencoder to generate auto-encoded morphological patterns. A classifier discriminating schizophrenia patients from healthy controls was built using 80% of the samples in this cohort by automated machine learning and validated on the remaining 20%, and the classifier was further validated on an independent cohort containing 77 first-episode schizophrenia patients and 58 matched controls acquired at a different resolution. The specially designed autoencoder allowed satisfactory recovery of the input. With the same feature dimension, the classifier trained with auto-encoded features outperformed the classifier trained with conventional morphological features by about 10 percentage points, achieving 73.44% accuracy and 0.80 AUC on the internal validation set and 71.85% accuracy and 0.77 AUC on the external validation set. Features automatically learned from the segmented brain can better identify schizophrenia patients from healthy controls, but further improvements are needed to establish a clinical diagnostic marker. Nevertheless, even with a limited sample size, the method proposed in the current study offers insight into the application of deep learning in psychiatric disorders.
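The sketch below illustrates only the downstream classification stage: a linear classifier trained on (assumed pre-computed) auto-encoded features and evaluated on an internal hold-out and an external cohort. The data are synthetic and the classifier is a stand-in for the automated machine learning used in the study.

```python
# Sketch of the classification stage only: features are assumed to come from a
# pretrained autoencoder; data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_internal = rng.standard_normal((324, 64))      # auto-encoded features, internal cohort
y_internal = rng.integers(0, 2, 324)             # 0 = control, 1 = first-episode patient
X_external = rng.standard_normal((135, 64))      # independent external cohort
y_external = rng.integers(0, 2, 135)

X_tr, X_val, y_tr, y_val = train_test_split(X_internal, y_internal,
                                            test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, X, y in [("internal validation", X_val, y_val),
                   ("external validation", X_external, y_external)]:
    pred = clf.predict(X)
    prob = clf.predict_proba(X)[:, 1]
    print(name, "acc:", accuracy_score(y, pred), "AUC:", roc_auc_score(y, prob))
```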
Affiliation(s)
- Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital of Sichuan University, Chengdu, China
- Guoting Luo
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Su Lui
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital of Sichuan University, Chengdu, China
- Xiaoqi Huang
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital of Sichuan University, Chengdu, China
- John Sweeney
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Department of Radiology, West China Xiamen Hospital of Sichuan University, Xiamen, China
9
Sadeghi D, Shoeibi A, Ghassemi N, Moridian P, Khadem A, Alizadehsani R, Teshnehlab M, Gorriz JM, Khozeimeh F, Zhang YD, Nahavandi S, Acharya UR. An overview of artificial intelligence techniques for diagnosis of Schizophrenia based on magnetic resonance imaging modalities: Methods, challenges, and future works. Comput Biol Med 2022; 146:105554. DOI: 10.1016/j.compbiomed.2022.105554.
10
Wang C, Li Y, Tsuboshita Y, Sakurai T, Goto T, Yamaguchi H, Yamashita Y, Sekiguchi A, Tachimori H. A high-generalizability machine learning framework for predicting the progression of Alzheimer's disease using limited data. NPJ Digit Med 2022; 5:43. PMID: 35414651; PMCID: PMC9005545; DOI: 10.1038/s41746-022-00577-x.
Abstract
Alzheimer's disease is a neurodegenerative disease that imposes a substantial financial burden on society. A number of machine learning studies have been conducted to predict the speed of its progression, which varies widely among individuals, in order to recruit fast progressors for future clinical trials. However, because data in this field are very limited, two problems remain unsolved: first, models built on limited data tend to overfit and generalize poorly; second, no cross-cohort evaluations have been done. Here, to suppress the overfitting caused by limited data, we propose a hybrid machine learning framework consisting of multiple convolutional neural networks that automatically extract image features from brain segments relevant to cognitive decline according to clinical findings, and a linear support vector classifier that uses the extracted image features together with non-image information to make robust final predictions. The experimental results indicate that our model achieves superior performance (accuracy: 0.88, area under the curve [AUC]: 0.95) compared with other state-of-the-art methods. Moreover, our framework demonstrates high generalizability in an evaluation on a completely different cohort dataset (accuracy: 0.84, AUC: 0.91) collected from a different population than that used for training.
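The hybrid idea, CNN-derived image features concatenated with non-image variables and fed to a linear support vector classifier, can be sketched as follows; feature counts, variables, and data are illustrative stand-ins rather than the published framework.

```python
# Sketch of the hybrid idea: concatenate CNN-extracted image features with
# non-image variables and classify with a linear SVM (synthetic stand-in data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects = 200
cnn_feats = rng.standard_normal((n_subjects, 3 * 16))    # e.g., 16 features from each of 3 brain segments
non_image = rng.standard_normal((n_subjects, 5))          # e.g., age, cognitive scores
X = np.hstack([cnn_feats, non_image])
y = rng.integers(0, 2, n_subjects)                        # toy label: 1 = fast progressor

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
clf.fit(X[:160], y[:160])
print("hold-out accuracy:", clf.score(X[160:], y[160:]))
```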
Affiliation(s)
- Caihua Wang
- Imaging Technology Center, FUJIFILM Corporation, Kanagawa, Japan
- Yuanzhong Li
- Imaging Technology Center, FUJIFILM Corporation, Kanagawa, Japan
- Takuya Sakurai
- Imaging Technology Center, FUJIFILM Corporation, Kanagawa, Japan
- Tsubasa Goto
- Imaging Technology Center, FUJIFILM Corporation, Kanagawa, Japan
- Hiroyuki Yamaguchi
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
- Department of Psychiatry, Yokohama City University School of Medicine, Yokohama, Japan
- Yuichi Yamashita
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
- Atsushi Sekiguchi
- Department of Behavioral Medicine, National Institute of Mental Health, National Center of Neurology and Psychiatry, Tokyo, Japan
- Hisateru Tachimori
- Department of Clinical Epidemiology, Translational Medical Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Endowed Course for Health System Innovation, Keio University School of Medicine, Tokyo, Japan
11
Hashimoto Y, Ogata Y, Honda M, Yamashita Y. Deep Feature Extraction for Resting-State Functional MRI by Self-Supervised Learning and Application to Schizophrenia Diagnosis. Front Neurosci 2021; 15:696853. PMID: 34512240; PMCID: PMC8429808; DOI: 10.3389/fnins.2021.696853.
Abstract
In this study, we propose a deep learning technique for functional MRI analysis. We introduce a novel self-supervised learning scheme, particularly useful for functional MRI, in which subject identity is used as the teacher signal of a neural network. The network is trained solely on functional MRI scans, and the training does not require any explicit labels. The proposed method demonstrated that each temporal volume of resting-state functional MRI contains enough information to identify the subject. The network learned a feature space in which the features clustered per subject for the test data as well as the training data, unlike features extracted by conventional methods such as region-of-interest (ROI) signal pooling and principal component analysis. In addition, by applying a simple linear classifier to the per-subject mean of the features (the "identity feature"), we demonstrated that the extracted features could contribute to schizophrenia diagnosis. The classification accuracy of the identity features was comparable to that of conventional functional connectivity. These results suggest that the proposed training scheme captures brain functioning related to the diagnosis of psychiatric disorders as well as the identity of the subject, and together they highlight the validity of the proposed technique as a design for self-supervised learning.
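The self-supervision scheme can be sketched as a classifier trained to predict subject identity from single volumes, with the per-subject mean of the learned features serving as the "identity feature"; the flattened inputs, architecture, and training loop below are illustrative assumptions only.

```python
# Illustrative sketch of subject-identity self-supervision: each fMRI volume is
# labeled with its subject ID, and the penultimate features averaged per subject
# become the "identity feature" (architecture and sizes are placeholders).
import torch
import torch.nn as nn

n_subjects, feat_dim, in_dim = 50, 64, 1000   # volumes flattened to 1000-d for brevity

backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, feat_dim), nn.ReLU())
id_head = nn.Linear(feat_dim, n_subjects)     # teacher signal = subject identity

volumes = torch.randn(320, in_dim)            # temporal volumes pooled across subjects
subject_ids = torch.randint(0, n_subjects, (320,))

optim = torch.optim.Adam(list(backbone.parameters()) + list(id_head.parameters()), lr=1e-3)
for _ in range(5):                             # a few toy training steps
    logits = id_head(backbone(volumes))
    loss = nn.functional.cross_entropy(logits, subject_ids)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Identity feature: per-subject mean of the learned features, reusable downstream
# (e.g., as input to a simple linear classifier for diagnosis).
with torch.no_grad():
    feats = backbone(volumes)
    identity_feature_subj0 = feats[subject_ids == 0].mean(dim=0)
print(identity_feature_subj0.shape)            # torch.Size([64])
```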
Affiliation(s)
- Yuki Hashimoto
- Department of Information Medicine, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Kodaira, Japan
- Yousuke Ogata
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan
- Manabu Honda
- Department of Information Medicine, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Kodaira, Japan
- Yuichi Yamashita
- Department of Information Medicine, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Kodaira, Japan