1
Saponaro S, Lizzi F, Serra G, Mainas F, Oliva P, Giuliano A, Calderoni S, Retico A. Deep learning based joint fusion approach to exploit anatomical and functional brain information in autism spectrum disorders. Brain Inform 2024; 11:2. [PMID: 38194126] [PMCID: PMC10776521] [DOI: 10.1186/s40708-023-00217-4]
Abstract
BACKGROUND The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether the combination of structural and functional MRI might improve the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD). MATERIAL AND METHODS We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects aged between 5 and 40 years, including 680 subjects with ASD and 703 TD from 35 different acquisition sites. We extracted morphometric and functional brain features from MRI scans with the FreeSurfer and CPAC analysis packages, respectively. Then, due to the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model, consisting of a neural network that generates a fixed-length feature representation of the data of each modality (FR-NN) and a dense neural network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple-source data integration. The main advantage of the latter is that the loss is propagated back to the FR-NN during training, thus creating informative feature representations for each data modality. Then, a C-NN, with the number of layers and neurons per layer optimized during model training, performs the ASD-TD discrimination. The performance was evaluated by computing the area under the Receiver Operating Characteristic curve (AUC) within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified with the SHAP explainability framework. RESULTS AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or only functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that brain changes in individuals with ASD tend to occur in regions belonging to the Default Mode Network and the Social Brain. CONCLUSIONS Our results demonstrate that the multimodal joint fusion approach outperforms the classification results obtained with data acquired by a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.
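The evaluation metric used in this abstract, the area under the ROC curve, can be computed directly from classifier scores via the rank (Mann-Whitney U) statistic: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The sketch below is a minimal pure-Python illustration of that metric, not the authors' implementation; the function name `roc_auc` is ours.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic.

    labels: iterable of 0/1 class labels; scores: classifier outputs.
    Counts pairwise positive-vs-negative score comparisons; ties count
    as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating scorer yields 1.0, a random one about 0.5; in a nested cross-validation this statistic would be computed on each outer test fold.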
Affiliation(s)
- Sara Saponaro
- Medical Physics School, University of Pisa, Pisa, Italy
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
- Francesca Lizzi
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
- Giacomo Serra
- Department of Physics, University of Cagliari, Cagliari, Italy
- INFN, Cagliari Division, Cagliari, Italy
- Francesca Mainas
- INFN, Cagliari Division, Cagliari, Italy
- Department of Computer Science, University of Pisa, Pisa, Italy
- Piernicola Oliva
- INFN, Cagliari Division, Cagliari, Italy
- Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Sassari, Italy
- Alessia Giuliano
- Unit of Medical Physics, Pisa University Hospital "Azienda Ospedaliero-Universitaria Pisana", Pisa, Italy
- Sara Calderoni
- Developmental Psychiatry Unit - IRCCS Stella Maris Foundation, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
2
Alharthi AG, Alzahrani SM. Do it the transformer way: A comprehensive review of brain and vision transformers for autism spectrum disorder diagnosis and classification. Comput Biol Med 2023; 167:107667. [PMID: 37939407] [DOI: 10.1016/j.compbiomed.2023.107667]
Abstract
Autism spectrum disorder (ASD) is a condition observed in children who display abnormal patterns of interaction, behavior, and communication with others. Despite extensive research efforts, the underlying causes of this neurodevelopmental disorder and its biomarkers remain unknown. However, advancements in artificial intelligence and machine learning have improved clinicians' ability to diagnose ASD. This review paper investigates various MRI modalities to identify distinct features that characterize individuals with ASD compared with typical control subjects. The review then explores deep learning models for ASD diagnosis, including convolutional neural networks (CNNs), autoencoders, graph convolutions, attention networks, and other models. CNNs and their variations are particularly effective due to their capacity to learn structured image representations and identify reliable biomarkers for brain disorders. Computer vision transformers often employ CNN architectures with transfer learning techniques, such as fine-tuning and layer freezing, to enhance image classification performance, surpassing traditional machine learning models. This review paper contributes in three main ways. Firstly, it provides a comprehensive overview of a recommended architecture for using vision transformers in the systematic ASD diagnostic process. To this end, the paper investigates various pre-trained vision architectures, such as VGG, ResNet, Inception, InceptionResNet, DenseNet, and Swin models, that were fine-tuned for ASD diagnosis and classification. Secondly, it discusses the vision transformers of the 2020s, such as BiT, ViT, MobileViT, and ConvNeXt, and the application of transfer learning methods in relation to their prospective practicality in ASD classification. Thirdly, it explores brain transformers that are pre-trained on medically rich data and MRI neuroimaging datasets. The paper recommends a systematic architecture for ASD diagnosis using brain transformers. It also reviews recently developed brain transformer-based models, such as METAFormer, Com-BrainTF, Brain Network, ST-Transformer, STCAL, BolT, and BrainFormer, discussing their deep transfer learning architectures and results in ASD detection. Additionally, the paper summarizes and discusses brain-related transformers for various brain disorders, such as MSGTN, STAGIN, and MedTransformer, in relation to their potential usefulness in ASD. The study suggests that developing specialized transformer-based models, following the success of natural language processing (NLP), can offer new directions for image classification problems in ASD brain biomarker learning and classification. By incorporating the attention mechanism, treating MRI modalities as sequence prediction tasks trained on brain disorder classification problems, and fine-tuning on ASD datasets, brain transformers show great promise in ASD diagnosis.
Affiliation(s)
- Asrar G Alharthi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Saudi Arabia
- Salha M Alzahrani
- Department of Computer Science, College of Computers and Information Technology, Taif University, Saudi Arabia
3
Li T, Xu Y, Wu T, Charlton JR, Bennett KM, Al-Hindawi F. BlobCUT: A Contrastive Learning Method to Support Small Blob Detection in Medical Imaging. Bioengineering (Basel) 2023; 10:1372. [PMID: 38135963] [PMCID: PMC10740534] [DOI: 10.3390/bioengineering10121372]
Abstract
Medical imaging-based biomarkers derived from small objects (e.g., cell nuclei) play a crucial role in medical applications. However, detecting and segmenting small objects (a.k.a. blobs) remains a challenging task. In this research, we propose a novel 3D small blob detector called BlobCUT. BlobCUT is an unpaired image-to-image (I2I) translation model that falls under the Contrastive Unpaired Translation paradigm. It employs a blob synthesis module to generate synthetic 3D blobs with corresponding masks, which are incorporated into the iterative model training as ground truth. The I2I translation process is designed with two constraints: (1) a convexity consistency constraint that relies on Hessian analysis to preserve the geometric properties and (2) an intensity distribution consistency constraint based on Kullback-Leibler divergence to preserve the intensity distribution of blobs. BlobCUT learns the inherent noise distribution from the target noisy blob images and performs image translation from the noisy domain to the clean domain, effectively functioning as a denoising process to support blob identification. To validate the performance of BlobCUT, we evaluate it on a 3D simulated dataset of blobs and a 3D MRI dataset of mouse kidneys. We conduct a comparative analysis involving six state-of-the-art methods. Our findings reveal that BlobCUT exhibits superior performance and training efficiency, utilizing only 56.6% of the training time required by the state-of-the-art BlobDetGAN. This underscores the effectiveness of BlobCUT in accurately segmenting small blobs while achieving notable gains in training efficiency.
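The intensity-distribution consistency constraint described above is based on the Kullback-Leibler divergence between intensity distributions. As a minimal illustration (not the BlobCUT code; the function name and the histogram-normalization details are our assumptions), the KL divergence between two intensity histograms can be computed like this:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as unnormalized histograms with matching bins.

    eps guards the logarithm against empty bins in q.
    """
    if len(p) != len(q):
        raise ValueError("histograms must have the same number of bins")
    total_p, total_q = sum(p), sum(q)
    return sum(
        (pi / total_p) * math.log((pi / total_p + eps) / (qi / total_q + eps))
        for pi, qi in zip(p, q)
        if pi > 0
    )
```

The divergence is zero for identical distributions and grows as they diverge, which is what makes it usable as a consistency penalty during translation.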
Affiliation(s)
- Teng Li
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Yanzhe Xu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Teresa Wu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Jennifer R. Charlton
- Division of Nephrology, Department of Pediatrics, University of Virginia, Charlottesville, VA 22903, USA
- Kevin M. Bennett
- Department of Radiology, Washington University, St. Louis, MO 63130, USA
- Firas Al-Hindawi
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
4
Alharthi AG, Alzahrani SM. Multi-Slice Generation sMRI and fMRI for Autism Spectrum Disorder Diagnosis Using 3D-CNN and Vision Transformers. Brain Sci 2023; 13:1578. [PMID: 38002538] [PMCID: PMC10670036] [DOI: 10.3390/brainsci13111578]
Abstract
Researchers have explored various potential indicators of ASD, including changes in brain structure and activity, genetics, and immune system abnormalities, but no definitive indicator has been found yet. Therefore, this study aims to investigate ASD indicators using two types of magnetic resonance images (MRI), structural (sMRI) and functional (fMRI), and to address the issue of limited data availability. Transfer learning is a valuable technique when working with limited data, as it utilizes knowledge gained from a pre-trained model in a domain with abundant data. This study proposed the use of four vision transformers, namely ConvNeXt, MobileNet, Swin, and ViT, using sMRI modalities. The study also investigated the use of a 3D-CNN model with sMRI and fMRI modalities. Our experiments involved different methods of generating data and extracting slices from raw 3D sMRI and 4D fMRI scans along the axial, coronal, and sagittal brain planes. To evaluate our methods, we utilized a standard neuroimaging dataset called NYU from the ABIDE repository to classify ASD subjects from typical control subjects. The performance of our models was evaluated against several baselines, including studies that implemented VGG and ResNet transfer learning models. Our experimental results validate the effectiveness of the proposed multi-slice generation with the 3D-CNN and transfer learning methods, as they achieved state-of-the-art results. In particular, results from the 50 middle fMRI slices with the 3D-CNN showed particular promise for ASD classification, obtaining a maximum accuracy of 0.8710 and an F1-score of 0.8261 when using the mean of the 4D images across the axial, coronal, and sagittal planes. Additionally, using all fMRI slices except those at the beginning and end of the brain views helped reduce irrelevant information and showed good performance, with an accuracy of 0.8387 and an F1-score of 0.7727. Lastly, transfer learning with the ConvNeXt model achieved higher results than the other transformers when using the 50 middle sMRI slices along the axial, coronal, and sagittal planes.
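The accuracy and F1-scores quoted above are standard binary-classification metrics derived from the confusion matrix: accuracy is the fraction of correct predictions, and F1 is the harmonic mean of precision and recall. A minimal pure-Python sketch (the function name is ours, not from the paper):

```python
def accuracy_and_f1(y_true, y_pred, positive=1):
    """Accuracy and F1-score for a binary classification task.

    F1 is the harmonic mean of precision (tp / (tp + fp)) and
    recall (tp / (tp + fn)), with empty denominators treated as 0.
    """
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```

Reporting both is informative because, unlike accuracy, F1 is sensitive to class imbalance between ASD and control subjects.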
Affiliation(s)
- Salha M. Alzahrani
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
5
Rodriguez U, Deddah T, Kim SH, Shen M, Botteron KN, Louis Collins D, Dager SR, Estes AM, Evans AC, Hazlett HC, McKinstry R, Shultz RT, Piven J, Dang Q, Styner M, Prieto JC. IcoConv: Explainable brain cortical surface analysis for ASD classification. Shape Med Imaging (2023) 2023; 14350:248-258. [PMID: 38425723] [PMCID: PMC10902712] [DOI: 10.1007/978-3-031-46914-5_20]
Abstract
In this study, we introduce a novel approach for the analysis and interpretation of 3D shapes, particularly applied in the context of neuroscientific research. Our method captures 2D perspectives from various vantage points of a 3D object. These perspectives are subsequently analyzed using 2D Convolutional Neural Networks (CNNs), uniquely modified with custom pooling mechanisms. We sought to assess the efficacy of our approach through a binary classification task involving subjects at high risk for Autism Spectrum Disorder (ASD). The task entailed differentiating between high-risk positive and high-risk negative ASD cases. To do this, we employed brain attributes such as cortical thickness, surface area, and extra-axial cerebrospinal fluid measurements. We then mapped these measurements onto the surface of a sphere and subsequently analyzed them via our bespoke method. One distinguishing feature of our method is the pooling of data from diverse views using our icosahedron convolution operator. This operator facilitates the efficient sharing of information between neighboring views. A significant contribution of our method is the generation of gradient-based explainability maps, which can be visualized on the brain surface. The insights derived from these explainability maps align with prior research findings, particularly those detailing the brain regions typically impacted by ASD. Our approach thereby substantiates the known understanding of this disorder while potentially unveiling novel areas of study.
Affiliation(s)
- Mark Shen
- University of North Carolina, Chapel Hill, NC
- Quyen Dang
- University of North Carolina, Chapel Hill, NC
6
Teng J, Mi C, Shi J, Li N. Brain disease research based on functional magnetic resonance imaging data and machine learning: a review. Front Neurosci 2023; 17:1227491. [PMID: 37662098] [PMCID: PMC10469689] [DOI: 10.3389/fnins.2023.1227491]
Abstract
Brain diseases, including neurodegenerative and neuropsychiatric diseases, have long plagued the lives of the affected populations and caused a huge burden on public health. Functional magnetic resonance imaging (fMRI) is an excellent neuroimaging technology for measuring brain activity, which provides new insight to help clinicians diagnose brain diseases. In recent years, machine learning methods have displayed superior performance in diagnosing brain diseases compared to conventional methods, attracting great attention from researchers. This paper reviews representative research on machine learning methods for brain disease diagnosis based on fMRI data from the past three years, focusing on the four most actively studied brain diseases: Alzheimer's disease/mild cognitive impairment, autism spectrum disorders, schizophrenia, and Parkinson's disease. We summarize these 55 articles from multiple perspectives, including the effect of sample size, extracted features, feature selection methods, classification models, validation methods, and the corresponding accuracies. Finally, we analyze these articles and introduce future research directions to provide neuroimaging scientists and researchers in the interdisciplinary fields of computing and medicine with new ideas for AI-aided brain disease diagnosis.
Affiliation(s)
- Jing Teng
- School of Control and Computer Engineering, North China Electric Power University, Beijing, China
- Chunlin Mi
- School of Control and Computer Engineering, North China Electric Power University, Beijing, China
- Jian Shi
- Department of Hematology and Critical Care Medicine, The Third Xiangya Hospital of Central South University, Changsha, China
- Na Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, China
7
Helmy E, Elnakib A, ElNakieb Y, Khudri M, Abdelrahim M, Yousaf J, Ghazal M, Contractor S, Barnes GN, El-Baz A. Role of Artificial Intelligence for Autism Diagnosis Using DTI and fMRI: A Survey. Biomedicines 2023; 11:1858. [PMID: 37509498] [PMCID: PMC10376963] [DOI: 10.3390/biomedicines11071858]
Abstract
Autism spectrum disorder (ASD) encompasses a wide range of conditions characterized by difficulties with social skills, repetitive activities, speech, and nonverbal communication. The Centers for Disease Control (CDC) estimates that 1 in 44 American children currently suffer from ASD. The current gold standard for ASD diagnosis is based on behavioral observational tests by clinicians, which suffer from being subjective and time-consuming and afford only late detection (a child must have a mental age of at least two to apply for an observation report). Alternatively, brain imaging, more specifically magnetic resonance imaging (MRI), has proven its ability to assist in fast, objective, and early ASD diagnosis and detection. With the recent advances in artificial intelligence (AI) and machine learning (ML) techniques, sufficient tools have been developed for both automated ASD diagnosis and early detection. More recently, the development of deep learning (DL), a young subfield of AI based on artificial neural networks (ANNs), has successfully enabled the processing of brain MRI data with improved ASD diagnostic abilities. This survey focuses on the role of AI in autism diagnostics and detection based on two basic MRI modalities: diffusion tensor imaging (DTI) and functional MRI (fMRI). In addition, the survey outlines the basic findings of DTI and fMRI in autism. Furthermore, recent techniques for ASD detection using DTI and fMRI are summarized and discussed. Finally, emerging tendencies are described. The results of this study show how useful AI is for early, objective ASD detection and diagnosis. More AI solutions that have the potential to be used in healthcare settings will be introduced in the future.
Affiliation(s)
- Eman Helmy
- Department of Diagnostic Radiology, Faculty of Medicine, Mansoura University, Elgomheryia Street, Mansoura 3512, Egypt
- Ahmed Elnakib
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Yaser ElNakieb
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Khudri
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mostafa Abdelrahim
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Gregory Neal Barnes
- Department of Neurology, Pediatric Research Institute, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
8
Zhai H, Lv X, Hou Z, Tong X, Bu F. MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion. Math Biosci Eng 2023; 20:14096-14116. [PMID: 37679127] [DOI: 10.3934/mbe.2023630]
Abstract
With the rise of multi-modal methods, multi-modal knowledge graphs have become a better choice for storing human knowledge. However, knowledge graphs often suffer from incompleteness due to the infinite and constantly updating nature of knowledge, and thus the task of knowledge graph completion has been proposed. Existing multi-modal knowledge graph completion methods mostly rely on either embedding-based representations or graph neural networks, and there is still room for improvement in terms of interpretability and the ability to handle multi-hop tasks. Therefore, we propose a new method for multi-modal knowledge graph completion. Our method aims to learn multi-level graph structural features to fully explore hidden relationships within the knowledge graph and to improve reasoning accuracy. Specifically, we first use a Transformer architecture to separately learn data representations for the image and text modalities. Then, with the help of multi-modal gating units, we filter out irrelevant information and perform feature fusion to obtain a unified encoding of knowledge representations. Furthermore, we extract multi-level path features using a width-adjustable sliding window and learn structural feature information in the knowledge graph using graph convolutional operations. Finally, we use a scoring function to evaluate the probability of the truthfulness of encoded triplets and to complete the prediction task. To demonstrate the effectiveness of the model, we conduct experiments on two publicly available datasets, FB15K-237-IMG and WN18-IMG, and achieve improvements of 1.8% and 0.7%, respectively, in the Hits@1 metric.
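The Hits@1 metric reported above is a standard link-prediction measure: the fraction of test triplets for which the scoring function ranks the correct entity first among all candidates. A minimal sketch of the general Hits@k form (the function name is ours, not from the paper):

```python
def hits_at_k(ranks, k=1):
    """Hits@k for knowledge graph completion.

    ranks: for each test triplet, the 1-based rank assigned to the
    correct entity by the scoring function. Returns the fraction of
    triplets whose correct entity lands in the top k.
    """
    if not ranks:
        raise ValueError("no ranks given")
    return sum(r <= k for r in ranks) / len(ranks)
```

Hits@k is monotone in k, so Hits@1 is the strictest variant; it is usually reported alongside mean reciprocal rank.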
Affiliation(s)
- Hanming Zhai
- School of Information Network Security, People's Public Security University of China, Beijing 100038, China
- Xiaojun Lv
- Institute of Computing Technology, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
- Zhiwen Hou
- School of Information Network Security, People's Public Security University of China, Beijing 100038, China
- Xin Tong
- School of Information Network Security, People's Public Security University of China, Beijing 100038, China
- Fanliang Bu
- School of Information Network Security, People's Public Security University of China, Beijing 100038, China
9
Nematollahi MA, Jahangiri S, Asadollahi A, Salimi M, Dehghan A, Mashayekh M, Roshanzamir M, Gholamabbas G, Alizadehsani R, Bazrafshan M, Bazrafshan H, Bazrafshan Drissi H, Shariful Islam SM. Body composition predicts hypertension using machine learning methods: a cohort study. Sci Rep 2023; 13:6885. [PMID: 37105977] [PMCID: PMC10140285] [DOI: 10.1038/s41598-023-34127-6]
Abstract
We used machine learning methods to investigate whether body composition indices predict hypertension. Data from a cohort study were used, and 4663 records were included (2156 male, 1099 with hypertension; age range 35-70 years). Body composition analysis was done using bioelectrical impedance analysis (BIA); weight, basal metabolic rate, total and regional fat percentage (FATP), and total and regional fat-free mass (FFM) were measured. We used machine learning methods such as Support Vector Classifier, Decision Tree, Stochastic Gradient Descent Classifier, Logistic Regression, Gaussian Naïve Bayes, K-Nearest Neighbor, Multi-Layer Perceptron, Random Forest, Gradient Boosting, Histogram-based Gradient Boosting, Bagging, Extra Tree, AdaBoost, Voting, and Stacking to classify the investigated cases and find the features most relevant to hypertension. FATP, AFFM, BMR, FFM, TRFFM, AFATP, LFATP, and older age were the top features in hypertension prediction. Arm FFM, basal metabolic rate, total FFM, trunk FFM, leg FFM, and male gender were inversely associated with hypertension, while total FATP, arm FATP, leg FATP, older age, trunk FATP, and female gender were directly associated with hypertension. The AutoMLP, stacking, and voting methods had the best performance for hypertension prediction, achieving accuracy rates of 90%, 84%, and 83%, respectively. By using machine learning methods, we found that BIA-derived body composition indices predict hypertension with acceptable accuracy.
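The voting ensemble mentioned above combines the predictions of several base classifiers; in its hard-voting form, each sample receives the label predicted by the majority of the base models. A minimal pure-Python sketch (not the study's implementation; breaking ties in favor of the first-listed classifier is our choice):

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble over per-classifier prediction lists.

    predictions: a list of lists, one per base classifier, each holding
    that classifier's predicted label for every sample. Returns one
    label per sample; ties go to the earliest-seen label, i.e. the
    first-listed classifier's vote.
    """
    n_samples = len(predictions[0])
    if any(len(p) != n_samples for p in predictions):
        raise ValueError("all classifiers must predict every sample")
    voted = []
    for i in range(n_samples):
        votes = [p[i] for p in predictions]
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted
```

Soft voting, averaging predicted probabilities instead of counting labels, is a common alternative when the base models are well calibrated.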
Affiliation(s)
- Soodeh Jahangiri
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Arefeh Asadollahi
- Non Communicable Diseases Research Center, Fasa University of Medical Sciences, Fasa, Iran
- Maryam Salimi
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Bone and Joint Diseases Research Center, Department of Orthopedic Surgery, Shiraz University of Medical Science, Shiraz, Iran
- Azizallah Dehghan
- Non Communicable Diseases Research Center, Fasa University of Medical Sciences, Fasa, Iran
- Mina Mashayekh
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Mohamad Roshanzamir
- Department of Computer Engineering, Faculty of Engineering, Fasa University, Fasa, 74617-81189, Iran
- Ghazal Gholamabbas
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- Hanieh Bazrafshan
- Department of Neurology, Clinical Neurology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamed Bazrafshan Drissi
- Cardiovascular Research Center, Shiraz University of Medical Sciences, PO Box: 71348-14336, Shiraz, Iran
- Sheikh Mohammed Shariful Islam
- Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, Geelong, VIC, Australia
- Cardiovascular Division, The George Institute for Global Health, Newtown, Australia
- Sydney Medical School, University of Sydney, Camperdown, Australia
10
Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. Sensors (Basel) 2023; 23:634. [PMID: 36679430] [PMCID: PMC9862413] [DOI: 10.3390/s23020634]
Abstract
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient's symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind black-box deep learning models, revealing how their decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly in applications with medical imaging.
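One widely used model-agnostic technique in the XAI family this survey covers is permutation feature importance: shuffle a single input feature and measure how much a performance metric degrades. The sketch below is a generic pure-Python illustration, not code from the survey; the function names and the row-callable `model` interface are our assumptions.

```python
import random

def permutation_importance(model, X, y, metric, feature, n_repeats=10, seed=0):
    """Mean drop in `metric` after shuffling one feature column.

    `model` is any callable mapping a feature row to a prediction.
    A large drop means the model relies on that feature; a drop near
    zero means the feature is effectively ignored.
    """
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - metric(y, [model(row) for row in shuffled]))
    return sum(drops) / n_repeats

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy model that uses only feature 0 and ignores feature 1.
def model(row):
    return 1 if row[0] > 0 else 0

X = [[1, 5], [2, 1], [-1, 4], [-3, 2], [1, 0], [-2, 3]]
y = [1, 1, 0, 0, 1, 0]
imp_used = permutation_importance(model, X, y, accuracy, feature=0)
imp_ignored = permutation_importance(model, X, y, accuracy, feature=1)
```

The ignored feature gets zero importance because shuffling it cannot change the model's output, which is the intuition behind perturbation-based explanations in general.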
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- The Laboratory for Imagery Vision and Artificial Intelligence, Ecole de Technologie Superieure, 1100 Rue Notre Dame O, Montreal, QC H3C 1K3, Canada
- Jihao Peng
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Jian Xu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Ahmed Bouridane
- Centre for Data Analytics and Cybersecurity, University of Sharjah, Sharjah 27272, United Arab Emirates
11
Rana A, Dumka A, Singh R, Panda MK, Priyadarshi N. A Computerized Analysis with Machine Learning Techniques for the Diagnosis of Parkinson's Disease: Past Studies and Future Perspectives. Diagnostics (Basel) 2022; 12:2708. [PMID: 36359550] [PMCID: PMC9689408] [DOI: 10.3390/diagnostics12112708]
Abstract
According to the World Health Organization (WHO), Parkinson's disease (PD) is a neurodegenerative disease of the brain that causes motor symptoms including slower movement, rigidity, tremor, and imbalance, in addition to other problems such as Alzheimer's disease (AD), psychiatric problems, insomnia, anxiety, and sensory abnormalities. Techniques including artificial intelligence (AI), machine learning (ML), and deep learning (DL) have been established for the classification of PD and normal controls (NC) with similar therapeutic appearances in order to address these problems and improve the diagnostic procedure for PD. In this article, we examine a literature survey of research articles published up to September 2022 in order to present an in-depth analysis of the datasets, modalities, experimental setups, and architectures that have been applied in the diagnosis of this disease. This analysis covers a total of 217 research publications, with a list of the various datasets, methodologies, and features. These findings suggest that ML/DL methods and novel biomarkers hold promise for application in medical decision-making, leading to a more methodical and thorough detection of PD. Finally, we highlight the challenges and provide appropriate recommendations on selecting approaches that might be used for subgrouping and connection analysis with structural magnetic resonance imaging (sMRI), DaTSCAN, and single-photon emission computerized tomography (SPECT) data for future Parkinson's research.
Affiliation(s)
- Arti Rana
- Computer Science & Engineering, Veer Madho Singh Bhandari Uttarakhand Technical University, Dehradun 248007, Uttarakhand, India
- Ankur Dumka
- Department of Computer Science and Engineering, Women Institute of Technology, Dehradun 248007, Uttarakhand, India
- Department of Computer Science & Engineering, Graphic Era Deemed to be University, Dehradun 248001, Uttarakhand, India
- Rajesh Singh
- Division of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, Uttarakhand, India
- Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Manoj Kumar Panda
- Department of Electrical Engineering, G.B. Pant Institute of Engineering and Technology, Pauri 246194, Uttarakhand, India
- Neeraj Priyadarshi
- Department of Electrical Engineering, JIS College of Engineering, Kolkata 741235, West Bengal, India