1. Li Y, Liu X, Zhou J, Li F, Wang Y, Liu Q. Artificial intelligence in traditional Chinese medicine: advances in multi-metabolite multi-target interaction modeling. Front Pharmacol 2025; 16:1541509. PMID: 40303920; PMCID: PMC12037568; DOI: 10.3389/fphar.2025.1541509.
Abstract
Traditional Chinese Medicine (TCM) utilizes multi-metabolite and multi-target interventions to address complex diseases, providing advantages over single-target therapies. However, the active metabolites, therapeutic targets, and especially the combination mechanisms remain unclear. The integration of advanced data analysis and nonlinear modeling capabilities of artificial intelligence (AI) is driving the transformation of TCM into precision medicine. This review concentrates on the application of AI in TCM target prediction, including multi-omics techniques, TCM-specialized databases, machine learning (ML), deep learning (DL), and cross-modal fusion strategies. It also critically analyzes persistent challenges such as data heterogeneity, limited model interpretability, causal confounding, and insufficient robustness validation in practical applications. To enhance the reliability and scalability of AI in TCM target prediction, future research should prioritize continuous optimization of the AI algorithms using zero-shot learning, end-to-end architectures, and self-supervised contrastive learning.
Affiliation(s)
- Qingzhong Liu
- Department of Clinical Laboratory, Shanghai Municipal Hospital of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
2. Lei B, Cai G, Zhu Y, Wang T, Dong L, Zhao C, Hu X, Zhu H, Lu L, Feng F, Feng M, Wang R. Self-Supervised Multi-Scale Multi-Modal Graph Pool Transformer for Sellar Region Tumor Diagnosis. IEEE J Biomed Health Inform 2025; 29:2758-2771. PMID: 39527410; DOI: 10.1109/jbhi.2024.3496700.
Abstract
Sellar region tumors are brain tumors confined to the sellar region and affect the central nervous system. Early diagnosis of sellar region tumor subtypes helps clinicians determine the best treatment and improve patient recovery. Magnetic resonance imaging (MRI) has proven to be an effective tool for the early detection of sellar region tumors. However, diagnosis remains challenging because the available datasets are small and imbalanced. To overcome these challenges, we propose a novel self-supervised multi-scale multi-modal graph pool Transformer (MMGPT) network that enhances multi-modal fusion of small and imbalanced MRI data of sellar region tumors. MMGPT strengthens feature interaction between multi-modal images, which makes the model more robust. A contrastive-learning-equipped auto-encoder (CAE), trained via self-supervised learning (SSL), is adopted to learn finer-grained distinctions between samples, and the pre-trained CAE transfers its knowledge to the downstream tasks. Finally, a hybrid loss is employed to relieve the performance degradation caused by data imbalance. Experimental results show that the proposed method outperforms state-of-the-art methods and obtains higher accuracy and AUC in the classification of sellar region tumors.
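The contrastive component of such self-supervised pipelines can be illustrated with a generic NT-Xent loss of the kind commonly used in contrastive learning; this NumPy sketch uses our own naming and is not the paper's CAE implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are two views of the same sample (e.g. two
    modalities or augmentations); every other pairing is a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)              # 2N x d
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())
```

Matched views receive a lower loss than mismatched ones, which is what drives the encoder to pull representations of the same sample together.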
3. da Costa Nascimento JJ, Marques AG, do Nascimento Souza L, de Mattos Dourado Junior CMJ, da Silva Barros AC, de Albuquerque VHC, de Freitas Sousa LF. A novel generative model for brain tumor detection using magnetic resonance imaging. Comput Med Imaging Graph 2025; 121:102498. PMID: 39985841; DOI: 10.1016/j.compmedimag.2025.102498.
Abstract
Brain tumors kill thousands of people worldwide each year, and early identification through diagnosis is essential for monitoring and treating patients. The proposed study introduces a new method based on intelligent computational cells that segment the tumor region with high precision. The method uses deep learning to detect brain tumors with the "You Only Look Once" (YOLOv8) framework, followed by a fine-tuning stage at the end of the network in which intelligent computational cells traverse the detected region and segment the edges of the brain tumor. In addition, the method uses a classification pipeline that combines a set of feature extractors and classifiers with grid search to find the best combination and the best parameters for the dataset. The method obtained accuracy above 98% for region detection, above 99% for brain tumor segmentation, and above 98% for binary tumor classification, with segmentation taking less than 1 s, surpassing the state of the art on the same database and demonstrating its effectiveness. The approach also fuses data from different databases to classify both the presence of tumor in MRI images and the patient's life expectancy. The segmentation and classification steps are validated against the literature, with comparisons restricted to works that used the same dataset. The method further includes a generative AI component that produces a pre-diagnosis from the input data through a Large Language Model (LLM), and it can be used in systems that aid medical imaging diagnosis. As contributions, this study employs new detection models combined with innovative methods based on digital image processing to improve segmentation metrics, uses data fusion of two tumor datasets to enhance classification performance, and applies LLMs to refine the pre-diagnosis obtained after classification. Thus, this study proposes a Computer-Aided Diagnosis (CAD) method combining digital image processing, CNNs, and LLMs.
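The extractor-classifier grid search in the classification pipeline can be sketched as follows; the extractors and classifiers here (identity features, nearest centroid, 1-NN) are simple stand-ins, not the paper's components:

```python
import numpy as np
from itertools import product

def nearest_centroid(Xtr, ytr, Xte):
    """Predict the class of the nearest class centroid."""
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    return np.array([min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xte])

def one_nn(Xtr, ytr, Xte):
    """Predict the label of the single nearest training sample."""
    return np.array([ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))] for x in Xte])

def grid_search(X, y, extractors, classifiers, seed=0):
    """Score every (feature extractor, classifier) pair on a held-out
    split and return the best combination with its accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    tr, te = idx[: len(X) // 2], idx[len(X) // 2 :]
    best, best_acc = None, -1.0
    for (ename, extract), (cname, classify) in product(extractors.items(), classifiers.items()):
        Z = extract(X)
        acc = float(np.mean(classify(Z[tr], y[tr], Z[te]) == y[te]))
        if acc > best_acc:
            best, best_acc = (ename, cname), acc
    return best, best_acc
```

In practice the search would also sweep each classifier's hyperparameters, but the selection logic is the same.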
Affiliation(s)
- Adriell Gomes Marques
- Instituto Federal de Educação, Ciência e Tecnologia do Ceará - Campus Fortaleza, Fortaleza, 60040-531, CE, Brazil
4. Chen C, Mat Isa NA, Liu X. A review of convolutional neural network based methods for medical image classification. Comput Biol Med 2025; 185:109507. PMID: 39631108; DOI: 10.1016/j.compbiomed.2024.109507.
Abstract
This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNN in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main techniques of CNN applied to medical image classification, which is also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNN has great potential in medical image classification tasks and has achieved good results, clinical application is still difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
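Transfer learning, one of the main techniques this review covers, typically freezes a pretrained feature extractor and retrains only the classifier head. In this NumPy sketch a fixed random ReLU projection stands in for the pretrained CNN backbone; all names and dimensions are illustrative:

```python
import numpy as np

def make_frozen_backbone(in_dim, width=32, seed=0):
    """Stand-in for a frozen pretrained feature extractor: a fixed
    random ReLU projection whose weights are never updated."""
    W = np.random.default_rng(seed).normal(size=(in_dim, width))
    return lambda X: np.maximum(X @ W, 0.0)

def train_linear_head(F, y, lr=0.1, steps=1000):
    """Train only the classification head (logistic regression fitted
    by plain gradient descent) on the frozen backbone's features."""
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
        g = p - y                                 # logistic-loss gradient signal
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

The design point is that only `w` and `b` are updated; the backbone's weights stay fixed, which is what makes transfer learning effective on small medical datasets.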
Affiliation(s)
- Chao Chen
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China
- Nor Ashidi Mat Isa
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Xin Liu
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
5. Amjad U, Raza A, Fahad M, Farid D, Akhunzada A, Abubakar M, Beenish H. Context aware machine learning techniques for brain tumor classification and detection - A review. Heliyon 2025; 11:e41835. PMID: 39906822; PMCID: PMC11791217; DOI: 10.1016/j.heliyon.2025.e41835.
Abstract
Background: Machine learning has tremendous potential in acute medical care, particularly for precise medical diagnosis, prediction, and classification of brain tumors. Malignant gliomas, with their aggressive growth and dismal prognosis, stand out among brain tumor types. Recent advances in understanding the genetic abnormalities that underlie these tumors have shed light on their histo-pathological and biological characteristics, which supports better classification and prognosis.
Objectives: This review aims to predict gene alterations and establish structured correlations among various tumor types, extending the prediction of genetic mutations and structures using the latest machine learning techniques. Specifically, it focuses on multiple modalities of Magnetic Resonance Imaging (MRI) and histopathology, utilizing Convolutional Neural Networks (CNNs) for image processing and analysis.
Methods: The review covers the most recent developments in MRI and histology image processing across multiple tumor classes, including glioma, meningioma, pituitary tumors, oligodendroglioma, and astrocytoma. It identifies challenges in tumor classification, segmentation, datasets, and modalities across various neural network architectures, and a competitive analysis assesses CNN performance. Furthermore, it applies K-means clustering to predict genetic structure, gene clusters, and molecular alterations for various tumor types and grades, e.g., glioma, meningioma, pituitary tumors, oligodendroglioma, and astrocytoma.
Results: CNN and KNN structures, with their ability to extract salient features from image data, prove effective in tumor classification and segmentation, surmounting challenges in image analysis. The competitive analysis reveals that CNNs outperform other algorithms on publicly available datasets, suggesting their potential for precise tumor diagnosis and treatment planning.
Conclusion: Machine learning, especially through CNN and SVM algorithms, demonstrates significant potential in the accurate diagnosis and classification of brain tumors based on imaging and histo-pathological data. Further advances in this area hold promise for improving the accuracy and efficiency of intra-operative tumor diagnosis and treatment.
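The K-means clustering applied for gene-cluster prediction is standard Lloyd's algorithm; a minimal NumPy version (illustrative, not the review's code):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and
    centroid update until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

For gene-expression data, `X` would hold one row per gene (or per sample) and `k` the number of hypothesized molecular subgroups.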
Affiliation(s)
- Usman Amjad
- NED University of Engineering and Technology, Karachi, Pakistan
- Asif Raza
- Sir Syed University of Engineering and Technology, Karachi, Pakistan
- Muhammad Fahad
- Karachi Institute of Economics and Technology, Karachi, Pakistan
- Adnan Akhunzada
- College of Computing and IT, University of Doha for Science and Technology, Qatar
- Muhammad Abubakar
- Muhammad Nawaz Shareef University of Engineering and Technology, Multan, Pakistan
- Hira Beenish
- Karachi Institute of Economics and Technology, Karachi, Pakistan
6. Zhang Z, Liu Z, Ning L, Martin A, Xiong J. Representation of Imprecision in Deep Neural Networks for Image Classification. IEEE Trans Neural Netw Learn Syst 2025; 36:1199-1212. PMID: 37948150; DOI: 10.1109/tnnls.2023.3329712.
Abstract
Quantification and reduction of uncertainty in deep-learning techniques have received much attention, but how to characterize the imprecision caused by such uncertainty has largely been ignored. In some tasks, we would rather obtain an imprecise result than bear the cost of an error. For this purpose, we investigate the representation of imprecision in deep learning (RIDL) based on the theory of belief functions (TBF). First, the labels of some training images are reconstructed using the learning mechanism of neural networks to characterize the imprecision in the training set. In this process, a label assignment rule is proposed to reassign one or more labels to each training image. When an image is assigned multiple labels, it indicates that the image may lie in an overlapping region of different categories from the feature perspective, or that the original label is wrong. Second, the images with multiple labels are rechecked. As a result, imprecision (multiple labels) caused by original labeling errors is corrected, while imprecision caused by insufficient knowledge is retained. Images with multiple labels are called imprecise ones and are considered to belong to meta-categories, i.e., unions of specific categories. Third, the deep network model is retrained on the reconstructed training set, and the test images are then classified. Finally, test images that the specific categories cannot distinguish are assigned to meta-categories to characterize the imprecision in the results. Experiments on several remarkable networks show that RIDL improves accuracy (AC) and reasonably represents imprecision in both the training and testing sets.
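One inexpensive way to produce set-valued (meta-category) predictions from ordinary softmax outputs is to keep the smallest set of classes whose cumulative probability clears a threshold. This sketch only conveys the idea of imprecise classification; the paper's belief-function machinery is more principled:

```python
import numpy as np

def meta_category(probs, threshold=0.9):
    """Return the smallest set of classes whose cumulative softmax
    probability reaches `threshold`; a set with more than one class
    marks the prediction as imprecise (a meta-category)."""
    order = np.argsort(probs)[::-1]               # classes by descending probability
    cum = np.cumsum(probs[order])
    cut = int(np.searchsorted(cum, threshold)) + 1
    return set(order[:cut].tolist())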
7. Chen Y, Lu W, Qin X, Wang J, Xie X. MetaFed: Federated Learning Among Federations With Cyclic Knowledge Distillation for Personalized Healthcare. IEEE Trans Neural Netw Learn Syst 2024; 35:16671-16682. PMID: 37506019; DOI: 10.1109/tnnls.2023.3297103.
Abstract
Federated learning (FL) has attracted increasing attention for building models without accessing raw user data, especially in healthcare. In real applications, however, different federations can seldom work together, for reasons such as data heterogeneity and distrust or absence of a central server. In this article, we propose a novel framework called MetaFed to facilitate trustworthy FL between different federations. MetaFed obtains a personalized model for each federation without a central server via the proposed cyclic knowledge distillation. Specifically, MetaFed treats each federation as a meta distribution and aggregates knowledge of each federation in a cyclic manner. Training is split into two parts: common knowledge accumulation and personalization. Comprehensive experiments on seven benchmarks demonstrate that MetaFed, without a server, achieves better accuracy than state-of-the-art methods (e.g., a 10%+ accuracy improvement over the baseline on the physical activity monitoring dataset PAMAP2) with lower communication costs. More importantly, MetaFed shows remarkable performance in real healthcare-related applications.
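The cyclic knowledge distillation can be caricatured with linear models standing in for networks: each federation nudges its model toward the previous federation's predictions on its own local data, and the model travels around the ring. The linear setup and all names here are our illustrative assumptions, not MetaFed's implementation:

```python
import numpy as np

def distill_step(student_w, teacher_w, X, lr=0.5):
    """One distillation step: move the student's linear predictions
    toward the teacher's predictions on local data X."""
    grad = X.T @ (X @ student_w - X @ teacher_w) / len(X)
    return student_w - lr * grad

def cyclic_round(weights, local_data, lr=0.5, steps=20):
    """One cycle: federation i distils knowledge from federation i-1,
    so the accumulated model travels around the ring of federations."""
    k = len(weights)
    for i in range(k):
        teacher = weights[(i - 1) % k]
        for _ in range(steps):
            weights[i] = distill_step(weights[i], teacher, local_data[i], lr)
    return weights
```

Repeated cycles pull the federations' models together without any central server, which is the mechanism behind the "common knowledge accumulation" phase.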
8. Ullah MS, Khan MA, Albarakati HM, Damaševičius R, Alsenan S. Multimodal brain tumor segmentation and classification from MRI scans based on optimized DeepLabV3+ and interpreted networks information fusion empowered with explainable AI. Comput Biol Med 2024; 182:109183. PMID: 39357134; DOI: 10.1016/j.compbiomed.2024.109183.
Abstract
Explainable artificial intelligence (XAI) aims to offer machine learning (ML) methods that people can comprehend and properly trust, and to create more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models and demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable-AI-based framework for segmenting and classifying brain tumors. The framework consists of two parts. In the first part, an encoder-decoder DeepLabv3+ architecture is implemented with Bayesian Optimization (BO) based hyperparameter initialization; multi-scale features are extracted through the Atrous Spatial Pyramid Pooling (ASPP) technique and passed to the output layer for tumor segmentation. In the second part, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. The features are fused using a serial approach, and classification is performed. BO-based hyperparameter optimization of the neural network classifiers further improves the classification results. An XAI method named LIME is implemented to check the interpretability of the proposed models. On the Figshare dataset, the framework obtained an average segmentation accuracy of 92.68% and a classification accuracy of 95.42%. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
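The serial fusion step, concatenating the two networks' feature vectors before classification, can be shown with a small NumPy sketch; the nearest-centroid probe and the synthetic features are ours, not the paper's:

```python
import numpy as np

def serial_fuse(f1, f2):
    """Serial fusion: concatenate two feature matrices along the
    feature axis, giving batch x (d1 + d2)."""
    return np.concatenate([f1, f2], axis=1)

def centroid_accuracy(F, y):
    """Nearest-centroid accuracy, used here as a cheap probe of how
    separable the classes are in a given feature space."""
    cents = {c: F[y == c].mean(axis=0) for c in np.unique(y)}
    pred = np.array([min(cents, key=lambda c: np.linalg.norm(f - cents[c])) for f in F])
    return float(np.mean(pred == y))
```

When each network captures a complementary part of the signal, the fused space separates classes that neither feature set can separate alone, which is the motivation for fusing IRB-96 and IRB-Self features.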
Affiliation(s)
- Muhammad Attique Khan
- Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia
- Hussain Mobarak Albarakati
- Computer and Network Engineering Department, College of Computing, Umm Al-Qura University, Makkah, 24382, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100, Gliwice, Poland
- Shrooq Alsenan
- Information Systems Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
9. Dutta TK, Nayak DR, Pachori RB. GT-Net: global transformer network for multiclass brain tumor classification using MR images. Biomed Eng Lett 2024; 14:1069-1077. PMID: 39220025; PMCID: PMC11362438; DOI: 10.1007/s13534-024-00393-0.
Abstract
Multiclass classification of brain tumors from magnetic resonance (MR) images is challenging due to high inter-class similarity. Convolutional neural networks (CNNs) have been widely adopted in recent studies, but conventional CNN architectures fail to capture the small lesion patterns of brain tumors. To tackle this issue, in this paper we propose a global transformer network, dubbed GT-Net, for multiclass brain tumor classification. GT-Net mainly comprises a global transformer module (GTM) introduced on top of a backbone network. A generalized self-attention block (GSB) is proposed to capture feature inter-dependencies across not only the spatial dimension but also the channel dimension, facilitating extraction of detailed tumor lesion information while suppressing less important information. Further, multiple GSB heads are used in the GTM to leverage global feature dependencies. We evaluate GT-Net on a benchmark dataset with several backbone networks, and the results demonstrate the effectiveness of the GTM. Comparison with state-of-the-art methods further validates the superiority of our model.
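A generic scaled dot-product self-attention block (not the paper's exact GSB) can be written in NumPy. Applying the same block to the transposed feature matrix turns spatial attention into channel attention, which is the intuition behind attending over both dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of tokens.

    Pass tokens = spatial positions for spatial attention, or X.T so
    that tokens = channels for channel-wise attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # token-to-token weights
    return A @ V, A
```

Each row of `A` is a probability distribution over tokens, so every output token is a weighted mixture of all the others, capturing the global dependencies the GTM is designed to exploit.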
Affiliation(s)
- Tapas Kumar Dutta
- School of Computer Science and Electronic Engineering, University of Surrey, Guildford, GU2 7XH, United Kingdom
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology Jaipur, Jaipur, Rajasthan 302017, India
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, Madhya Pradesh 453552, India
10. Ashimgaliyev M, Matkarimov B, Barlybayev A, Li RYM, Zhumadillayeva A. Accurate MRI-Based Brain Tumor Diagnosis: Integrating Segmentation and Deep Learning Approaches. Applied Sciences 2024; 14:7281. DOI: 10.3390/app14167281.
Abstract
Magnetic Resonance Imaging (MRI) is vital in diagnosing brain tumours, offering crucial insights into tumour morphology and precise localisation. Despite its pivotal role, accurately classifying brain tumours from MRI scans is inherently complex due to their heterogeneous characteristics. This study presents a novel integration of advanced segmentation methods with deep learning ensemble algorithms to enhance the classification accuracy of MRI-based brain tumour diagnosis. We conduct a thorough review of both traditional segmentation approaches and contemporary advancements in region-based and machine learning-driven segmentation techniques. This paper explores the utility of deep learning ensemble algorithms, capitalising on the diversity of model architectures to augment tumour classification accuracy and robustness. Through the synergistic amalgamation of sophisticated segmentation techniques and ensemble learning strategies, this research addresses the shortcomings of traditional methodologies, thereby facilitating more precise and efficient brain tumour classification.
Affiliation(s)
- Medet Ashimgaliyev
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan
- Bakhyt Matkarimov
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan
- Alibek Barlybayev
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan
- Higher School of Information Technology and Engineering, Astana International University, Astana 010008, Kazakhstan
- Rita Yi Man Li
- Department of Economics and Finance, Hong Kong Shue Yan University, Hong Kong, China
- Ainur Zhumadillayeva
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan
- Department of Computer Engineering, Astana IT University, Astana 010000, Kazakhstan
11. Abbas T, Fatima A, Shahzad T, Alharbi M, Khan MA, Ahmed A. Multidisciplinary cancer disease classification using adaptive FL in healthcare industry 5.0. Sci Rep 2024; 14:18643. PMID: 39128933; PMCID: PMC11317485; DOI: 10.1038/s41598-024-68919-1.
Abstract
Emerging Industry 5.0 designs promote artificial intelligence services and data-driven applications across multiple sites with varying ownership, which require special data protection and privacy considerations to prevent the disclosure of private information to outsiders. Federated learning therefore offers a way to improve machine-learning models without accessing the training data held at any single facility. In this research, we provide a self-adaptive framework for federated machine learning in healthcare intelligent systems. Our method accounts for the participating parties at various levels of healthcare-ecosystem abstraction. Each hospital trains its local model internally in a self-adaptive style and transmits it to a centralized server for global model optimization and communication-cycle reduction. To pose a multi-task optimization problem, we split the dataset into as many subsets as there are devices, and each device selects the most advantageous subset for every local iteration of the model. Our initial study demonstrates that the algorithm converges on a training dataset for various hospital and device counts. By merging a federated learning approach with advanced deep learning models, we can simply and accurately predict multidisciplinary cancer diseases in the human body, and the results of the federated approaches are validated for multidisciplinary cancer disease prediction in the smart healthcare industry 5.0. The proposed adaptive federated learning methodology achieved 90.0% accuracy, while the conventional federated learning approach achieved 87.30%; both exceed previous state-of-the-art methodologies for cancer disease prediction in the smart healthcare industry 5.0.
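The server-side aggregation in FedAvg-style federated learning, which frameworks like this build on, is a sample-count-weighted average of the client models; a minimal sketch, with hypothetical hospital sample counts:

```python
import numpy as np

def fedavg(local_weights, sample_counts):
    """Server-side FedAvg: combine client (e.g. hospital) model weights,
    weighting each update by the client's local sample count."""
    counts = np.asarray(sample_counts, dtype=float)
    coeffs = counts / counts.sum()
    return sum(c * np.asarray(w) for c, w in zip(coeffs, local_weights))
```

A hospital holding three times the data of another contributes three times the weight to the global model, so the averaged model tracks the pooled data distribution.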
Affiliation(s)
- Tahir Abbas
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Areej Fatima
- Department of Computer Science, Lahore Garrison University, Lahore, 54000, Pakistan
- Tariq Shahzad
- Department of Computer Sciences, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, 57000, Pakistan
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, 11, 11942, Alkharj, Saudi Arabia
- Muhammad Adnan Khan
- Department of Software, Faculty of Artificial Intelligence and Software, Gachon University, Seongnam-si, 13557, South Korea
- Arfan Ahmed
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, P.O. Box 24144, Doha, Qatar
12. Luckett PH, Olufawo MO, Park KY, Lamichhane B, Dierker D, Verastegui GT, Lee JJ, Yang P, Kim A, Butt OH, Chheda MG, Snyder AZ, Shimony JS, Leuthardt EC. Predicting post-surgical functional status in high-grade glioma with resting state fMRI and machine learning. J Neurooncol 2024; 169:175-185. PMID: 38789843; PMCID: PMC11269343; DOI: 10.1007/s11060-024-04715-1.
Abstract
PURPOSE: High-grade glioma (HGG) is the most common and deadly malignant glioma of the central nervous system. The current standard of care includes surgical resection of the tumor, which can lead to functional and cognitive deficits. The aim of this study is to develop models capable of predicting functional outcomes in HGG patients before surgery, facilitating improved disease management and informed patient care. METHODS: Adult HGG patients (N = 102) from the neurosurgery brain tumor service at Washington University Medical Center were retrospectively recruited. All patients completed structural neuroimaging and resting state functional MRI prior to surgery. Demographics, measures of resting state network functional connectivity (FC), tumor location, and tumor volume were used to train a random forest classifier to predict functional outcomes based on Karnofsky Performance Status (KPS < 70, KPS ≥ 70). RESULTS: The models achieved a nested cross-validation accuracy of 94.1% and an AUC of 0.97 in classifying KPS. The strongest predictors identified by the model included FC between somatomotor, visual, auditory, and reward networks. Based on location, the relation of the tumor to the dorsal attention, cingulo-opercular, and basal ganglia networks was a strong predictor of KPS. Age was also a strong predictor, whereas tumor volume was only a moderate one. CONCLUSION: This work demonstrates that machine learning can accurately classify postoperative functional outcomes in HGG patients prior to surgery. Our results suggest that both FC and the tumor's location relative to specific networks can serve as reliable predictors of functional outcomes, enabling personalized therapeutic approaches tailored to individual patients.
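The random-forest idea used here (bootstrapped training sets, random feature subsets, majority voting) can be illustrated with a toy NumPy version that uses one-split stumps instead of full trees; it is not the authors' pipeline:

```python
import numpy as np

def fit_stump(X, y, feat_idx):
    """Fit a one-split decision stump restricted to a feature subset."""
    best, best_acc = None, -1.0
    for j in feat_idx:
        t = np.median(X[:, j])
        left, right = y[X[:, j] <= t], y[X[:, j] > t]
        for ll, rl in ((0, 1), (1, 0)):
            acc = (np.sum(left == ll) + np.sum(right == rl)) / len(y)
            if acc > best_acc:
                best_acc, best = acc, (j, t, ll, rl)
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Toy random forest: bootstrap the rows and restrict each stump
    to a random half of the features."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        rows = rng.choice(len(X), size=len(X), replace=True)
        feats = rng.choice(X.shape[1], size=max(1, X.shape[1] // 2), replace=False)
        stumps.append(fit_stump(X[rows], y[rows], feats))
    return stumps

def forest_predict(X, stumps):
    """Majority vote over the stumps' binary predictions."""
    votes = sum(np.where(X[:, j] <= t, ll, rl) for j, t, ll, rl in stumps)
    return (votes > len(stumps) / 2).astype(int)
```

In the study, the rows would be patients, the features FC measures, demographics, and tumor properties, and the binary target the KPS < 70 vs KPS ≥ 70 split.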
Affiliation(s)
- Patrick H Luckett
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Michael O Olufawo
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Ki Yun Park
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Bidhan Lamichhane
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Center for Health Sciences, Oklahoma State University, Tulsa, OK, USA
- Donna Dierker
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- John J Lee
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Peter Yang
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Albert Kim
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Omar H Butt
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Milan G Chheda
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Abraham Z Snyder
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Joshua S Shimony
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Eric C Leuthardt
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Washington University in Saint Louis, St. Louis, MO, USA
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Mechanical Engineering and Materials Science, Washington University in Saint Louis, St. Louis, MO, USA
- Center for Innovation in Neuroscience and Technology, Washington University School of Medicine, St. Louis, MO, USA
- Brain Laser Center, Washington University School of Medicine, St. Louis, MO, USA
- National Center for Adaptive Neurotechnologies, Albany, NY, USA
13
|
Tan J, Zhang X, Qing C, Xu X. Fourier Domain Robust Denoising Decomposition and Adaptive Patch MRI Reconstruction. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:7299-7311. [PMID: 37015441 DOI: 10.1109/tnnls.2022.3222394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
The sparsity of the Fourier transform domain has been applied to magnetic resonance imaging (MRI) reconstruction in k-space. Although unsupervised adaptive patch optimization methods have shown promise compared to data-driven supervised methods, the following challenges exist in MRI reconstruction: 1) previous k-space MRI reconstruction tasks rarely consider noise interference in the acquisition process; 2) differences in transform domains should be resolved to achieve high-quality reconstruction of highly undersampled MRI data; and 3) robust patch dictionary learning problems are usually nonconvex and NP-hard, and alternate minimization methods are often computationally expensive. In this article, we propose a method for Fourier domain robust denoising decomposition and adaptive patch MRI reconstruction (DDAPR). DDAPR is a two-step optimization method for MRI reconstruction in the presence of noise and highly undersampled data. It includes the low-rank and sparse denoising reconstruction model (LSDRM) and the robust dictionary learning reconstruction model (RDLRM). In the first step, we propose LSDRM for different domains; for the optimization solution, the proximal gradient method is used to optimize LSDRM via singular value decomposition and soft-threshold algorithms. In the second step, we propose RDLRM, an effective adaptive patch method that introduces a low-rank and sparse penalty adaptive patch dictionary and uses a sparse rank-one matrix to approximate the undersampled data. The block coordinate descent (BCD) method is then used to optimize the variables; the BCD optimization process involves valid closed-form solutions. Extensive numerical experiments show that the proposed method outperforms previous compressed sensing- and deep learning-based image reconstruction methods.
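The soft-threshold algorithm mentioned in the abstract is the proximal operator of the ℓ1 penalty, the basic building block of proximal-gradient sparse recovery; a minimal sketch (function and variable names are ours, not the paper's):

```python
def soft_threshold(x, tau):
    """Proximal operator of tau * |x|: shrink x toward zero by tau."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Element-wise shrinkage of a coefficient vector, as applied inside
# each proximal-gradient iteration for the sparse component.
coeffs = [3.0, -0.5, 1.2, -4.0]
shrunk = [soft_threshold(c, 1.0) for c in coeffs]
```

The same shrinkage applied to singular values (rather than entries) gives the singular value thresholding step used for the low-rank component.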
|
14
|
Agrawal A, Maan V. Brain Tumor Classification of MRI Scans using Deep Learning Techniques. 2024 INTERNATIONAL CONFERENCE ON COMMUNICATION, COMPUTER SCIENCES AND ENGINEERING (IC3SE) 2024:1128-1133. [DOI: 10.1109/ic3se62002.2024.10593086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
Affiliation(s)
- Ayesha Agrawal
- Mody University of Science & Technology, Computer Science & Engineering, Lakshmangarh, India
| | - Vinod Maan
- Mody University of Science & Technology, Computer Science & Engineering, Lakshmangarh, India
| |
|
15
|
Yang C, Xue B, Tan KC, Zhang M. A Co-Training Framework for Heterogeneous Heuristic Domain Adaptation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:6863-6877. [PMID: 36269922 DOI: 10.1109/tnnls.2022.3212924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The purpose of this article is to address unsupervised domain adaptation (UDA), where a labeled source domain and an unlabeled target domain are given. Recent advanced UDA methods attempt to remove domain-specific properties by separating domain-specific information from domain-invariant representations, which relies heavily on the designed neural network structures. Meanwhile, they do not consider class-discriminative representations when learning domain-invariant representations. To this end, this article proposes a co-training framework for heterogeneous heuristic domain adaptation (CO-HHDA) to address the above issues. First, a heterogeneous heuristic network is introduced to model domain-specific characteristics. It allows the structure of the heuristic network to differ between domains to avoid underfitting or overfitting. Specifically, we initialize a small structure that is shared between domains and add a subnetwork for the domain that preserves rich specific information. Second, we propose a co-training scheme to train two classifiers, a source classifier and a target classifier, to enhance class-discriminative representations. The two classifiers are built on domain-invariant representations: the source classifier learns from the labeled source data, and the target classifier is trained on generated target pseudo-labeled data. The two classifiers teach each other during training with high-quality pseudo-labeled data. Meanwhile, an adaptive threshold is presented to select reliable pseudo-labels for each classifier. Empirical results on three commonly used benchmark datasets demonstrate that the proposed CO-HHDA outperforms state-of-the-art domain adaptation methods.
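Adaptive-threshold pseudo-label selection of the kind described above can be sketched as follows. This is a generic illustration under our own naming and rule (a base confidence threshold scaled by each predicted class's mean confidence), not the paper's exact criterion:

```python
def select_pseudo_labels(probs, base=0.9):
    """Keep target samples whose top-class confidence clears an adaptive,
    per-class threshold: base scaled by that class's mean confidence."""
    preds = [max(range(len(p)), key=p.__getitem__) for p in probs]
    confs = [max(p) for p in probs]
    mean_conf = {c: sum(cf for cf, pr in zip(confs, preds) if pr == c)
                    / preds.count(c)
                 for c in set(preds)}
    return [(i, preds[i]) for i, c in enumerate(confs)
            if c >= base * mean_conf[preds[i]]]

# Three target samples with softmax outputs over two classes; the
# low-confidence middle sample is filtered out.
selected = select_pseudo_labels([[0.95, 0.05], [0.60, 0.40], [0.20, 0.80]])
```

Scaling by the running per-class confidence lets the threshold relax for classes the model is still uncertain about, instead of discarding them wholesale.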
|
16
|
Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:5250-5282. [PMID: 38872535 DOI: 10.3934/mbe.2024232] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2024]
Abstract
The increasing global incidence of glioma tumors has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This research paper explores advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results, with models such as hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persist in the form of limited datasets, lack of external validation, and variations in grading formulations across diverse literature sources. Addressing these challenges by expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
| | - Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
| | - Ashim Saha
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
| |
|
17
|
Chen H, Luo H, Huang B, Jiang B, Kaynak O. Transfer Learning-Motivated Intelligent Fault Diagnosis Designs: A Survey, Insights, and Perspectives. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:2969-2983. [PMID: 37467093 DOI: 10.1109/tnnls.2023.3290974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/21/2023]
Abstract
Over the last decade, transfer learning has attracted a great deal of attention as a new learning paradigm, and fault diagnosis (FD) approaches based on it have been intensively developed to improve the safety and reliability of modern automation systems. Because of inevitable factors such as varying work environments, performance degradation of components, and heterogeneity among similar automation systems, FD methods with long-term applicability become attractive. Motivated by these facts, transfer learning has been an indispensable tool that endows FD methods with self-learning and adaptive abilities. After presenting the basic knowledge in this field, this survey article carries out a comprehensive review of transfer learning-motivated FD methods, whose two subclasses are developed based on knowledge calibration and knowledge compromise. Finally, some open problems, potential research directions, and conclusions are highlighted. Different from existing reviews of transfer learning, this survey focuses on how to utilize previous knowledge specifically for FD tasks, based on which three principles and a new classification strategy of transfer learning-motivated FD techniques are also presented. We hope that this work will constitute a timely contribution to transfer learning-motivated techniques on the FD topic.
|
18
|
Pereira FES, Jagatheesaperumal SK, Benjamin SR, Filho PCDN, Duarte FT, de Albuquerque VHC. Advancements in non-invasive microwave brain stimulation: A comprehensive survey. Phys Life Rev 2024; 48:132-161. [PMID: 38219370 DOI: 10.1016/j.plrev.2024.01.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Accepted: 01/07/2024] [Indexed: 01/16/2024]
Abstract
This survey provides a comprehensive insight into the world of non-invasive brain stimulation and focuses on the evolving landscape of deep brain stimulation through microwave research. Non-invasive brain stimulation techniques provide new prospects for comprehending and treating neurological disorders. We investigate the methods shaping the future of deep brain stimulation, emphasizing the role of microwave technology in this transformative journey. Specifically, we explore antenna structures and optimization strategies to enhance the efficiency of high-frequency microwave stimulation. These advancements can potentially revolutionize the field by providing a safer and more precise means of modulating neural activity. Furthermore, we address the challenges that researchers currently face in the realm of microwave brain stimulation. From safety concerns to methodological intricacies, this survey outlines the barriers that must be overcome to fully unlock the potential of this technology. This survey serves as a roadmap for advancing research in microwave brain stimulation, pointing out potential directions and innovations that promise to reshape the field.
Affiliation(s)
| | - Senthil Kumar Jagatheesaperumal
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza, 60455-970, Ceará, Brazil; Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi, 626005, Tamilnadu, India
| | - Stephen Rathinaraj Benjamin
- Department of Pharmacology and Pharmacy, Laboratory of Behavioral Neuroscience, Faculty of Medicine, Federal University of Ceará, Fortaleza, 60430-160, Ceará, Brazil
| | | | | | | |
|
19
|
Domadia SG, Thakkar FN, Ardeshana MA. Segmenting brain glioblastoma using dense-attentive 3D DAF 2. Phys Med 2024; 119:103304. [PMID: 38340694 DOI: 10.1016/j.ejmp.2024.103304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 12/18/2023] [Accepted: 01/29/2024] [Indexed: 02/12/2024] Open
Abstract
Precise delineation of brain glioblastomas or tumors through segmentation is pivotal in diagnosis, formulating treatment strategies, and evaluating therapeutic progress in patients. Precisely identifying brain glioblastoma within multimodal MRI scans poses a significant challenge in medical image analysis, as different intensity profiles are observed across the sub-regions, reflecting diverse tumor biological properties. Convolutional neural networks have displayed astounding performance in segmenting glioblastoma areas in recent years. This paper introduces an innovative methodology for brain glioblastoma segmentation that combines the Dense-Attention 3D U-Net network with a fusion strategy and the focal Tversky loss function. By fusing information from multiple-resolution segmentation maps, our model enhances its ability to discern intricate tumor boundaries. By incorporating the focal Tversky loss function, we effectively emphasize critical regions and mitigate class imbalance. Recursive Convolution Block 2 is proposed after fusion to ensure efficient utilization of all accessible features while maintaining rapid convergence. The network's effectiveness is assessed on the diverse BraTS 2020 and BraTS 2021 datasets. Results show a Dice similarity coefficient comparable to other methods, with increased efficiency and segmentation performance. Additionally, the architecture achieved an average Dice similarity coefficient of 82.4% and an average Hausdorff distance (HD95) of 10.426, demonstrating consistent performance improvement over baseline models such as U-Net, Attention U-Net, V-Net, and Res U-Net and indicating the effectiveness of the proposed architecture.
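The focal Tversky loss used above has a compact closed form: the Tversky index TI = TP / (TP + α·FN + β·FP) weights false negatives and false positives asymmetrically, and the loss (1 − TI)^γ focuses training on hard examples. A minimal sketch for a flat binary mask; the α, β, γ defaults are common choices in the literature, not necessarily the paper's:

```python
def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss over flat lists of probabilities and {0,1} labels.
    alpha penalizes false negatives, beta false positives; gamma < 1
    amplifies the gradient on poorly segmented examples."""
    tp = sum(p * t for p, t in zip(pred, target))          # soft true positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))    # soft false negatives
    fp = sum(p * (1 - t) for p, t in zip(pred, target))    # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

# A deliberately imbalanced toy prediction.
loss = focal_tversky_loss([0.5, 0.5], [1, 0])
```

With α > β, missing tumor voxels costs more than over-segmenting, which is why this loss is popular for small lesions.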
|
20
|
Chen W, Tan X, Zhang J, Du G, Fu Q, Jiang H. A robust approach for multi-type classification of brain tumor using deep feature fusion. Front Neurosci 2024; 18:1288274. [PMID: 38440396 PMCID: PMC10909817 DOI: 10.3389/fnins.2024.1288274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Accepted: 02/05/2024] [Indexed: 03/06/2024] Open
Abstract
Brain tumors can be classified into many different types based on their shape, texture, and location. Accurate diagnosis of brain tumor types can help doctors develop appropriate treatment plans to save patients' lives. It is therefore crucial to improve the accuracy of this classification system to assist doctors in treatment. We propose a deep feature fusion method based on convolutional neural networks to enhance the accuracy and robustness of brain tumor classification while mitigating the risk of over-fitting. First, the extracted features of three pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, are adjusted to ensure that all three models produce features of the same shape. Second, the three models are fine-tuned to extract features from brain tumor images. Third, pairwise summation of the extracted features is carried out to achieve feature fusion. Finally, brain tumors are classified based on the fused features. The public Figshare (Dataset 1) and Kaggle (Dataset 2) datasets are used to verify the reliability of the proposed method. Experimental results demonstrate that fusing ResNet101 and DenseNet121 features achieves the best performance, with classification accuracies of 99.18% and 97.24% on the Figshare and Kaggle datasets, respectively.
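The pairwise-summation fusion step described above reduces to element-wise addition once the feature shapes are aligned; a minimal sketch (function name and toy vectors are illustrative, not from the paper):

```python
def fuse_features(f1, f2):
    """Element-wise summation fusion of two equal-length feature vectors."""
    if len(f1) != len(f2):
        raise ValueError("features must be adjusted to the same shape first")
    return [a + b for a, b in zip(f1, f2)]

# Toy vectors standing in for aligned ResNet101 and DenseNet121 features.
fused = fuse_features([0.2, 1.0, -0.5], [0.8, -1.0, 0.5])
```

Summation fusion keeps the fused dimensionality equal to each backbone's, unlike concatenation, which doubles it and enlarges the downstream classifier.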
Affiliation(s)
- Wenna Chen
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
| | - Xinghua Tan
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
| | - Jincan Zhang
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
| | - Ganqin Du
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
| | - Qizhi Fu
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
| | - Hongwei Jiang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
| |
|
21
|
Yun WJ, Shin M, Mohaisen D, Lee K, Kim J. Hierarchical Deep Reinforcement Learning-Based Propofol Infusion Assistant Framework in Anesthesia. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:2510-2521. [PMID: 35853065 DOI: 10.1109/tnnls.2022.3190379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
This article aims to provide a hierarchical reinforcement learning (RL)-based solution for automated drug infusion. The learning policy is divided into the tasks of: 1) learning a trajectory generative model and 2) planning a policy model. The proposed deep infusion assistant policy gradient (DIAPG) model draws inspiration from adversarial autoencoders (AAEs) and learns latent representations of hypnotic depth trajectories. Given trajectories drawn from the generative model, the planning policy infers a dose of propofol for stable sedation of a patient under total intravenous anesthesia (TIVA) using propofol and remifentanil. Through extensive evaluation, the DIAPG model can effectively stabilize the bispectral index (BIS) and effect-site concentration given a potentially time-varying target sequence. The proposed DIAPG shows performance increases of 530% and 15% over a human expert and a standard reinforcement learning algorithm, respectively.
|
22
|
Rahman A, Debnath T, Kundu D, Khan MSI, Aishi AA, Sazzad S, Sayduzzaman M, Band SS. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024; 11:58-109. [PMID: 38617415 PMCID: PMC11007421 DOI: 10.3934/publichealth.2024004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2023] [Accepted: 12/18/2023] [Indexed: 04/16/2024] Open
Abstract
In recent years, machine learning (ML) and deep learning (DL) have been the leading approaches to solving various challenges in intelligent healthcare applications, such as disease prediction, drug discovery, and medical image analysis. Given the current progress in ML and DL, both hold promising potential to support healthcare. This study offers an exhaustive survey of ML and DL for healthcare systems, concentrating on vital state-of-the-art features, integration benefits, applications, prospects, and future guidelines. To conduct the research, we searched the most prominent journal and conference databases using distinct keywords to discover scholarly results. First, we present the most current and cutting-edge progress in ML- and DL-based analysis for smart healthcare in a compendious manner. Next, we cover the advancement of various services combining ML and DL, including ML-healthcare, DL-healthcare, and ML-DL-healthcare. We then present ML- and DL-based applications in the healthcare industry. Finally, we highlight research disputes and recommendations for further studies based on our observations.
Affiliation(s)
- Anichur Rahman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Tanoy Debnath
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Green University of Bangladesh, 220/D, Begum Rokeya Sarani, Dhaka-1207, Bangladesh
| | - Dipanjali Kundu
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Md. Saikat Islam Khan
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Airin Afroj Aishi
- Department of Computing and Information System, Daffodil International University, Savar, Dhaka, Bangladesh
| | - Sadia Sazzad
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Mohammad Sayduzzaman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Shahab S. Band
- Department of Information Management, International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Taiwan
| |
|
23
|
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
| | - Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
| | - Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
| | - M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
| | - Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.
| |
|
24
|
Yu L, Liu J, Wu Q, Wang J, Qu A. A Siamese-Transport Domain Adaptation Framework for 3D MRI Classification of Gliomas and Alzheimer's Diseases. IEEE J Biomed Health Inform 2024; 28:391-402. [PMID: 37955996 DOI: 10.1109/jbhi.2023.3332419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
Accurate and fully automated brain structure examination and prediction from 3D volumetric magnetic resonance imaging (MRI) is a necessary step in medical imaging analysis and can greatly assist clinical diagnosis. Traditional deep learning models suffer severe performance degradation when applied to clinically acquired unlabeled data, mainly because of domain discrepancies such as different device types and parameter settings during data acquisition. However, existing approaches focus on reducing domain discrepancies while ignoring the entanglement of semantic features and domain information. In this article, we explore the feature invariance of categories and domains in different projection spaces and propose a Siamese-Transport Domain Adaptation (STDA) method using joint optimal transport theory and contrastive learning for automatic 3D MRI classification and glioma multi-grade prediction. Specifically, the learning framework updates the distribution of features across domains and categories by training a Siamese transport network with an Optimal Cost Transfer Strategy (OCTS) and a Mutual Invariant Constraint (MIC) in two projective spaces to find multiple invariants in potential heterogeneity. We design three sets of transfer task scenarios with different source and target domains and demonstrate that STDA yields substantially higher generalization performance than other state-of-the-art unsupervised domain adaptation (UDA) methods. The method is applicable to 3D MRI data from glioma to Alzheimer's disease and has promising applications in the future clinical diagnosis and treatment of brain diseases.
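The optimal-transport machinery underlying such cross-domain alignment can be illustrated with entropy-regularized transport between two discrete distributions. This is a generic Sinkhorn sketch, not the paper's Optimal Cost Transfer Strategy; all names are ours:

```python
import math

def sinkhorn_plan(cost, a, b, reg=0.1, iters=200):
    """Entropy-regularized optimal transport: alternately rescale the
    kernel K = exp(-cost/reg) so the plan's row/column marginals match
    the histograms a and b."""
    n, m = len(a), len(b)
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two matching 2-bin histograms: mass should stay on the cheap diagonal.
plan = sinkhorn_plan([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
```

In domain adaptation, the rows and columns would index source and target features and `cost` their pairwise distances; the resulting plan defines soft correspondences for aligning the two distributions.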
|
25
|
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR IN BIOMEDICINE 2023; 36:e5014. [PMID: 37539775 DOI: 10.1002/nbm.5014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 07/12/2023] [Accepted: 07/13/2023] [Indexed: 08/05/2023]
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL), which alleviates the burden on radiologists and MR technologists and improves throughput. The easy accessibility of DL tools has resulted in a rapid increase in DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL in delivering accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these and other studies, we collate the prerequisites for developing and deploying DL models for brain MRI. We then delve into the guiding principles for developing good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
- Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
| | - Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
| | - Keerthi Sravan Ravi
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
| | - Gilberto Gonzalez
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
| | - Sairam Geethanath
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
| |
|
26
|
Aminizadeh S, Heidari A, Toumaj S, Darbandi M, Navimipour NJ, Rezaei M, Talebi S, Azad P, Unal M. The applications of machine learning techniques in medical data processing based on distributed computing and the Internet of Things. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 241:107745. [PMID: 37579550 DOI: 10.1016/j.cmpb.2023.107745] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 07/15/2023] [Accepted: 08/02/2023] [Indexed: 08/16/2023]
Abstract
Medical data processing has grown into a prominent topic in recent decades, with the primary goal of maintaining patient data via new information technologies, including the Internet of Things (IoT) and sensor technologies, which generate patient indexes in hospital data networks. Innovations like distributed computing, Machine Learning (ML), blockchain, chatbots, wearables, and pattern recognition can adequately enable the collection and processing of medical data for decision-making in the healthcare era. In particular, distributed computing assists experts in the disease diagnostic process by digesting huge volumes of data swiftly and producing personalized smart suggestions. On the other side, the world is currently confronting an outbreak of COVID-19, so early diagnosis techniques are crucial to lowering the fatality rate. ML systems are beneficial in aiding radiologists in examining the vast amount of medical images, but they demand a huge quantity of training data that must be unified for processing. Hence, developing Deep Learning (DL) confronts multiple issues, such as conventional data collection, quality assurance, knowledge exchange, privacy preservation, administrative laws, and ethical considerations. In this research, we convey an inclusive analysis of the most recent studies in distributed computing platform applications across five categorized platforms: cloud computing, edge, fog, IoT, and hybrid platforms. We evaluated 27 articles regarding the usage of the proposed frameworks, deployed methods, and applications, noting the advantages, drawbacks, and applied datasets, and screening for security mechanisms and the presence of Transfer Learning (TL) methods. As a result, most recent research (about 43%) used the IoT platform as the environment for the proposed architecture, and most studies (about 46%) were done in 2021. In addition, the most popular DL algorithm was the Convolutional Neural Network (CNN), at 19.4%. However technology changes, delivering appropriate therapy to patients remains the primary aim of healthcare-associated departments. Therefore, further studies are recommended to develop more functional architectures based on DL and distributed environments and to better evaluate present healthcare data analysis models.
Affiliation(s)
- Arash Heidari
- Department of Computer Engineering, Tabriz Branch, Islamic Azad University, Tabriz, Iran; Department of Software Engineering, Haliç University, Istanbul, Turkiye
- Shiva Toumaj
- Urmia University of Medical Sciences, Urmia, Iran
- Mehdi Darbandi
- Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimagusa 99628, Turkiye
- Nima Jafari Navimipour
- Department of Computer Engineering, Kadir Has University, Istanbul, Turkiye; Future Technology Research Center, National Yunlin University of Science and Technology, Douliou, Yunlin 64002, Taiwan
- Mahsa Rezaei
- Tabriz University of Medical Sciences, Faculty of Surgery, Tabriz, Iran
- Samira Talebi
- Department of Computer Science, University of Texas at San Antonio, TX, USA
- Poupak Azad
- Department of Computer Science, University of Manitoba, Winnipeg, Canada
- Mehmet Unal
- Department of Computer Engineering, Nisantasi University, Istanbul, Turkiye
27
Yang C, Cheung YM, Ding J, Tan KC, Xue B, Zhang M. Contrastive Learning Assisted-Alignment for Partial Domain Adaptation. IEEE Trans Neural Netw Learn Syst 2023; 34:7621-7634. [PMID: 35130173] [DOI: 10.1109/tnnls.2022.3145034]
Abstract
This work addresses unsupervised partial domain adaptation (PDA), in which the classes in the target domain are a subset of those in the source domain. The key challenges of PDA are how to leverage source samples in the shared classes to promote positive transfer and how to filter out irrelevant source samples to mitigate negative transfer. Existing PDA methods based on adversarial domain adaptation do not consider the loss of class-discriminative representation. To this end, this article proposes a contrastive learning-assisted alignment (CLA) approach for PDA that jointly aligns distributions across domains for better adaptation and reweights source instances to reduce the contribution of outliers. A contrastive learning-assisted conditional alignment (CLCA) strategy is presented for distribution alignment. CLCA first exploits contrastive losses to discover the class-discriminative information in both domains. It then employs a contrastive loss to match the clusters across the two domains based on adversarial domain learning. In this respect, CLCA attempts to reduce the domain discrepancy by matching both the class-conditional and marginal distributions. Moreover, a new reweighting scheme is developed to improve the quality of weight estimation, exploring information from both the source and the target domains. Empirical results on several benchmark datasets demonstrate that the proposed CLA outperforms existing state-of-the-art PDA methods.
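The class-discriminative contrastive objective that CLCA builds on can be illustrated with a minimal InfoNCE-style supervised contrastive loss. This is a generic sketch of the idea, not the authors' exact formulation; the function name and toy data are ours.

```python
import numpy as np

def contrastive_loss(z, labels, tau=0.5):
    """Supervised InfoNCE-style contrastive loss on embeddings z (n, d).

    Pulls same-class embeddings together and pushes different-class
    embeddings apart -- the kind of class-discriminative signal CLCA
    exploits. Generic illustration, not the paper's implementation.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                               # temperature-scaled similarities
    n = len(z)
    mask = ~np.eye(n, dtype=bool)                     # exclude self-pairs
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & mask[i]         # same-class positives
        if not pos.any():
            continue
        logits = sim[i][mask[i]]
        log_den = np.log(np.exp(logits).sum())        # log of the softmax denominator
        loss += -(sim[i][pos] - log_den).mean()
    return loss / n
```

With two tight, well-separated clusters, labels that match the clusters yield a lower loss than labels that cut across them, which is exactly the discriminative structure the alignment step relies on.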
28
Iiduka H. ϵ-Approximation of Adaptive Learning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization. IEEE Trans Neural Netw Learn Syst 2023; 34:8108-8115. [PMID: 35089865] [DOI: 10.1109/tnnls.2022.3142726]
Abstract
This brief considers constrained nonconvex stochastic finite-sum and online optimization in deep neural networks. Adaptive-learning-rate optimization algorithms (ALROAs), such as Adam, AMSGrad, and their variants, have been widely used for these optimizations because they are powerful and useful in theory and practice. Here, it is shown that the ALROAs are ϵ-approximations for these optimizations. We provide the learning rates, mini-batch sizes, numbers of iterations, and stochastic gradient complexities with which to achieve ϵ-approximations of the algorithms.
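For reference, the Adam update that these ALROAs generalize is only a few lines. This numpy sketch (ours, not taken from the brief) shows the bias-corrected moment estimates the convergence analysis reasons about:

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. state = (m, v, t); returns (new_theta, new_state)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad         # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2    # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)
```

Running a few hundred steps on f(x) = x² drives x toward the minimizer, illustrating the unconstrained special case of the optimization the brief analyzes.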
29
Luckett PH, Olufawo M, Lamichhane B, Park KY, Dierker D, Verastegui GT, Yang P, Kim AH, Chheda MG, Snyder AZ, Shimony JS, Leuthardt EC. Predicting survival in glioblastoma with multimodal neuroimaging and machine learning. J Neurooncol 2023; 164:309-320. [PMID: 37668941] [PMCID: PMC10522528] [DOI: 10.1007/s11060-023-04439-8]
Abstract
PURPOSE Glioblastoma (GBM) is the most common and aggressive malignant glioma, with an overall median survival of less than two years. The ability to predict survival before treatment in GBM patients would lead to improved disease management, clinical trial enrollment, and patient care. METHODS GBM patients (N = 133, mean age 60.8 years, median survival 14.1 months, 57.9% male) were retrospectively recruited from the neurosurgery brain tumor service at Washington University Medical Center. All patients completed structural neuroimaging and resting-state functional MRI (RS-fMRI) before surgery. Demographics, measures of cortical thickness (CT), and resting-state functional network connectivity (FC) were used to train a deep neural network to classify patients based on survival (<1 y, 1-2 y, >2 y). Permutation feature importance identified the strongest predictors of survival based on the trained models. RESULTS The models achieved a combined cross-validation and hold-out accuracy of 90.6% in classifying survival (<1 y, 1-2 y, >2 y). The strongest demographic predictors were age at diagnosis and sex. The strongest CT predictors of survival included the superior temporal sulcus, parahippocampal gyrus, pericalcarine, pars triangularis, and middle temporal regions. The strongest FC features primarily involved dorsal and inferior somatomotor, visual, and cingulo-opercular networks. CONCLUSION We demonstrate that machine learning can accurately classify survival in GBM patients based on multimodal neuroimaging before any surgical or medical intervention. These results were achieved without information regarding presenting symptoms, treatments, postsurgical outcomes, or tumor genomic information. Our results suggest GBMs have a global effect on the brain's structural and functional organization, which is predictive of survival.
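Permutation feature importance, the attribution method named in the abstract, is model-agnostic and easy to state: shuffle one feature column and measure the drop in accuracy. A generic sketch (the `ThresholdModel` in the usage note is an illustrative stand-in, not the study's trained network):

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = mean drop in accuracy when that
    feature's column is shuffled. `model` is any object with .predict(X).
    Generic sketch of the technique, not the study's exact pipeline."""
    rng = np.random.default_rng(seed)
    base = np.mean(model.predict(X) == y)        # baseline accuracy
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                # break the feature-target link
            scores.append(np.mean(model.predict(Xp) == y))
        imp[j] = base - np.mean(scores)          # importance = accuracy drop
    return imp
```

A model that only uses feature 0 will show a large drop when column 0 is shuffled and exactly zero drop for unused columns.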
Affiliation(s)
- Patrick H Luckett
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Michael Olufawo
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Bidhan Lamichhane
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Center for Health Sciences, Oklahoma State University, Tulsa, OK, 74136, USA
- Ki Yun Park
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Donna Dierker
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Peter Yang
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Albert H Kim
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Milan G Chheda
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
- Abraham Z Snyder
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Joshua S Shimony
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Eric C Leuthardt
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Tumor Center at Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Washington University in Saint Louis, St. Louis, MO, 63130, USA
- Department of Mechanical Engineering and Materials Science, Washington University in Saint Louis, St. Louis, MO, 63130, USA
- Center for Innovation in Neuroscience and Technology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Brain Laser Center, Washington University School of Medicine, St. Louis, MO, 63110, USA
- National Center for Adaptive Neurotechnologies, Albany, USA
30
Abdusalomov AB, Mukhiddinov M, Whangbo TK. Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging. Cancers (Basel) 2023; 15:4172. [PMID: 37627200] [PMCID: PMC10453020] [DOI: 10.3390/cancers15164172]
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor poses a major health risk for adults, since it can cause severe impairment of organ function and even death. These tumors vary widely in size, texture, and location. Magnetic resonance imaging (MRI) is a crucial tool for locating cancerous tumors. However, detecting brain tumors manually is a difficult and time-consuming activity that can lead to inaccuracies. To address this, we present a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved brain tumor detection system. The visual quality of the MRI scans is enhanced by image enhancement methods that apply different filters to the original images. To further improve the training of our proposed model, we apply data augmentation techniques to the openly accessible brain tumor dataset. The curated data include a wide variety of cases: 2548 glioma images, 2658 pituitary tumor images, 2582 meningioma images, and 2500 non-tumor images. We incorporated the Convolutional Block Attention Module (CBAM) into YOLOv7 to enhance its feature extraction capabilities, allowing better emphasis on salient regions linked with brain malignancies. To further improve the model's sensitivity, we added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core infrastructure. YOLOv7 now includes decoupled heads, which allow it to efficiently glean useful insights from a wide variety of data. In addition, a Bi-directional Feature Pyramid Network (BiFPN) is used to speed up multi-scale feature fusion and to better collect tumor-associated features. The outcomes verify the efficiency of our suggested method, which achieves higher overall tumor detection accuracy than previous state-of-the-art models. As a result, this framework has great potential as a decision-support tool for experts diagnosing brain tumors.
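The channel-attention half of CBAM can be sketched in a few lines: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and the sigmoid of their sum gates each channel. This is a from-scratch illustration with random stand-in weights (`W1`, `W2`), not the trained YOLOv7 module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, W1, W2):
    """CBAM-style channel attention for a feature map x of shape (C, H, W).

    A shared MLP (W1: reduction, W2: expansion) scores average- and
    max-pooled channel descriptors; the sigmoid of their sum gates
    each channel. Sketch of the module, not the paper's trained weights.
    """
    avg = x.mean(axis=(1, 2))                        # (C,) global average pooling
    mx = x.max(axis=(1, 2))                          # (C,) global max pooling
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)     # shared MLP with ReLU bottleneck
    scale = sigmoid(mlp(avg) + mlp(mx))              # (C,) per-channel gate in (0, 1)
    return x * scale[:, None, None]                  # reweight channels
```

Because the gate lies strictly in (0, 1), the module can only attenuate channels, never amplify them; salient channels are simply attenuated less.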
Affiliation(s)
- Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea
31
Ji Y, Gao Y, Bao R, Li Q, Liu D, Sun Y, Ye Y. Prediction of COVID-19 Patients' Emergency Room Revisit using Multi-Source Transfer Learning. IEEE Int Conf Healthc Inform 2023; 2023:138-144. [PMID: 38486663] [PMCID: PMC10939709] [DOI: 10.1109/ichi57859.2023.00028]
Abstract
The coronavirus disease 2019 (COVID-19) has led to a global pandemic of significant severity. In addition to its high contagiousness, COVID-19 can have a heterogeneous clinical course, ranging from asymptomatic carriage to severe and potentially life-threatening complications. Many patients have to revisit the emergency room (ER) within a short time after discharge, which significantly increases the workload for medical staff. Early identification of such patients is crucial for helping physicians focus on treating life-threatening cases. In this study, we obtained Electronic Health Records (EHRs) of 3,210 encounters from 13 affiliated ERs within the University of Pittsburgh Medical Center between March 2020 and January 2021. We leveraged a Natural Language Processing tool, ScispaCy, to extract clinical concepts and used the 1,001 most frequent concepts to develop 7-day revisit models for COVID-19 patients in ERs. Because the data were collected from 13 ERs, distributional differences between sites could affect model development. To address this issue, we employed a classic deep transfer learning method, the Domain Adversarial Neural Network (DANN), and evaluated different modeling strategies: the Multi-DANN algorithm (which considers source differences), the Single-DANN algorithm (which does not), and three baselines (source data only, target data only, and a mixture of source and target data). Results showed that the Multi-DANN models outperformed the Single-DANN and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge (median AUROC = 0.8 vs. 0.5). Notably, the Multi-DANN strategy effectively addressed the heterogeneity among multiple source domains and improved the adaptation of source data to the target domain. Moreover, the high performance of the Multi-DANN models indicates that EHRs are informative for developing a prediction model to identify COVID-19 patients who are likely to revisit an ER within 7 days after discharge.
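The mechanism at the heart of any DANN variant is the gradient-reversal layer: an identity map in the forward pass whose gradient is multiplied by -λ in the backward pass, so the feature extractor is trained to *confuse* the domain classifier. A minimal manual-backprop sketch of that generic mechanism (not the Multi-DANN implementation):

```python
import numpy as np

class GradReverse:
    """Gradient-reversal layer used in DANN-style training.

    forward(x) is the identity; backward(g) multiplies the incoming
    gradient by -lam, so gradients flowing to the feature extractor
    push it to maximize (rather than minimize) the domain loss.
    Minimal sketch, not a full DANN implementation.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                        # identity in the forward pass

    def backward(self, grad_out):
        return -self.lam * grad_out     # sign-flipped, scaled gradient
```

In a full pipeline this layer sits between the shared feature extractor and the domain classifier, while the label classifier receives the features directly.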
Affiliation(s)
- Yuelyu Ji
- Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yuhe Gao
- Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Runxue Bao
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Qi Li
- School of Business, State University of New York at New Paltz, New Paltz, USA
- Disheng Liu
- Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yiming Sun
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Ye Ye
- Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
32
Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. Sensors (Basel) 2023; 23:4993. [PMID: 37299724] [DOI: 10.3390/s23114993]
Abstract
Deep learning aided medical imaging is currently a focal point of frontier AI applications and a likely direction for precision neuroscience. This review renders comprehensive and informative insights into recent progress in deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts with an overview of current brain imaging methods, highlighting their limitations and introducing the potential benefits of deep learning techniques in overcoming them. We then delve into the details of deep learning, explaining the basic concepts and providing examples of its use in medical imaging. A key strength of the review is its thorough discussion of the different types of deep learning models that can be used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial network (GAN) assisted magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other imaging modalities. Overall, this review of deep learning aided medical imaging for brain monitoring and regulation provides a useful reference for the intersection of deep learning aided neuroimaging and brain regulation.
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
- Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
- Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
33
Hammad M, ElAffendi M, Ateya AA, Abd El-Latif AA. Efficient Brain Tumor Detection with Lightweight End-to-End Deep Learning Model. Cancers (Basel) 2023; 15:2837. [PMID: 37345173] [PMCID: PMC10216217] [DOI: 10.3390/cancers15102837]
Abstract
In the field of medical imaging, deep learning has made considerable strides, particularly in the diagnosis of brain tumors. The Internet of Medical Things (IoMT) has made it possible to combine these deep learning models into advanced medical devices for more accurate and efficient diagnosis. Convolutional neural networks (CNNs) are a popular deep learning technique for brain tumor detection because they can be trained on vast medical imaging datasets to recognize cancers in new images. Despite its benefits, which include greater accuracy and efficiency, deep learning has disadvantages, such as high computing costs and the possibility of skewed findings due to inadequate training data. Further study is needed to fully understand the potential and limitations of deep learning in brain tumor detection in the IoMT and to overcome the obstacles associated with real-world implementation. In this study, we propose a new CNN-based deep learning model for brain tumor detection. The suggested model is end-to-end, which reduces the system's complexity in comparison to earlier deep learning models. In addition, our model is lightweight, as it is built from a small number of layers compared to previous models, which makes it suitable for real-time applications. The findings (99.48% accuracy for binary and 96.86% for multi-class classification) demonstrate that the new framework is highly competitive and outperforms other CNNs for detecting brain tumors. Additionally, the study provides a framework for secure data transfer of medical lab results, with security recommendations to ensure safety in the IoMT.
Affiliation(s)
- Mohamed Hammad
- EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Department of Information Technology, Faculty of Computers and Information, Menoufia University, Shibin El Kom 32511, Egypt
- Mohammed ElAffendi
- EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Abdelhamied A. Ateya
- EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Department of Electronics and Communications Engineering, Zagazig University, Zagazig 44519, Egypt
- Ahmed A. Abd El-Latif
- EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El Koom 32511, Egypt
34
Kibriya H, Amin R, Kim J, Nawaz M, Gantassi R. A Novel Approach for Brain Tumor Classification Using an Ensemble of Deep and Hand-Crafted Features. Sensors (Basel) 2023; 23:4693. [PMID: 37430604] [PMCID: PMC10221077] [DOI: 10.3390/s23104693]
Abstract
Brain tumors, caused by the uncontrollable proliferation of cells inside the skull, are among the most severe types of cancer. Hence, a fast and accurate tumor detection method is critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors; however, these approaches often perform poorly, so an efficient technique for precise diagnosis is needed. This paper suggests a novel approach for brain tumor detection via an ensemble of deep and hand-crafted feature vectors (FV). The novel FV combines hand-crafted features based on the GLCM (gray-level co-occurrence matrix) with deep features based on VGG16. The ensemble FV contains more robust features than either vector independently, which improves the suggested method's discriminative capability. The proposed FV is then classified using a support vector machine (SVM) or a k-nearest neighbor (KNN) classifier. The framework achieved its highest accuracy, 99%, on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology; hence, radiologists can use it to detect brain tumors through MRI (magnetic resonance imaging). The results show the robustness of the proposed method, which can be deployed in real environments to detect brain tumors from MRI images accurately. In addition, the performance of our model was validated via cross-tabulated data.
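The GLCM half of the hand-crafted feature vector is simple enough to sketch from scratch: count how often gray level i occurs next to gray level j for a fixed offset, normalize, then derive Haralick statistics. This toy version and the `contrast` statistic are illustrative (scikit-image's `graycomatrix`/`graycoprops` provide an optimized equivalent):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img is a 2-D array of integer gray levels in [0, levels).
    From-scratch sketch of the hand-crafted feature extraction step.
    """
    P = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurrence
    return P / P.sum()                               # normalize to probabilities

def contrast(P):
    """Haralick contrast: large when co-occurring gray levels differ a lot."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

A flat image has zero contrast, while a checkerboard, where every horizontal neighbor differs by one gray level, has contrast 1 for the (1, 0) offset; statistics like this, stacked over several offsets, form the GLCM part of the FV.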
Affiliation(s)
- Hareem Kibriya
- Department of Computer Sciences, University of Engineering and Technology, Taxila 47050, Pakistan
- Rashid Amin
- Department of Computer Sciences, University of Chakwal, Chakwal 48800, Pakistan
- Jinsul Kim
- School of Electronics and Computer Engineering, Chonnam National University, 300 Yongbong-dong, Buk-gu, Gwangju 500757, Republic of Korea
- Marriam Nawaz
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Rahma Gantassi
- Department of Electrical Engineering, Chonnam National University, Gwangju 61186, Republic of Korea
35
Anagun Y. Smart brain tumor diagnosis system utilizing deep convolutional neural networks. Multimed Tools Appl 2023; 82:1-27. [PMID: 37362644] [PMCID: PMC10140727] [DOI: 10.1007/s11042-023-15422-w]
Abstract
The early diagnosis of cancer is crucial for prompt and adequate management of the disease. Imaging tests, in particular magnetic resonance imaging (MRI), are the first preferred method for diagnosis. However, these tests have limitations that can delay detection and diagnosis. Computer-aided intelligent systems can assist physicians in diagnosis. In this study, we established a Convolutional Neural Network (CNN)-based brain tumor diagnosis system using the EfficientNetv2s architecture, improved with Ranger optimization and extensive pre-processing. We also compared the proposed model with state-of-the-art deep learning architectures such as ResNet18, ResNet200d, and InceptionV4 in discriminating brain tumors based on their spatial features. We achieved the best micro-average results with 99.85% test accuracy, 99.89% Area Under the Curve (AUC), 98.16% precision, 98.17% recall, and 98.21% F1-score. Furthermore, the experimental results of the improved model were compared to various CNN-based architectures using key performance metrics and were shown to have a strong impact on tumor categorization. The proposed system was experimentally evaluated with different optimizers and compared with recent CNN architectures on both augmented and original data, demonstrating convincing performance in tumor detection and diagnosis.
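The micro-averaged metrics reported above pool true positives, false positives, and false negatives across all classes before dividing, rather than averaging per-class scores. A minimal sketch of that computation (ours, for clarity about the averaging mode):

```python
import numpy as np

def micro_f1(y_true, y_pred, n_classes):
    """Micro-averaged F1: pool TP/FP/FN over classes, then compute
    precision, recall, and their harmonic mean."""
    tp = fp = fn = 0
    for c in range(n_classes):
        tp += np.sum((y_pred == c) & (y_true == c))   # correct class-c predictions
        fp += np.sum((y_pred == c) & (y_true != c))   # spurious class-c predictions
        fn += np.sum((y_pred != c) & (y_true == c))   # missed class-c samples
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)
```

For single-label multi-class problems, micro precision, micro recall, and micro F1 all coincide with overall accuracy, which is why the reported accuracy and micro metrics sit close together.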
Affiliation(s)
- Yildiray Anagun
- Department of Computer Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
36
Yu J, Ma T, Fu Y, Chen H, Lai M, Zhuo C, Xu Y. Local-to-global spatial learning for whole-slide image representation and classification. Comput Med Imaging Graph 2023; 107:102230. [PMID: 37116341] [DOI: 10.1016/j.compmedimag.2023.102230]
Abstract
Whole-slide images (WSIs) provide an important reference for clinical diagnosis. Classification with only WSI-level labels can be formulated as a multi-instance learning (MIL) task. However, most existing MIL-based WSI classification methods perform only moderately at mining correlations between instances, limited by their instance-level classification strategy. Herein, we propose a novel local-to-global spatial learning method, Global-Local Attentional Multi-Instance Learning (GLAMIL), which mines global position and local morphological information by redefining the MIL-based WSI classification strategy and is better at learning WSI-level representations. GLAMIL focuses on regional relationships rather than single instances. It first learns relationships between patches in a local pool to aggregate region correlations (tissue types of a WSI). These correlations are then further mined to form the WSI-level representation, where position correlations between different regions can be modeled. Furthermore, Transformer layers are employed to model global and local spatial information rather than simply serving as feature extractors, and the corresponding structural improvements are presented. We evaluate GLAMIL on three benchmarks covering various challenging factors and achieve satisfactory results: GLAMIL outperforms state-of-the-art methods and baselines by about 1% and 10%, respectively.
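The attention-based MIL aggregation that GLAMIL builds on and refines can be sketched in a few lines: score each patch embedding with a small learned network, softmax the scores, and take the weighted sum as the slide-level representation. This follows the generic Ilse-style attention pooling; `V` and `w` are random stand-in parameters, not the paper's trained weights:

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling.

    H: (n, d) patch embeddings for one slide; V: (k, d) and w: (k,)
    parameterize the attention scorer. Returns the (d,) bag embedding
    and the (n,) attention weights. Sketch of the aggregation idea only.
    """
    scores = w @ np.tanh(V @ H.T)          # (n,) one relevance score per patch
    a = np.exp(scores - scores.max())      # stable softmax over patches
    a = a / a.sum()
    return a @ H, a                        # weighted sum = slide-level embedding
```

The attention weights are a convex combination over patches, so the bag embedding stays in the span of the instance embeddings while the most relevant regions dominate.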
Affiliation(s)
- Jiahui Yu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China; Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China
- Tianyu Ma
- Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China
- Yu Fu
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Hang Chen
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou 310053, China
- Cheng Zhuo
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Yingke Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China; Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China; Department of Endocrinology, Children's Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children's Health, Hangzhou, Zhejiang 310051, China
37
Hussain S, Haider S, Maqsood S, Damaševičius R, Maskeliūnas R, Khan M. ETISTP: An Enhanced Model for Brain Tumor Identification and Survival Time Prediction. Diagnostics (Basel) 2023; 13:1456. [PMID: 37189556] [DOI: 10.3390/diagnostics13081456]
Abstract
Technology-assisted diagnosis is increasingly important in healthcare systems. Brain tumors are a leading cause of death worldwide, and treatment plans rely heavily on accurate survival predictions. Gliomas, a type of brain tumor, have particularly high mortality rates and can be further classified as low- or high-grade, making survival prediction challenging. Existing literature provides several survival prediction models that use different parameters, such as patient age, gross total resection status, tumor size, or tumor grade. However, accuracy is often lacking in these models. The use of tumor volume instead of size may improve the accuracy of survival prediction. In response to this need, we propose a novel model, the enhanced brain tumor identification and survival time prediction (ETISTP), which computes tumor volume, classifies it into low- or high-grade glioma, and predicts survival time with greater accuracy. The ETISTP model integrates four parameters: patient age, survival days, gross total resection (GTR) status, and tumor volume. Notably, ETISTP is the first model to employ tumor volume for prediction. Furthermore, our model minimizes the computation time by allowing for parallel execution of tumor volume computation and classification. The simulation results demonstrate that ETISTP outperforms prominent survival prediction models.
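The tumor-volume input that distinguishes ETISTP reduces, at its simplest, to voxel counting on a binary segmentation mask scaled by the per-voxel volume. A generic sketch of that computation (the voxel-spacing values in the usage note are illustrative, not from the paper):

```python
import numpy as np

def tumor_volume_ml(mask, voxel_mm=(1.0, 1.0, 1.0)):
    """Tumor volume from a binary 3-D segmentation mask.

    Volume = number of nonzero voxels x per-voxel volume (mm^3),
    converted to millilitres. Generic sketch of the volume step.
    """
    voxels = int(np.count_nonzero(mask))
    mm3 = voxels * float(np.prod(voxel_mm))  # per-voxel volume from spacing
    return mm3 / 1000.0                      # 1 mL = 1000 mm^3
```

For example, a fully segmented 10x10x10 block at 1 mm isotropic spacing is 1000 mm³, i.e. 1 mL; anisotropic spacing simply scales the per-voxel volume.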
Affiliation(s)
- Shah Hussain
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Sarmad Maqsood
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Muzammil Khan
- Department of Computer & Software Technology, University of Swat, Swat 19200, Pakistan
38
Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. New Gener Comput 2023; 41:343-400. [PMID: 37229176] [PMCID: PMC10071474] [DOI: 10.1007/s00354-023-00213-6]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed places of worship and shops, banned gatherings, and imposed curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-ray, Computed Tomography (CT), and Ultrasound (US) images. This could help in identifying COVID-19 cases as a first step toward treating them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. This paper described the three most common imaging modalities (X-ray, CT, and US) as well as the DL approaches used in this detection, and compared these approaches. It also outlined future directions for this field in the fight against COVID-19.
Affiliation(s)
- Rana Khattab, Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Islam R. Abdelmaksoud, Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Samir Abdelrazek, Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
|
39
|
Luo C, Yang J, Liu Z, Jing D. Predicting the recurrence and overall survival of patients with glioma based on histopathological images using deep learning. Front Neurol 2023; 14:1100933. [PMID: 37064206 PMCID: PMC10102594 DOI: 10.3389/fneur.2023.1100933] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 03/13/2023] [Indexed: 04/03/2023] Open
Abstract
Background: A deep learning (DL) model based on representative biopsy tissues can predict the recurrence and overall survival of patients with glioma, enabling optimized personalized medicine. This research aimed to develop a DL model based on hematoxylin-eosin (HE) stained pathological images and verify its diagnostic accuracy. Methods: Our study retrospectively collected 162 patients with glioma and randomly divided them into a training set (n = 113) and a validation set (n = 49) to build a DL model. Each HE-stained slide was segmented into non-overlapping patches of 180 × 180 pixels. Patch-level features were extracted by a pre-trained ResNet50 to predict recurrence and overall survival. Additionally, a lightweight strategy was introduced in which low-resolution digital biopsy images with clinical information were input into the DL model to ensure minimal memory occupation. Results: Our study extracted 512 histopathological features from the HE-stained slides of each glioma patient. Using the univariate Cox proportional-hazards model, we identified 36 and 18 features as significantly related to disease-free survival (DFS) and overall survival (OS), respectively (P < 0.05). The pathomics signature showed a C-index of 0.630 for DFS prediction and 0.652 for OS prediction. Time-dependent receiver operating characteristic (ROC) curves, along with nomograms, were used to assess diagnostic accuracy at fixed time points. In the validation set (n = 49), the area under the curve (AUC) for 1- and 2-year DFS was 0.955 and 0.904, respectively, and for 2-, 3-, and 5-year OS was 0.969, 0.955, and 0.960, respectively. We stratified patients into low- and high-risk groups using the median hazard score (0.083 for DFS and -0.177 for OS) and observed significant differences between these groups (P < 0.001). Conclusion: Our results demonstrated that the DL model based on HE-stained slides can predict recurrence and survival in patients with glioma. These results can be used to assist oncologists in selecting the optimal treatment strategy in clinical practice.
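The patch-extraction step described above can be sketched as a simple tiling routine (our own illustration, assuming partial edge tiles are discarded; the real pipeline feeds each patch to a pre-trained ResNet50):

```python
def patch_grid(width, height, patch=180):
    """Top-left (x, y) corners of non-overlapping patch x patch tiles
    that fit fully inside a width x height slide image."""
    return [
        (x, y)
        for y in range(0, height - patch + 1, patch)
        for x in range(0, width - patch + 1, patch)
    ]
```

Each corner then indexes a crop of the slide; a 540 × 360 image yields a 3 × 2 grid of six tiles.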
Affiliation(s)
- Chenhua Luo, Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China; Xiangya School of Medicine, Central South University, Changsha, China
- Jiyan Yang, Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Zhengzheng Liu, Xiangya School of Medicine, Central South University, Changsha, China
- Di Jing, Xiangya School of Medicine, Central South University, Changsha, China (corresponding author)
|
40
|
SSO-RBNN driven brain tumor classification with Saliency-K-means segmentation technique. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
41
|
Liu X, Hou S, Liu S, Ding W, Zhang Y. Attention-based Multimodal Glioma Segmentation with Multi-attention Layers for Small-intensity Dissimilarity. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2023. [DOI: 10.1016/j.jksuci.2023.03.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
|
42
|
Shahin AI, Aly S, Aly W. A novel multi-class brain tumor classification method based on unsupervised PCANet features. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08281-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/03/2023]
|
43
|
DTBV: A Deep Transfer-Based Bone Cancer Diagnosis System Using VGG16 Feature Extraction. Diagnostics (Basel) 2023; 13:diagnostics13040757. [PMID: 36832245 PMCID: PMC9955441 DOI: 10.3390/diagnostics13040757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 01/23/2023] [Accepted: 01/25/2023] [Indexed: 02/19/2023] Open
Abstract
Among the many different types of cancer, bone cancer is among the most lethal and least prevalent, with more cases reported each year. Early diagnosis of bone cancer is crucial since it helps limit the spread of malignant cells and reduce mortality. Manual detection of bone cancer is cumbersome and requires specialized knowledge. A deep transfer-based bone cancer diagnosis (DTBV) system using VGG16 feature extraction is proposed to address these issues. The proposed DTBV system uses a transfer learning (TL) approach in which a pre-trained convolutional neural network (CNN) model extracts features from the pre-processed input image and a support vector machine (SVM) model is trained on these features to distinguish between cancerous and healthy bone. CNNs suit image datasets well because recognition accuracy improves as the feature-extraction layers deepen. In the proposed DTBV system, the VGG16 model extracts features from the input X-ray image. A mutual information statistic that measures the dependency between the different features is then used to select the best features; this is the first time this method has been applied to bone cancer detection. The selected features are then fed into the SVM classifier, which classifies the given testing dataset into malignant and benign categories. A comprehensive performance evaluation demonstrated that the proposed DTBV system is highly efficient in detecting bone cancer, with an accuracy of 93.9%, higher than that of other existing systems.
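The mutual-information feature-ranking step can be illustrated with a toy, stdlib-only version for discrete feature values (the DTBV pipeline itself works on VGG16 features and would use a library implementation such as scikit-learn's `mutual_info_classif`; the helper names here are ours):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X; Y) in bits for two paired sequences of discrete values."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def top_k_features(feature_columns, labels, k):
    """Indices of the k features with the highest MI against the labels."""
    scores = [mutual_information(col, labels) for col in feature_columns]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

A feature that perfectly tracks the labels scores 1 bit; a constant feature scores 0, so it is ranked out before the SVM stage.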
|
44
|
Rohmetra H, Raghunath N, Narang P, Chamola V, Guizani M, Lakkaniga NR. AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges. COMPUTING 2023; 105. [PMCID: PMC8006120 DOI: 10.1007/s00607-021-00937-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at a high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low and medium risk patients as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring the patients’ symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital signs monitoring for COVID-19 infected as well as quarantined individuals by using DL and image/signal-processing techniques, many of which can be deployed using simple cameras and sensors available on a smartphone or a personal computer, without the need of specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.
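As an illustration of the signal-processing flavor of these camera-based methods (our own toy sketch, not taken from the paper), a pulse rate can be estimated from any roughly periodic intensity trace, such as an averaged green-channel signal from a face video, by counting rising zero crossings:

```python
from math import pi, sin

def heart_rate_bpm(signal, fps):
    """Estimate pulse (beats/min) from a roughly periodic trace sampled
    at fps frames/s by counting rising zero crossings of the
    mean-centred signal."""
    mean = sum(signal) / len(signal)
    centred = [s - mean for s in signal]
    rising = sum(1 for a, b in zip(centred, centred[1:]) if a < 0 <= b)
    duration_s = len(signal) / fps
    return 60.0 * rising / duration_s

# Synthetic 1.2 Hz "pulse" sampled at 30 fps for 30 s (~72 bpm ground truth)
fps = 30
trace = [sin(2 * pi * 1.2 * t / fps) for t in range(fps * 30)]
```

Real remote-photoplethysmography pipelines add detrending, band-pass filtering around the plausible pulse band, and spectral peak estimation; this shows only the bare counting idea.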
Affiliation(s)
- Honnesh Rohmetra, Department of CSIS, Birla Institute of Technology and Science, Pilani, Pilani, Rajasthan, India
- Navaneeth Raghunath, Department of CSIS, Birla Institute of Technology and Science, Pilani, Pilani, Rajasthan, India
- Pratik Narang, Department of CSIS, Birla Institute of Technology and Science, Pilani, Pilani, Rajasthan, India
- Vinay Chamola, Department of EEE & APPCAIR, Birla Institute of Technology and Science, Pilani, Pilani, Rajasthan, India
- Naga Rajiv Lakkaniga, Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, USA; SmartBio Labs, Chennai, India
|
45
|
Chang Y, Zheng Z, Sun Y, Zhao M, Lu Y, Zhang Y. DPAFNet: A Residual Dual-Path Attention-Fusion Convolutional Neural Network for Multimodal Brain Tumor Segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
46
|
Abstract
The COVID-19 pandemic continues to have a destructive effect on the health and well-being of the global population. A vital step in the battle against it is the successful screening of infected patients, one effective screening method being radiological examination using chest radiography. Recognizing epidemic growth patterns across temporal and social factors can improve our ability to design epidemic transmission models, including the critical task of predicting the expected intensity of the outbreak's morbidity or mortality impact. The study's primary motivation is to estimate, with a certain level of accuracy, the number of deaths due to COVID-19 by modeling the progression of the pandemic. Predicting the number of possible deaths from COVID-19 can provide governments and decision-makers with indicators for purchasing respirators and setting pandemic-prevention policies; thus, this work is an essential contribution to combating the pandemic. The Kalman filter is a widely used method for tracking, navigation, filtering, and time-series analysis. Designing and tuning machine learning methods is a labor- and time-intensive task that requires extensive experience; the field of automated machine learning (AutoML) aims to automate it. AutoML tools enable novice users to create useful machine learning models, while experts can use them to free up valuable time for other tasks. This paper presents an objective method of forecasting the COVID-19 outbreak using the Kalman filter and AutoML. We use a COVID-19 dataset from Ceará, one of the 27 federative units in Brazil, which has more than 235,222 confirmed cases of COVID-19 and 8,850 deaths due to the disease. The TPOT AutoML model showed the best result, with an R² score of 0.99.
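A scalar Kalman filter with a random-walk state model is enough to show the filtering idea used here (a sketch under our own assumed noise parameters q and r; the paper's full pipeline combines filtering with AutoML model search):

```python
def kalman_1d(observations, q=1.0, r=4.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.

    q: process-noise variance, r: observation-noise variance.
    Returns filtered state estimates (e.g. a smoothed daily-deaths series).
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q              # predict: state carries over, uncertainty grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update toward the new observation
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With a constant input series the estimate converges geometrically to the observed level, which is the behavior that makes the filter useful for denoising noisy epidemic counts before forecasting.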
|
47
|
Patro KK, Allam JP, Hammad M, Tadeusiewicz R, Pławiak P. SCovNet: A skip connection-based feature union deep learning technique with statistical approach analysis for the detection of COVID-19. Biocybern Biomed Eng 2023; 43:352-368. [PMID: 36819118 PMCID: PMC9928742 DOI: 10.1016/j.bbe.2023.01.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/21/2022] [Accepted: 01/30/2023] [Indexed: 02/17/2023]
Abstract
BACKGROUND AND OBJECTIVE: The global population has been heavily impacted by the COVID-19 coronavirus pandemic. Infections are spreading quickly around the world, and new variants (Delta, Delta Plus, and Omicron) continue to emerge. The real-time reverse transcription-polymerase chain reaction (RT-PCR) is the method most often used to find viral RNA in a nasopharyngeal swab. However, these diagnostic approaches require human involvement and consume more time per prediction. Moreover, the existing conventional tests mainly suffer from false negatives, giving the virus a chance to spread quickly. Therefore, a rapid and early diagnosis of COVID-19 patients is needed to overcome these problems. METHODS: Existing deep-learning approaches for COVID detection suffer from unbalanced datasets, poor performance, and gradient-vanishing problems. A customized skip-connection-based network with a feature union approach has been developed in this work to overcome some of these issues. Gradient information from chest X-ray (CXR) images is bypassed to subsequent layers through skip connections. The name "SCovNet" is short for skip-connection-based feature union network for COVID-19 detection. The performance of the proposed model was tested with two publicly available CXR image databases, including balanced and unbalanced datasets. RESULTS: A modified skip-connection-based CNN model was suggested for a small unbalanced dataset (Kaggle) and achieved remarkable performance. In addition, the proposed model was also tested with a large GitHub database of CXR images and obtained an overall best accuracy of 98.67% with an impressively low false-negative rate of 0.0074. CONCLUSIONS: The experiments show that the proposed method outperforms current methods at finding early signs of COVID-19. Also noteworthy is the hierarchical classification strategy developed for this work, which considered both balanced and unbalanced datasets to achieve the best COVID-19 identification rate.
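The skip-connection idea, an identity path that lets activations (and gradients) bypass the transformed path, can be sketched without any DL framework (this is a generic residual block, not SCovNet's exact architecture):

```python
def relu(v):
    """Element-wise rectified linear unit."""
    return [max(0.0, x) for x in v]

def linear(weights, bias, v):
    """Dense layer: W v + b over plain lists."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def skip_block(weights, bias, v):
    """y = ReLU(W v + b) + v: the identity path carries the input
    (and its gradient) around the transformed path."""
    return [t + s for t, s in zip(relu(linear(weights, bias, v)), v)]
```

If the transformed path contributes nothing (zero weights), the block degrades gracefully to the identity, which is what mitigates vanishing gradients in deep stacks.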
Affiliation(s)
- Kiran Kumar Patro, Department of ECE, Aditya Institute of Technology and Management, Tekkali AP-532201, India
- Jaya Prakash Allam, Department of EC, National Institute of Technology Rourkela, Rourkela, Odisha 769008, India
- Mohamed Hammad, Information Technology Dept., Faculty of Computers and Information, Menoufia University, Menoufia, Egypt
- Ryszard Tadeusiewicz, Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, Krakow, Poland
- Paweł Pławiak, Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Warszawska 24, 31-155 Krakow, Poland; Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland
|
48
|
Munnangi AK, UdhayaKumar S, Ravi V, Sekaran R, Kannan S. Survival study on deep learning techniques for IoT enabled smart healthcare system. HEALTH AND TECHNOLOGY 2023; 13:215-228. [PMID: 36818549 PMCID: PMC9918340 DOI: 10.1007/s12553-023-00736-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 02/07/2023] [Indexed: 02/13/2023]
Abstract
Purpose: To review the use of deep learning (DL) techniques in the healthcare sector, highlighting the strengths and shortcomings of existing methods along with several open research challenges. Our study lays a foundation for healthcare professionals and governments on present-day trends in DL-based data analytics for smart healthcare. Methods: A deep learning-based technique is designed to extract sensor-displacement effects and predict abnormalities for activity recognition via Artificial Intelligence (AI). The presented technique minimizes the vanishing-gradient issue of Recurrent Neural Networks (RNNs), thereby reducing the time needed to detect abnormalities while accounting for temporal and spatial factors. A Moran Autocorrelation and Regression-based Elman Recurrent Neural Network (MAR-ERNN) is introduced. Results: Experimental results show the feasibility of the proposed method: it achieves 95% accuracy and reduces execution time by 18%. Conclusion: MAR-ERNN performs well in health-status activity recognition. Collectively, the IoT-enabled smart healthcare system benefits from enhanced accuracy and reduced time and overhead.
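The Moran-autocorrelation component of MAR-ERNN can be illustrated with a stdlib-only Moran's I statistic (the Elman RNN part is omitted; the weight matrix and data below are toy values):

```python
def morans_i(values, weights):
    """Moran's I autocorrelation statistic.

    values: list of n observations; weights: n x n adjacency weights
    (weights[i][j] > 0 when observations i and j are neighbours).
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(
        weights[i][j] * dev[i] * dev[j]
        for i in range(n) for j in range(n)
    )
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

A smooth trend over chain-adjacent observations yields positive I (neighbours deviate in the same direction), while an alternating series yields negative I.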
Affiliation(s)
- Ashok Kumar Munnangi, Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College (Autonomous), Vijayawada, Andhra Pradesh, India
- Satheeshwaran UdhayaKumar, Department of Electronics and Communication Engineering, Pragati Engineering College, Surampalem, Andhra Pradesh, India
- Vinayakumar Ravi, Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
- Ramesh Sekaran, Department of Computer Science and Engineering, Jain University (Deemed to be University), Bangalore, Karnataka, India
- Suthendran Kannan, Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
|
49
|
Poyatos J, Molina D, Martinez AD, Del Ser J, Herrera F. EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks. Neural Netw 2023; 158:59-82. [PMID: 36442374 DOI: 10.1016/j.neunet.2022.10.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 09/27/2022] [Accepted: 10/11/2022] [Indexed: 11/06/2022]
Abstract
In recent years, Deep Learning models have shown great performance in complex optimization problems. They generally require large training datasets, which is a limitation in most practical cases. Transfer learning allows importing the first layers of a pre-trained architecture and connecting them to fully-connected layers to adapt them to a new problem. Consequently, the configuration of these layers becomes crucial for the performance of the model. Unfortunately, the optimization of these models is usually a computationally demanding task. One strategy to optimize Deep Learning models is the pruning scheme. Pruning methods focus on reducing the complexity of the network, accepting an expected performance penalty once the model is pruned. However, pruning could also be used to improve performance, using an optimization algorithm to identify and eventually remove unnecessary connections among neurons. This work proposes EvoPruneDeepTL, an evolutionary pruning model for Transfer Learning based Deep Neural Networks which replaces the last fully-connected layers with sparse layers optimized by a genetic algorithm. Depending on its solution encoding strategy, our proposed model can either perform optimized pruning or feature selection over the densely connected part of the neural network. We carry out different experiments with several datasets to assess the benefits of our proposal. Results show the contribution of EvoPruneDeepTL and feature selection to the overall computational efficiency of the network as a result of the optimization process. In particular, accuracy is improved while the number of active neurons in the final layers is reduced.
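The genetic-algorithm pruning loop can be sketched in a few lines. The fitness function here is a toy surrogate over per-connection importance scores (EvoPruneDeepTL actually evaluates the pruned network's accuracy), and all names and parameters are illustrative:

```python
import random

def fitness(mask, importance):
    """Toy surrogate: reward kept important connections, penalise size."""
    return sum(m * s for m, s in zip(mask, importance)) - 0.1 * sum(mask)

def evolve_mask(importance, pop_size=20, generations=40, seed=0):
    """Evolve a binary keep/prune mask over connections with a simple GA:
    tournament selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    n = len(importance)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def pick():
        # Tournament of 3: the fittest of a random sample becomes a parent.
        return max(rng.sample(pop, 3), key=lambda m: fitness(m, importance))

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # occasional bit-flip mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda m: fitness(m, importance))
```

Under this surrogate the optimum keeps exactly the high-importance connections and prunes the rest, mirroring the paper's goal of sparse layers with fewer active neurons.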
Affiliation(s)
- Javier Poyatos, Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, 18071, Spain
- Daniel Molina, Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, 18071, Spain
- Aritz D. Martinez, TECNALIA, Basque Research & Technology Alliance (BRTA), Derio, 48160, Spain
- Javier Del Ser, TECNALIA, Basque Research & Technology Alliance (BRTA), Derio, 48160, Spain; University of the Basque Country (UPV/EHU), Bilbao, 48013, Spain
- Francisco Herrera, Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, 18071, Spain; Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
|
50
|
Samee NA, Ahmad T, Mahmoud NF, Atteia G, Abdallah HA, Rizwan A. Clinical Decision Support Framework for Segmentation and Classification of Brain Tumor MRIs Using a U-Net and DCNN Cascaded Learning Algorithm. Healthcare (Basel) 2022; 10:healthcare10122340. [PMID: 36553864 PMCID: PMC9777942 DOI: 10.3390/healthcare10122340] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/11/2022] [Accepted: 11/15/2022] [Indexed: 11/23/2022] Open
Abstract
Brain tumors (BTs) are an uncommon but fatal kind of cancer. The development of computer-aided diagnosis (CAD) systems for classifying brain tumors in magnetic resonance imaging (MRI) has therefore been the subject of many research papers, although research in this area is still at an early stage. The goal of this research is to develop a lightweight, effective implementation of the U-Net deep network for exact real-time segmentation. Moreover, a simplified deep convolutional neural network (DCNN) architecture is presented for automatic feature extraction and classification of the segmented regions of interest (ROIs). The DCNN's simplified architecture comprises five convolutional layers with rectified linear units, normalization, and max-pooling layers. The introduced method was verified on the multimodal brain tumor segmentation (BRATS 2015) dataset. Our experiments on BRATS 2015 achieved a Dice similarity coefficient (DSC), sensitivity, and classification accuracy of 88.8%, 89.4%, and 88.6%, respectively, for high-grade gliomas. In segmenting BRATS 2015 BT images, the performance of our proposed CAD framework is on par with existing state-of-the-art methods, while the classification accuracy improves upon that reported in prior studies, from 88% to 88.6%.
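The Dice similarity coefficient used to report segmentation quality above is simple to state; here is a minimal sketch over flattened binary masks (our own helper, not from the paper):

```python
def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for flattened binary masks,
    defined as 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

Two masks that overlap on one of their two foreground pixels each score 0.5; identical masks score 1.0.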
Affiliation(s)
- Nagwan Abdel Samee, Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tahir Ahmad, Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
- Noha F. Mahmoud, Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia (corresponding author)
- Ghada Atteia, Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia (corresponding author)
- Hanaa A. Abdallah, Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Atif Rizwan, Department of Computer Engineering, Jeju National University, Jejusi 63243, Republic of Korea (corresponding author)
|