1
Ilesanmi AE, Ilesanmi T, Ajayi B, Gbotoso GA, Belhaouari SB. Unlocking the Power of 3D Convolutional Neural Networks for COVID-19 Detection: A Comprehensive Review. Journal of Imaging Informatics in Medicine 2025. PMID: 39849202. DOI: 10.1007/s10278-025-01393-x.
Abstract
The advent of three-dimensional convolutional neural networks (3D CNNs) has revolutionized the detection and analysis of COVID-19 cases. As imaging technologies have advanced, 3D CNNs have emerged as a powerful tool for segmenting and classifying COVID-19 in medical images. These networks have demonstrated both high accuracy and rapid detection capabilities, making them crucial for effective COVID-19 diagnostics. This study offers a thorough review of various 3D CNN algorithms, evaluating their efficacy in segmenting and classifying COVID-19 across a range of medical imaging modalities, and systematically examines recent advancements in 3D CNN methodologies. The process involved a comprehensive screening of abstracts and titles to ensure relevance, followed by a careful selection and analysis of research papers from academic repositories; a total of 60 papers published across various repositories, including Springer and Elsevier, were reviewed. The study evaluates these papers against specific criteria and provides detailed insights into the network architectures and algorithms used for COVID-19 detection. The review reveals significant trends in the use of 3D CNNs for COVID-19 segmentation and classification, highlighting in particular the diverse range of network architectures employed for COVID-19 detection, in contrast to other diseases, for which encoder/decoder frameworks predominate. It provides an in-depth analysis of these methods, discussing their strengths, limitations, and potential areas for future research. The insights from this study have implications for clinical diagnosis and treatment strategies. Despite some limitations, the accuracy and efficiency of 3D CNN algorithms underscore their potential for advancing medical image segmentation and classification. The findings suggest that 3D CNNs could significantly enhance the detection and management of COVID-19, contributing to improved healthcare outcomes.
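As a deliberately generic illustration of the volumetric networks this review surveys, the sketch below builds a minimal 3D CNN classifier in PyTorch; the layer sizes, input shape, and two-class output are hypothetical and do not correspond to any specific architecture reviewed in the paper.

```python
# Minimal 3D CNN sketch for volumetric COVID-19 classification (hypothetical
# layer sizes; not a specific architecture from the reviewed papers).
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # single-channel CT volume
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),                     # global pooling over D, H, W
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                                # x: (B, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

volume = torch.randn(1, 1, 32, 64, 64)                   # dummy CT volume
logits = Simple3DCNN()(volume)                           # (1, 2) class logits
```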
Affiliation(s)
- Gbenga A Gbotoso
- Lagos State University of Science and Technology, Ikorodu, Nigeria
2
Muhammad D, Bendechache M. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J 2024; 24:542-560. PMID: 39252818. PMCID: PMC11382209. DOI: 10.1016/j.csbj.2024.08.005.
Abstract
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring evaluation metrics used to assess XAI approaches. With the growing efficiency of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, where decisions are made without clear explanations, hinders acceptance in clinical settings where decisions have significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges faced by these methods and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, fostering a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
Affiliation(s)
- Dost Muhammad
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
- Malika Bendechache
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
3
Diao Z, Jiang H. A multi-instance tumor subtype classification method for small PET datasets using RA-DL attention module guided deep feature extraction with radiomics features. Comput Biol Med 2024; 174:108461. PMID: 38626509. DOI: 10.1016/j.compbiomed.2024.108461.
Abstract
BACKGROUND Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes like diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, the subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced. Therefore, the primary challenge lies in achieving precise subtype classification using a small dataset. METHOD This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, a Support Vector Machine (SVM) is employed as the classifier for tumor subtypes instead of deep learning methods. Emphasizing the importance of texture information in tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage. These features are compressed using an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our proposed approach utilizes the RA-DL attention mechanism to guide the deep network in extracting complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy. To address the challenges of limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D region of interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into a feature vector using a multi-instance aggregation layer. RESULTS Validation experiments were conducted on three PET datasets: a liver cancer dataset, a lung cancer dataset, and a lymphoma dataset. For the three-class lung cancer task, our proposed method achieved area under the curve (AUC) values of 0.82, 0.84, and 0.83. For the binary lymphoma classification task, our method achieved AUC values of 0.95 and 0.75, and for the binary liver tumor classification task, AUC values of 0.84 and 0.86. CONCLUSION The experimental results clearly indicate that our proposed method significantly outperforms alternative approaches. Through the extraction of complementary radiomics features and deep features, our method achieves a substantial improvement in tumor subtype classification performance on small PET datasets.
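A rough sketch of the overall pipeline described above, handcrafted radiomics features compressed and fused with deep features, then classified with an SVM, is shown below; the random feature arrays are placeholders (real radiomics would come from a toolkit such as pyradiomics), and PCA stands in for the paper's autoencoder-based compression.

```python
# Radiomics-plus-deep-feature fusion with an SVM classifier (placeholder data;
# PCA used here as a simple stand-in for the autoencoder compression step).
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 60
radiomics = rng.normal(size=(n_patients, 107))   # hypothetical handcrafted texture features
deep_feats = rng.normal(size=(n_patients, 128))  # hypothetical per-patient CNN embeddings
labels = rng.integers(0, 2, size=n_patients)     # tumor subtype labels

compressed = PCA(n_components=16).fit_transform(radiomics)   # reduce redundancy
features = np.concatenate([compressed, deep_feats], axis=1)  # complementary feature fusion

clf = SVC(kernel="rbf")
print(cross_val_score(clf, features, labels, cv=5, scoring="roc_auc").mean())
```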
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China.
4
Rufino J, Ramírez JM, Aguilar J, Baquero C, Champati J, Frey D, Lillo RE, Fernández-Anta A. Performance and explainability of feature selection-boosted tree-based classifiers for COVID-19 detection. Heliyon 2024; 10:e23219. PMID: 38170121. PMCID: PMC10758803. DOI: 10.1016/j.heliyon.2023.e23219.
Abstract
In this paper, we evaluate the performance and analyze the explainability of machine learning models boosted by feature selection in predicting COVID-19-positive cases from self-reported information. In essence, this work describes a methodology to identify COVID-19 infections that considers the large amount of information collected by the University of Maryland Global COVID-19 Trends and Impact Survey (UMD-CTIS). More precisely, this methodology performs a feature selection stage based on the recursive feature elimination (RFE) method to reduce the number of input variables without compromising detection accuracy. A tree-based supervised machine learning model is then optimized with the selected features to detect COVID-19-active cases. In contrast to previous approaches that use a limited set of selected symptoms, the proposed approach builds the detection engine considering a broad range of features including self-reported symptoms, local community information, vaccination acceptance, and isolation measures, among others. To implement the methodology, three different supervised classifiers were used: random forests (RF), light gradient boosting (LGB), and extreme gradient boosting (XGB). Based on data collected from the UMD-CTIS, we evaluated the detection performance of the methodology for four countries (Brazil, Canada, Japan, and South Africa) and two periods (2020 and 2021). The proposed approach was assessed in terms of various quality metrics: F1-score, sensitivity, specificity, precision, receiver operating characteristic (ROC), and area under the ROC curve (AUC). This work also shows the normalized daily incidence curves obtained by the proposed approach for the four countries. Finally, we perform an explainability analysis using Shapley values and feature importance to determine the relevance of each feature and the corresponding contribution for each country and each country/year.
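The core methodology, recursive feature elimination followed by a tree-based classifier evaluated with AUC and F1, can be sketched as follows on synthetic data; the real study uses UMD-CTIS survey responses and also evaluates LightGBM and XGBoost.

```python
# RFE feature selection followed by a tree-based classifier (synthetic data,
# random forest only; the paper additionally uses LGB and XGB).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

X, y = make_classification(n_samples=5000, n_features=60, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursively drop the least important features until 20 remain.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=20, step=5).fit(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=400, random_state=0).fit(
    selector.transform(X_tr), y_tr)

proba = clf.predict_proba(selector.transform(X_te))[:, 1]
print("AUC:", roc_auc_score(y_te, proba),
      "F1:", f1_score(y_te, (proba > 0.5).astype(int)))
```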
Affiliation(s)
- Jose Aguilar
- IMDEA Networks Institute, 28918, Madrid, Spain
- CEMISID, Universidad de Los Andes, Mérida, 5101, Venezuela
- CIDITIC, Universidad EAFIT, Medellín, Colombia
5
Li M, Zhou H, Li X, Yan P, Jiang Y, Luo H, Zhou X, Yin S. SDA-Net: Self-distillation driven deformable attentive aggregation network for thyroid nodule identification in ultrasound images. Artif Intell Med 2023; 146:102699. PMID: 38042598. DOI: 10.1016/j.artmed.2023.102699.
Abstract
Early detection and accurate identification of thyroid nodules are major challenges in controlling and treating thyroid cancer, and they can be difficult even for expert physicians. Currently, many computer-aided diagnosis (CAD) systems have been developed to assist this clinical process. However, most of these systems are unable to capture geometrically diverse thyroid nodule representations from ultrasound images with subtle and varied characteristic differences, resulting in suboptimal diagnoses and a lack of clinical interpretability, which may affect their credibility in the clinic. In this context, a novel end-to-end network equipped with a deformable attention network and a distillation-driven interaction aggregation module (DIAM) is developed for thyroid nodule identification. The deformable attention network learns to identify discriminative features of nodules under the guidance of the deformable attention module (DAM) and an online class activation mapping (CAM) mechanism, and it suggests the location of diagnostic features to provide interpretable predictions. DIAM is designed to take advantage of the complementarity of adjacent layers, thus enhancing the representation capabilities of aggregated features; driven by an efficient self-distillation mechanism, the identification process is complemented with multi-scale semantic information to calibrate the diagnosis results. Experimental results on a large dataset with varying nodule appearances show that the proposed network achieves competitive performance in nodule diagnosis and provides interpretability suitable for clinical needs.
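The self-distillation idea driving DIAM can be illustrated with a minimal loss in which the deepest head supplies softened targets for a shallower auxiliary head; this is a generic sketch under that assumption, not the exact SDA-Net formulation.

```python
# Minimal self-distillation loss: the deepest head "teaches" a shallower
# auxiliary head via softened logits (illustrative only, not SDA-Net itself).
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """Cross-entropy on ground truth plus KL to the deeper head's soft targets."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kl

shallow = torch.randn(8, 2, requires_grad=True)   # logits from a shallow auxiliary head
deep = torch.randn(8, 2)                          # logits from the deepest (teacher) head
labels = torch.randint(0, 2, (8,))
self_distillation_loss(shallow, deep, labels).backward()
```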
Affiliation(s)
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Hang Zhou
- Department of In-Patient Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, 150001, Heilongjiang, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Xianli Zhou
- Department of In-Patient Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, 150001, Heilongjiang, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway
6
Ippolito D, Maino C, Gandola D, Franco PN, Miron R, Barbu V, Bologna M, Corso R, Breaban ME. Artificial Intelligence Applied to Chest X-ray: A Reliable Tool to Assess the Differential Diagnosis of Lung Pneumonia in the Emergency Department. Diseases 2023; 11:171. PMID: 37987282. PMCID: PMC10660530. DOI: 10.3390/diseases11040171.
Abstract
BACKGROUND Considering the large number of patients with pulmonary symptoms admitted to the emergency department daily, it is essential to diagnose them correctly. It is necessary to quickly solve the differential diagnosis between COVID-19 and typical bacterial pneumonia to address them with the best management possible. In this setting, an artificial intelligence (AI) system can help radiologists detect pneumonia more quickly. METHODS We aimed to test the diagnostic performance of an AI system in detecting COVID-19 pneumonia and typical bacterial pneumonia in patients who underwent a chest X-ray (CXR) and were admitted to the emergency department. The final dataset was composed of three sub-datasets: the first included all patients positive for COVID-19 pneumonia (n = 1140, namely "COVID-19+"), the second one included all patients with typical bacterial pneumonia (n = 500, "pneumonia+"), and the third one was composed of healthy subjects (n = 1000). Two radiologists were blinded to demographic, clinical, and laboratory data. The developed AI system was used to evaluate all CXRs randomly and was asked to classify them into three classes. Cohen's κ was used for interrater reliability analysis. The AI system's diagnostic accuracy was evaluated using a confusion matrix, and 95%CIs were reported as appropriate. RESULTS The interrater reliability analysis between the most experienced radiologist and the AI system reported an almost perfect agreement for COVID-19+ (κ = 0.822) and pneumonia+ (κ = 0.913). We found 96% sensitivity (95% CIs = 94.9-96.9) and 79.8% specificity (76.4-82.9) for the radiologist and 94.7% sensitivity (93.4-95.8) and 80.2% specificity (76.9-83.2) for the AI system in the detection of COVID-19+. Moreover, we found 97.9% sensitivity (98-99.3) and 88% specificity (83.5-91.7) for the radiologist and 97.5% sensitivity (96.5-98.3) and 83.9% specificity (79-87.9) for the AI system in the detection of pneumonia+ patients. Finally, the AI system reached an accuracy of 93.8%, with a misclassification rate of 6.2% and weighted-F1 of 93.8% in detecting COVID+, pneumonia+, and healthy subjects. CONCLUSIONS The AI system demonstrated excellent diagnostic performance in identifying COVID-19 and typical bacterial pneumonia in CXRs acquired in the emergency setting.
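The agreement and per-class metrics reported above (Cohen's κ, sensitivity, specificity, weighted F1) can be computed from a three-class confusion matrix as sketched below on toy labels; the class coding (0 = healthy, 1 = COVID-19+, 2 = pneumonia+) is an assumption for illustration.

```python
# Agreement and per-class metrics from a 3-class confusion matrix (toy labels).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=500)                          # reference reads
y_ai = np.where(rng.random(500) < 0.9, y_true,                 # AI agrees 90% of the time
                rng.integers(0, 3, size=500))

print("Cohen's kappa:", cohen_kappa_score(y_true, y_ai))
print("weighted F1:", f1_score(y_true, y_ai, average="weighted"))

cm = confusion_matrix(y_true, y_ai)
for c in range(3):                                             # one-vs-rest sensitivity/specificity
    tp = cm[c, c]; fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp; tn = cm.sum() - tp - fn - fp
    print(f"class {c}: sensitivity={tp/(tp+fn):.3f}, specificity={tn/(tn+fp):.3f}")
```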
Affiliation(s)
- Davide Ippolito
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- School of Medicine, University of Milano-Bicocca, Via Cadore 48, 20900 Monza, Italy
- Cesare Maino
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Davide Gandola
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Paolo Niccolò Franco
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Radu Miron
- Sentic Lab, Strada Elena Doamna 20, 700398 Iași, Romania
- Vlad Barbu
- Sentic Lab, Strada Elena Doamna 20, 700398 Iași, Romania
- Rocco Corso
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Mihaela Elena Breaban
- Faculty of Computer Science, “Alexandru Ioan Cuza” University of Iasi, Strada General Henri Mathias Berthelot 16, 700483 Iași, Romania
7
Yan P, Sun W, Li X, Li M, Jiang Y, Luo H. PKDN: Prior Knowledge Distillation Network for bronchoscopy diagnosis. Comput Biol Med 2023; 166:107486. PMID: 37757599. DOI: 10.1016/j.compbiomed.2023.107486.
Abstract
Bronchoscopy plays a crucial role in diagnosing and treating lung diseases. The deep learning-based diagnostic system for bronchoscopic images can assist physicians in accurately and efficiently diagnosing lung diseases, enabling patients to undergo timely pathological examinations and receive appropriate treatment. However, the existing diagnostic methods overlook the utilization of prior knowledge of medical images, and the limited feature extraction capability hinders precise focus on lesion regions, consequently affecting the overall diagnostic effectiveness. To address these challenges, this paper proposes a prior knowledge distillation network (PKDN) for identifying lung diseases through bronchoscopic images. The proposed method extracts color and edge features from lesion images using the prior knowledge guidance module, and subsequently enhances spatial and channel features by employing the dynamic spatial attention module and gated channel attention module, respectively. Finally, the extracted features undergo refinement and self-regulation through feature distillation. Furthermore, decoupled distillation is implemented to balance the importance of target and non-target class distillation, thereby enhancing the diagnostic performance of the network. The effectiveness of the proposed method is validated on the bronchoscopic dataset provided by Harbin Medical University Cancer Hospital, which consists of 2,029 bronchoscopic images from 200 patients. Experimental results demonstrate that the proposed method achieves an accuracy of 94.78% and an AUC of 98.17%, outperforming other methods significantly in diagnostic performance. These results indicate that the computer-aided diagnostic system based on PKDN provides satisfactory accuracy in diagnosing lung diseases during bronchoscopy.
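The decoupled distillation mentioned above, weighting target-class and non-target-class knowledge separately, can be sketched as follows; the split follows the general decoupled-KD idea, and PKDN's exact formulation may differ.

```python
# Decoupled distillation sketch: separate target-class and non-target-class
# KL terms with their own weights (illustrative, not PKDN's exact loss).
import torch
import torch.nn.functional as F

def decoupled_kd(student, teacher, labels, alpha=1.0, beta=2.0, T=2.0):
    """Target-class and non-target-class distillation terms, weighted separately."""
    B, C = student.shape
    mask = F.one_hot(labels, C).bool()
    ps, pt = F.softmax(student / T, dim=1), F.softmax(teacher / T, dim=1)

    # Target-class term: binary (target vs. all other classes) distributions.
    s_bin = torch.stack([ps[mask], 1 - ps[mask]], dim=1)
    t_bin = torch.stack([pt[mask], 1 - pt[mask]], dim=1)
    tckd = F.kl_div(s_bin.log(), t_bin, reduction="batchmean") * T * T

    # Non-target term: distributions re-normalised over the remaining classes.
    s_nt = F.log_softmax(student[~mask].view(B, C - 1) / T, dim=1)
    t_nt = F.softmax(teacher[~mask].view(B, C - 1) / T, dim=1)
    nckd = F.kl_div(s_nt, t_nt, reduction="batchmean") * T * T

    return alpha * tckd + beta * nckd

student = torch.randn(4, 5, requires_grad=True)   # lesion-class logits from the student
teacher = torch.randn(4, 5)                       # logits from the (frozen) teacher branch
labels = torch.randint(0, 5, (4,))
decoupled_kd(student, teacher, labels).backward()
```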
Affiliation(s)
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Weiling Sun
- Department of Endoscope, Harbin Medical University Cancer Hospital, Harbin 150040, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
8
de Vries BM, Zwezerijnen GJC, Burchell GL, van Velden FHP, Menke-van der Houven van Oordt CW, Boellaard R. Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review. Front Med (Lausanne) 2023; 10:1180773. PMID: 37250654. PMCID: PMC10213317. DOI: 10.3389/fmed.2023.1180773.
Abstract
Rationale Deep learning (DL) has demonstrated remarkable performance in diagnostic imaging for various diseases and modalities and therefore has high potential to be used as a clinical tool. However, current practice shows low deployment of these algorithms in clinical practice, because DL algorithms lack transparency and trust due to their underlying black-box mechanism. For successful deployment, explainable artificial intelligence (XAI) could be introduced to close the gap between medical professionals and DL algorithms. In this literature review, XAI methods available for magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) imaging are discussed and future suggestions are made. Methods PubMed, Embase.com, and Clarivate Analytics/Web of Science Core Collection were screened. Articles were considered eligible for inclusion if XAI was used (and well described) to explain the behavior of a DL model used in MR, CT, or PET imaging. Results A total of 75 articles were included, of which 54 described post hoc XAI methods, 17 described ad hoc XAI methods, and 4 described both. Major variations in performance are seen between the methods. Overall, post hoc XAI lacks the ability to provide class-discriminative and target-specific explanations. Ad hoc XAI seems to tackle this because of its intrinsic ability to explain. However, quality control of the XAI methods is rarely applied, and therefore systematic comparison between the methods is difficult. Conclusion There is currently no clear consensus on how XAI should be deployed in order to close the gap between medical professionals and DL algorithms for clinical implementation. We advocate for systematic technical and clinical quality assessment of XAI methods. Also, to ensure end-to-end unbiased and safe integration of XAI in the clinical workflow, (anatomical) data minimization and quality control methods should be included.
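A minimal example of the post hoc XAI family discussed in the review is Grad-CAM, sketched below with forward/backward hooks on an untrained ResNet-18; in practice the model would be a trained MR/CT/PET network and the input a real image.

```python
# Minimal post hoc Grad-CAM sketch using forward/backward hooks (untrained
# ResNet-18 as a stand-in for a trained diagnostic model).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=2).eval()
feats, grads = {}, {}
layer = model.layer4                                   # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                        # stand-in for an imaging slice
score = model(x)[0, 1]                                 # logit of the class to explain
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map in [0, 1]
```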
Affiliation(s)
- Bart M. de Vries
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Gerben J. C. Zwezerijnen
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
9
Althaqafi T, AL-Ghamdi ASALM, Ragab M. Artificial Intelligence Based COVID-19 Detection and Classification Model on Chest X-ray Images. Healthcare (Basel) 2023; 11:1204. PMID: 37174746. PMCID: PMC10177894. DOI: 10.3390/healthcare11091204.
Abstract
Diagnostic and predictive models of disease have been growing rapidly due to developments in the field of healthcare. Accurate and early diagnosis of COVID-19 is essential for controlling the spread of this deadly disease and reducing its death rate. The chest computed tomography (CT) scan is an effective tool for the diagnosis and early management of COVID-19, since the virus mainly targets the respiratory system. Chest X-ray (CXR) images are also extremely helpful in the effective diagnosis of COVID-19 due to their rapid outcomes, cost-effectiveness, and availability. Although radiological image-based diagnosis is faster and achieves a better recognition rate in the early phase of an epidemic, it requires healthcare experts to interpret the images. Thus, Artificial Intelligence (AI) technologies, such as deep learning (DL) models, play an integral part in developing automated diagnosis processes using CXR images. Therefore, this study designs a sine cosine optimization with DL-based disease detection and classification (SCODL-DDC) technique for COVID-19 on CXR images. The proposed SCODL-DDC technique examines CXR images to identify and classify the occurrence of COVID-19. In particular, the SCODL-DDC technique uses the EfficientNet model for feature vector generation, with its hyperparameters adjusted by the SCO algorithm. Furthermore, a quantum neural network (QNN) model is employed for the COVID-19 classification process. Finally, the equilibrium optimizer (EO) is exploited for optimum parameter selection of the QNN model, showing the novelty of the work. The experimental results of the SCODL-DDC method exhibit superior performance over other approaches.
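The sine cosine optimization component can be illustrated with a generic sine cosine algorithm (SCA) minimizing a toy surrogate objective; the paper's specific coupling of SCO with EfficientNet hyperparameters and a QNN classifier is not reproduced here, and the bounds and objective below are purely illustrative.

```python
# Generic sine cosine algorithm (SCA) sketch for hyperparameter search on a
# toy objective (not the paper's SCODL-DDC pipeline).
import numpy as np

def sca_minimize(objective, bounds, n_agents=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_agents, len(bounds)))
    best = min(pos, key=objective).copy()
    for t in range(n_iter):
        r1 = 2.0 * (1 - t / n_iter)                     # shrink: exploration -> exploitation
        r2 = rng.uniform(0, 2 * np.pi, pos.shape)
        r3 = rng.uniform(0, 2, pos.shape)
        r4 = rng.uniform(size=pos.shape)
        step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * r1
        pos = np.clip(pos + step * np.abs(r3 * best - pos), lo, hi)
        cand = min(pos, key=objective)
        if objective(cand) < objective(best):
            best = cand.copy()
    return best

# Toy stand-in: tune (log10 learning rate, dropout) against a quadratic surrogate.
obj = lambda p: (p[0] + 3.0) ** 2 + (p[1] - 0.3) ** 2
print(sca_minimize(obj, bounds=[(-6, -1), (0.0, 0.8)]))
```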
Affiliation(s)
- Turki Althaqafi
- Information Systems Department, HECI School, Dar Al-Hekma University, Jeddah 34801, Saudi Arabia
- Abdullah S. AL-Malaise AL-Ghamdi
- Information Systems Department, HECI School, Dar Al-Hekma University, Jeddah 34801, Saudi Arabia
- Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
10
Li X, Li M, Yan P, Li G, Jiang Y, Luo H, Yin S. Deep Learning Attention Mechanism in Medical Image Analysis: Basics and Beyonds. International Journal of Network Dynamics and Intelligence 2023:93-116. DOI: 10.53941/ijndi0201006.
Abstract
With the improvement of hardware computing power and the development of deep learning algorithms, a revolution of "artificial intelligence (AI) + medical image" is taking place. Benefiting from diversified modern medical measurement equipment, a large number of medical images are produced in the clinical process. These images improve the diagnostic accuracy of doctors but also increase their labor burden. Deep learning technology is expected to provide auxiliary diagnosis and improve diagnostic efficiency. At present, deep learning combined with attention mechanisms is a research hotspot and has achieved state-of-the-art results in many medical image tasks. This paper reviews deep learning attention methods in medical image analysis. A comprehensive literature survey is first conducted to analyze the keywords and literature. Then, we introduce the development and technical characteristics of the attention mechanism. For its application in medical image analysis, we summarize the related methods in medical image classification, segmentation, detection, and enhancement. The remaining challenges, potential solutions, and future research directions are also discussed.
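One of the basic building blocks surveyed in this review, channel attention in the squeeze-and-excitation style, can be sketched as follows; it is a generic example rather than a specific method from the paper, and the channel count and reduction ratio are arbitrary.

```python
# Squeeze-and-excitation style channel attention block (generic sketch).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial context
        self.fc = nn.Sequential(                         # excitation: per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                                # x: (B, C, H, W)
        w = self.fc(self.pool(x).flatten(1)).view(x.size(0), -1, 1, 1)
        return x * w                                     # recalibrate feature channels

feat = torch.randn(2, 64, 32, 32)                        # e.g. a CNN feature map
out = ChannelAttention(64)(feat)                         # same shape, channel-reweighted
```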