1
Shamas M, Tauseef H, Ahmad A, Raza A, Ghadi YY, Mamyrbayev O, Momynzhanova K, Alahmadi TJ. Classification of pulmonary diseases from chest radiographs using deep transfer learning. PLoS One 2025; 20:e0316929. PMID: 40096069; PMCID: PMC11913265; DOI: 10.1371/journal.pone.0316929.
Abstract
Pulmonary diseases are among the leading causes of disability and death worldwide, and early diagnosis can reduce the fatality rate. Chest radiographs are commonly used to diagnose pulmonary diseases, but in clinical practice this is challenging because of overlapping and complex anatomical structures, variability across radiographs, and differences in image quality, so a medical specialist with extensive professional experience is typically required. With the use of Convolutional Neural Networks in the medical field, diagnosis can be improved by automatically detecting and classifying these diseases. This paper explores the effectiveness of Convolutional Neural Networks and transfer learning for improving the predictive outcomes for fifteen different pulmonary diseases using chest radiographs. Our proposed deep transfer learning-based computational model achieved promising results compared to existing state-of-the-art methods, reporting an overall specificity of 97.92%, a sensitivity of 97.30%, a precision of 97.94%, and an Area Under the Curve of 97.61%. These results suggest that the proposed model can be a valuable tool for practitioners in decision-making and in efficiently diagnosing various pulmonary diseases.
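The abstract does not specify which backbone or training schedule the authors used, so the following is only a minimal, hedged sketch of deep transfer learning for a fifteen-class chest-radiograph classifier; the ResNet-50 backbone, optimizer, and learning rate are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of deep transfer learning for 15 pulmonary-disease classes.
# The backbone (ResNet-50) and hyperparameters are illustrative assumptions;
# the paper's exact architecture and training setup are not given in the abstract.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 15  # fifteen pulmonary disease labels, per the abstract

# Load an ImageNet-pretrained backbone (torchvision >= 0.13 weights API).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained feature extractor; only the new head is trained at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a 15-way classifier head.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised training step on a mini-batch of chest radiographs."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```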
Affiliation(s)
- Muneeba Shamas
- Department of Computer Science, Lahore College for Women University, Lahore, Pakistan
- Huma Tauseef
- Department of Computer Science, Lahore College for Women University, Lahore, Pakistan
- Ashfaq Ahmad
- Department of Computer Science, MY University, Islamabad, Pakistan
- Ali Raza
- Department of Computer Science, MY University, Islamabad, Pakistan
- Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Abu Dhabi, United Arab Emirates
- Orken Mamyrbayev
- Institute of Information and Computational Technologies, Almaty, Kazakhstan
- Tahani Jaser Alahmadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
2
Hansun S, Argha A, Bakhshayeshi I, Wicaksana A, Alinejad-Rokny H, Fox GJ, Liaw ST, Celler BG, Marks GB. Diagnostic Performance of Artificial Intelligence-Based Methods for Tuberculosis Detection: Systematic Review. J Med Internet Res 2025; 27:e69068. PMID: 40053773; PMCID: PMC11928776; DOI: 10.2196/69068.
Abstract
BACKGROUND: Tuberculosis (TB) remains a significant health concern, contributing to the highest mortality among infectious diseases worldwide. However, none of the various TB diagnostic tools introduced is deemed sufficient on its own for the diagnostic pathway, so various artificial intelligence (AI)-based methods have been developed to address this issue.
OBJECTIVE: We aimed to provide a comprehensive evaluation of AI-based algorithms for TB detection across various data modalities.
METHODS: Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines, we conducted a systematic review to synthesize current knowledge on this topic. Our search across 3 major databases (Scopus, PubMed, Association for Computing Machinery [ACM] Digital Library) yielded 1146 records, of which we included 152 (13.3%) studies in our analysis. QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies version 2) was used for the risk-of-bias assessment of all included studies.
RESULTS: Radiographic biomarkers (n=129, 84.9%) and deep learning (DL; n=122, 80.3%) approaches were predominantly used, with convolutional neural networks (CNNs) using Visual Geometry Group (VGG)-16 (n=37, 24.3%), ResNet-50 (n=33, 21.7%), and DenseNet-121 (n=19, 12.5%) architectures being the most common DL approach. The majority of studies focused on model development (n=143, 94.1%) and used a single-modality approach (n=141, 92.8%). AI methods demonstrated good performance in all studies: mean accuracy=91.93% (SD 8.10%, 95% CI 90.52%-93.33%; median 93.59%, IQR 88.33%-98.32%), mean area under the curve (AUC)=93.48% (SD 7.51%, 95% CI 91.90%-95.06%; median 95.28%, IQR 91%-99%), mean sensitivity=92.77% (SD 7.48%, 95% CI 91.38%-94.15%; median 94.05%, IQR 89%-98.87%), and mean specificity=92.39% (SD 9.4%, 95% CI 90.30%-94.49%; median 95.38%, IQR 89.42%-99.19%). AI performance across different biomarker types showed mean accuracies of 92.45% (SD 7.83%), 89.03% (SD 8.49%), and 84.21% (SD 0%); mean AUCs of 94.47% (SD 7.32%), 88.45% (SD 8.33%), and 88.61% (SD 5.9%); mean sensitivities of 93.8% (SD 6.27%), 88.41% (SD 10.24%), and 93% (SD 0%); and mean specificities of 94.2% (SD 6.63%), 85.89% (SD 14.66%), and 95% (SD 0%) for radiographic, molecular/biochemical, and physiological types, respectively. AI performance across various reference standards showed mean accuracies of 91.44% (SD 7.3%), 93.16% (SD 6.44%), and 88.98% (SD 9.77%); mean AUCs of 90.95% (SD 7.58%), 94.89% (SD 5.18%), and 92.61% (SD 6.01%); mean sensitivities of 91.76% (SD 7.02%), 93.73% (SD 6.67%), and 91.34% (SD 7.71%); and mean specificities of 86.56% (SD 12.8%), 93.69% (SD 8.45%), and 92.7% (SD 6.54%) for bacteriological, human reader, and combined reference standards, respectively. The transfer learning (TL) approach showed increasing popularity (n=89, 58.6%). Notably, only 1 (0.7%) study conducted a domain-shift analysis for TB detection.
CONCLUSIONS: Findings from this review underscore the considerable promise of AI-based methods in the realm of TB detection. Future research endeavors should prioritize conducting domain-shift analyses to better simulate real-world scenarios in TB detection.
TRIAL REGISTRATION: PROSPERO CRD42023453611; https://www.crd.york.ac.uk/PROSPERO/view/CRD42023453611.
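For readers who want to see how pooled figures of the kind reported above (mean, SD, 95% CI, median, IQR of per-study accuracies) are computed, here is a small hedged sketch; the accuracy values are placeholders, not the review's actual data.

```python
# Hedged sketch: aggregate per-study accuracies into mean, SD, 95% CI, median, IQR.
# The values below are illustrative placeholders, not the 152 included studies' results.
import numpy as np
from scipy import stats

accuracies = np.array([93.6, 88.3, 98.3, 91.9, 95.0, 85.2, 99.1])  # placeholder %

n = accuracies.size
mean = accuracies.mean()
sd = accuracies.std(ddof=1)                 # sample standard deviation
sem = sd / np.sqrt(n)                       # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% critical value
ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem
median = np.median(accuracies)
q1, q3 = np.percentile(accuracies, [25, 75])  # interquartile range bounds

print(f"mean={mean:.2f}% (SD {sd:.2f}%, 95% CI {ci_low:.2f}%-{ci_high:.2f}%; "
      f"median {median:.2f}%, IQR {q1:.2f}%-{q3:.2f}%)")
```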
Affiliation(s)
- Seng Hansun
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Ahmadreza Argha
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- Ivan Bakhshayeshi
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Arya Wicaksana
- Informatics Department, Universitas Multimedia Nusantara, Tangerang, Indonesia
- Hamid Alinejad-Rokny
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Greg J Fox
- NHMRC Clinical Trials Centre, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Siaw-Teng Liaw
- School of Population Health and School of Clinical Medicine, UNSW Sydney, Sydney, Australia
- Branko G Celler
- Biomedical Systems Research Laboratory, School of Electrical Engineering and Telecommunications, UNSW Sydney, Sydney, Australia
- Guy B Marks
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Burnet Institute, Melbourne, Australia
3
Chen C, Mat Isa NA, Liu X. A review of convolutional neural network based methods for medical image classification. Comput Biol Med 2025; 185:109507. PMID: 39631108; DOI: 10.1016/j.compbiomed.2024.109507.
Abstract
This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNNs in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main CNN techniques applied to medical image classification, which are also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNNs have great potential in medical image classification tasks and have achieved good results, clinical application remains difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
Affiliation(s)
- Chao Chen
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China
- Nor Ashidi Mat Isa
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia.
- Xin Liu
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
4
Al-Haddad LA, Alawee WH, Basem A. Advancing task recognition towards artificial limbs control with ReliefF-based deep neural network extreme learning. Comput Biol Med 2024; 169:107894. PMID: 38154161; DOI: 10.1016/j.compbiomed.2023.107894.
Abstract
In the rapidly advancing field of biomedical engineering, effective real-time control of artificial limbs is a pressing research concern. Addressing this, the study introduces a method for improving task recognition in prosthetic control systems by combining ReliefF-based feature selection with Deep Neural Networks (DNNs). The study leverages the MILimbEEG dataset, a comprehensive collection of EEG signals, to calculate the statistical features Arithmetic Mean (AM), Standard Deviation (SD), and Skewness (S) across various motor activities. Supreme Feature Selection (SFS) of the adopted time-domain features was performed using the ReliefF algorithm. The best-scoring DNN-ReliefF model demonstrated remarkable performance, achieving accuracy, precision, and recall of 97.4%, 97.3%, and 97.4%, respectively. In contrast, a traditional DNN model yielded accuracy, precision, and recall of 50.8%, 51.1%, and 50.8%, highlighting the significant improvements made possible by incorporating SFS. This stark contrast underscores the transformative potential of ReliefF and positions the DNN-ReliefF model as a robust platform for forthcoming advances in real-time prosthetic control systems.
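A minimal sketch of the time-domain feature extraction described above (arithmetic mean, standard deviation, and skewness per EEG channel) is given below; the window shape and channel count are assumptions, and the ReliefF ranking step is only indicated, not reproduced.

```python
# Hedged sketch: compute AM, SD, and Skewness features per EEG channel window.
# Shapes and channel counts are illustrative; the MILimbEEG preprocessing details
# and the exact ReliefF configuration are not specified in the abstract.
import numpy as np
from scipy.stats import skew

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_channels, n_samples) EEG segment -> (n_channels * 3,) feature vector."""
    am = window.mean(axis=1)            # Arithmetic Mean per channel
    sd = window.std(axis=1, ddof=1)     # Standard Deviation per channel
    s = skew(window, axis=1)            # Skewness per channel
    return np.concatenate([am, sd, s])

# Example with a fake 16-channel, 1-second window at 250 Hz (placeholder data).
rng = np.random.default_rng(0)
fake_window = rng.standard_normal((16, 250))
features = extract_features(fake_window)   # 48-dimensional feature vector
# These feature vectors would then be ranked with ReliefF and fed to the DNN.
```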
Affiliation(s)
- Luttfi A Al-Haddad
- Training and Workshops Center, University of Technology- Iraq, Baghdad, Iraq.
- Wissam H Alawee
- Training and Workshops Center, University of Technology- Iraq, Baghdad, Iraq; Control and Systems Engineering Department, University of Technology- Iraq, Baghdad, Iraq
- Ali Basem
- Air Conditioning Engineering Department, Faculty of Engineering, Warith Al-Anbiyaa University, Iraq
5
Al-Sheikh MH, Al Dandan O, Al-Shamayleh AS, Jalab HA, Ibrahim RW. Multi-class deep learning architecture for classifying lung diseases from chest X-Ray and CT images. Sci Rep 2023; 13:19373. PMID: 37938631; PMCID: PMC10632494; DOI: 10.1038/s41598-023-46147-3.
Abstract
Medical imaging is considered a suitable alternative testing method for the detection of lung diseases. Many researchers have been working to develop various detection methods that have aided in the prevention of lung diseases. To better understand the condition of a lung disease infection, chest X-Ray and CT scans are used to check the disease's spread throughout the lungs. This study proposes an automated system for the detection of multiple lung diseases in X-Ray and CT scans. A customized convolutional neural network (CNN) and two pre-trained deep learning models, together with a new image enhancement model, are proposed for image classification. The proposed lung disease detection comprises two main steps: pre-processing and deep learning classification. The new image enhancement algorithm is developed in the pre-processing step using a k-symbol Lerch transcendent function model, which enhances images based on image pixel probability. In the classification step, the customized CNN architecture and two pre-trained CNN models, AlexNet and VGG16, are developed. The proposed approach was tested on publicly available image datasets (CT and X-Ray), and the results showed classification accuracy, sensitivity, and specificity of 98.60%, 98.40%, and 98.50% for the X-Ray dataset, and 98.80%, 98.50%, and 98.40% for the CT dataset, respectively. Overall, the obtained results highlight the advantage of the image enhancement model as a first step in processing.
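The k-symbol Lerch transcendent enhancement step is specific to the paper and is not reproduced here; as a hedged sketch of the classification step only, adapting one of the named pre-trained models (VGG16) to lung-disease classes could look like this, with the class count and transforms being illustrative assumptions.

```python
# Hedged sketch of the classification step only: adapt a pre-trained VGG16 to
# multi-class lung disease labels. The paper's enhancement pre-processing is
# omitted; the class count and transforms are illustrative assumptions.
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # placeholder: the exact number of disease classes is dataset-dependent

# Standard ImageNet-style pre-processing stands in for the paper's enhancement model.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

vgg16 = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
# Swap the final classifier layer (index 6) for the lung-disease output head.
vgg16.classifier[6] = nn.Linear(vgg16.classifier[6].in_features, NUM_CLASSES)
```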
Affiliation(s)
- Mona Hmoud Al-Sheikh
- Physiology Department, College of Medicine, Imam Abdulrahman Bin Faisal University, 34212, Dammam, Saudi Arabia
- Omran Al Dandan
- Department of Radiology, College of Medicine, Imam Abdulrahman Bin Faisal University, 34212, Dammam, Saudi Arabia
- Ahmad Sami Al-Shamayleh
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Al-Ahliyya Amman University, Al-Salt, Amman, 19328, Jordan
- Hamid A Jalab
- Information and Communication Technology Research Group, Scientific Research Center, Al-Ayen University, Nile Street, 64001, Thi-Qar, Iraq.
- Rabha W Ibrahim
- Information and Communication Technology Research Group, Scientific Research Center, Al-Ayen University, Nile Street, 64001, Thi-Qar, Iraq
- Department of Mathematics, Mathematics Research Center, Near East University, Near East Boulevard, PC: 99138, Nicosia/Mersin 10, Turkey
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, 1102 2801, Lebanon