1. Alaoui Abdalaoui Slimani F, Bentourkia M. Improving deep learning U-Net++ by discrete wavelet and attention gate mechanisms for effective pathological lung segmentation in chest X-ray imaging. Phys Eng Sci Med 2025; 48:59-73. [PMID: 39495449] [DOI: 10.1007/s13246-024-01489-8]
Abstract
Since its introduction in 2015, the U-Net architecture has played a crucial role in deep learning for medical imaging. Recognized for its ability to accurately discriminate small structures, the U-Net has received more than 2600 citations in academic literature, which has motivated continuous enhancements to its architecture. In hospitals, chest radiography is the primary diagnostic method for pulmonary disorders; however, accurate lung segmentation in chest X-ray images remains a challenging task, primarily due to the significant variations in lung shapes and the presence of intense opacities caused by various diseases. This article introduces a new approach for the segmentation of lung X-ray images. Traditional max-pooling operations, commonly employed in conventional U-Net++ models, were replaced with the discrete wavelet transform (DWT), offering a more accurate down-sampling technique that potentially captures detailed features of lung structures. Additionally, we used attention gate (AG) mechanisms that enable the model to focus on specific regions in the input image, which improves the accuracy of the segmentation process. When compared with current techniques such as Atrous Convolutions, Improved FCN, Improved SegNet, U-Net, and U-Net++, our method (U-Net++-DWT) showed remarkable efficacy, particularly on the Japanese Society of Radiological Technology dataset, achieving an accuracy of 99.1%, specificity of 98.9%, sensitivity of 97.8%, Dice coefficient of 97.2%, and Jaccard index of 96.3%. Its performance on the Montgomery County dataset further demonstrated its consistent effectiveness. Moreover, when applied to additional datasets of Chest X-ray Masks and Labels and COVID-19, our method maintained high performance levels, achieving up to 99.3% accuracy, underscoring its adaptability and potential for broad applications in medical imaging diagnostics.
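The paper's core substitution, replacing 2x2 max-pooling with a DWT down-sampling step, can be illustrated with the Haar wavelet, whose LL (approximation) band halves each spatial dimension just as pooling does. A minimal NumPy sketch (our illustration, not the authors' implementation):

```python
import numpy as np

def haar_dwt_downsample(x):
    """LL (approximation) band of the 2D Haar transform: an orthogonal
    alternative to max-pooling that halves each spatial dimension."""
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    return (a + b + c + d) / 2.0  # orthonormal 2D Haar scaling

def max_pool(x):
    """Standard 2x2 max-pooling for comparison."""
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    return np.maximum(np.maximum(a, b), np.maximum(c, d))

img = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_dwt_downsample(img)  # smooth average-like summary
mp = max_pool(img)             # keeps only the local maxima
```

Unlike max-pooling, the LL band comes from an invertible transform, so the discarded detail bands (LH, HL, HH) could in principle be retained for the decoder, which is part of what makes wavelet down-sampling attractive here.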
Affiliation(s)
- M'hamed Bentourkia
- Department of Nuclear Medicine and Radiobiology, 12th Avenue North, 3001, Sherbrooke, QC, J1H5N4, Canada.
2. Altan G, Narli SS. DeepCOVIDNet-CXR: deep learning strategies for identifying COVID-19 on enhanced chest X-rays. BIOMED ENG-BIOMED TE 2025; 70:21-35. [PMID: 39370946] [DOI: 10.1515/bmt-2021-0272]
Abstract
OBJECTIVES COVID-19 is one of the major recent epidemics, with accelerating mortality and prevalence worldwide. Most literature on chest X-ray-based COVID-19 analysis has focused on multi-case classification (COVID-19, pneumonia, and normal), leveraging the advantages of deep learning. However, the limited number of chest X-rays with COVID-19 is a prominent deficiency for clinical relevance. This study aims to evaluate COVID-19 identification performance using adaptive histogram equalization (AHE) to feed ConvNet architectures with reliable lung and airway anatomy. METHODS We experimented with balanced small- and large-scale COVID-19 databases using left lung, right lung, and complete chest X-rays with various AHE parameters. Across multiple strategies, we applied transfer learning to four ConvNet architectures (MobileNet, DarkNet19, VGG16, and AlexNet). RESULTS Whereas DarkNet19 reached the highest multi-case identification performance with an accuracy rate of 98.26 % on the small-scale dataset, VGG16 achieved the best generalization performance with an accuracy rate of 95.04 % on the large-scale dataset. CONCLUSIONS Our study is one of the pioneering approaches that analyzes 3615 COVID-19 cases and specifies the AHE parameters most responsible for ConvNet performance in multi-case classification.
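As a rough illustration of the histogram-equalization family the authors tune, here is plain global histogram equalization in NumPy; the adaptive (AHE) variant applies the same CDF remapping per local tile and typically clips the histogram, which this sketch omits:

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization: remap intensities so the cumulative
    distribution becomes approximately uniform.
    Assumes the image has at least two distinct intensity levels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.round(
        (cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)
    ).astype(img.dtype)
    return lut[img]  # apply the lookup table per pixel

img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
out = hist_equalize(img)  # stretches [50, 200] to span [0, 255]
```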
Affiliation(s)
- Gokhan Altan
- Computer Engineering Department, Iskenderun Technical University, Hatay, Türkiye
3. Bercea CI, Wiestler B, Rueckert D, Schnabel JA. Evaluating normative representation learning in generative AI for robust anomaly detection in brain imaging. Nat Commun 2025; 16:1624. [PMID: 39948337] [PMCID: PMC11825664] [DOI: 10.1038/s41467-025-56321-y]
Abstract
Normative representation learning focuses on understanding the typical anatomical distributions in large datasets of medical scans from healthy individuals. Generative Artificial Intelligence (AI) leverages this attribute to synthesize images that accurately reflect these normative patterns. This capability allows AI models to detect and correct anomalies in new, unseen pathological data without the need for expert labeling. Traditional evaluations often assess only anomaly detection performance, overlooking the crucial role of normative learning. In our analysis, we introduce novel metrics specifically designed to evaluate this facet in AI models. We apply these metrics across various generative AI frameworks, including advanced diffusion models, and rigorously test them against complex and diverse brain pathologies. In addition, we conduct a large multi-reader study to compare these metrics to experts' evaluations. Our analysis demonstrates that models proficient in normative learning exhibit exceptional versatility, adeptly detecting a wide range of unseen medical conditions. Our code is available at https://github.com/compai-lab/2024-ncomms-bercea.git.
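The underlying detection principle, comparing an input against its "pseudo-healthy" reconstruction and flagging large residuals, can be sketched in a few lines (the arrays and threshold here are illustrative, not the paper's metrics):

```python
import numpy as np

def anomaly_map(image, pseudo_healthy, threshold=0.2):
    """Residual between an input and its normative ('pseudo-healthy')
    reconstruction; large residuals flag candidate anomalies."""
    residual = np.abs(image - pseudo_healthy)
    return residual, residual > threshold

img = np.array([0.1, 0.1, 0.9, 0.1])    # one anomalous pixel
recon = np.array([0.1, 0.1, 0.1, 0.1])  # model reproduces only normal anatomy
res, mask = anomaly_map(img, recon)     # mask marks the deviating pixel
```

The paper's point is that the quality of `recon`, i.e. how faithfully the generative model reproduces normal anatomy, is what the new metrics evaluate, rather than only the final mask.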
Affiliation(s)
- Cosmin I Bercea
- Chair of Computational Imaging and AI in Medicine, Technical University of Munich (TUM), Munich, Germany.
- Helmholtz AI and Helmholtz Center Munich, Munich, Germany.
- Benedikt Wiestler
- Chair of AI for Image-Guided Diagnosis and Therapy, TUM School of Medicine and Health, Munich, Germany
- Munich Center for Machine Learning (MCML), Munich, Germany
- Daniel Rueckert
- Munich Center for Machine Learning (MCML), Munich, Germany
- Chair of AI in Healthcare and Medicine, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany
- Department of Computing, Imperial College London, London, UK
- Julia A Schnabel
- Chair of Computational Imaging and AI in Medicine, Technical University of Munich (TUM), Munich, Germany
- Helmholtz AI and Helmholtz Center Munich, Munich, Germany
- Munich Center for Machine Learning (MCML), Munich, Germany
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
4. Wang X, Lu Z, Huang S, Ting Y, Ting JSZ, Chen W, Tan CH, Huang W. TransMVAN: Multi-view Aggregation Network with Transformer for Pneumonia Diagnosis. Journal of Imaging Informatics in Medicine 2025; 38:60-73. [PMID: 38977615] [PMCID: PMC11810860] [DOI: 10.1007/s10278-024-01169-9]
Abstract
Automated and accurate classification of pneumonia plays a crucial role in improving the performance of computer-aided diagnosis systems for chest X-ray images. Nevertheless, it is a challenging task due to the difficulty of learning the complex structural information of lung abnormalities from chest X-ray images. In this paper, we propose a multi-view aggregation network with Transformer (TransMVAN) for pneumonia classification in chest X-ray images. Specifically, we propose to incorporate the knowledge from glance and focus views to enrich the feature representation of lung abnormalities. Moreover, to capture the complex relationships among different lung regions, we propose a bi-directional multi-scale vision Transformer (biMSVT), which propagates informative messages between different lung regions in two directions. In addition, we propose a gated multi-view aggregation (GMVA) module to adaptively select feature information from the glance and focus views for further performance enhancement of pneumonia diagnosis. Our proposed method achieves AUCs of 0.9645 and 0.9550 for pneumonia classification on two different chest X-ray image datasets. In addition, it achieves an AUC of 0.9761 for classifying positive versus negative polymerase chain reaction (PCR) results. Furthermore, it attains an AUC of 0.9741 for classifying non-COVID-19 pneumonia, COVID-19 pneumonia, and normal cases. Experimental results demonstrate the effectiveness of our method over the comparison methods for pneumonia diagnosis from chest X-ray images.
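The gating idea behind GMVA, a learned gate choosing per feature how much of the glance versus focus representation to keep, can be sketched as follows; the weight matrices here are zero-initialized placeholders, not trained TransMVAN parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(glance, focus, w_g, w_f, b):
    """Per-feature gate g in (0, 1) blends two view representations:
    fused = g * glance + (1 - g) * focus."""
    g = sigmoid(glance @ w_g + focus @ w_f + b)
    return g * glance + (1.0 - g) * focus

glance = np.array([1.0, 0.0])  # coarse whole-image features
focus = np.array([0.0, 1.0])   # features from a zoomed lung region
# Hypothetical untrained parameters: zero weights give g = 0.5 everywhere,
# i.e. an equal blend of the two views.
w_g = np.zeros((2, 2))
w_f = np.zeros((2, 2))
b = np.zeros(2)
fused = gated_fusion(glance, focus, w_g, w_f, b)
```

In training, the gate parameters would be learned end-to-end so that the network weighs the view that is more informative for each feature dimension.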
Affiliation(s)
- Xiaohong Wang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Zhongkang Lu
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Su Huang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Yonghan Ting
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Jordan Sim Zheng Ting
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Wenxiang Chen
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, 308232, Singapore, Singapore
- Weimin Huang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore.
5. Kamalakannan N, Macharla SR, Kanimozhi M, Sudhakar MS. Exponential Pixelating Integral transform with dual fractal features for enhanced chest X-ray abnormality detection. Comput Biol Med 2024; 182:109093. [PMID: 39232407] [DOI: 10.1016/j.compbiomed.2024.109093]
Abstract
The heightened prevalence of respiratory disorders, exacerbated by a significant upswing in fatalities due to the novel coronavirus, underscores the critical need for early detection and timely intervention, which can profoundly impact and safeguard numerous lives. Chest radiography stands out as an essential and economically viable medical imaging approach for diagnosing and assessing the severity of diverse respiratory disorders. However, their detection in chest X-rays is a cumbersome task even for well-trained radiologists owing to low contrast, overlapping tissue structures, subjective variability, and the presence of noise. To address these issues, a novel analytical model termed the Exponential Pixelating Integral is introduced in this work for the automatic detection of infections in chest X-rays. Initially, the Exponential Pixelating Integral enhances pixel intensities to overcome low contrast; the enhanced images are then polar-transformed and represented using the locally invariant Mandelbrot and Julia fractal geometries for effective distinction of structural features. The collated features, the Exponential Pixelating Integral with dually characterized fractal features, are then classified by non-parametric multivariate adaptive regression splines, which establish an ensemble model between each pair of classes for effective diagnosis of diverse diseases. Rigorous analysis of the proposed classification framework on large benchmarked medical datasets showcases its superiority over its peers, registering higher classification accuracy and F1 scores ranging from 98.46-99.45 % and 96.53-98.10 % respectively, making it a precise and interpretable automated system for diagnosing respiratory disorders.
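The Julia geometry used here for texture description is built from the escape-time iteration z → z² + c. A small NumPy sketch of such escape counts over a grid, which can serve as locally self-similar texture descriptors (the constant c, grid, and iteration budget are illustrative, not the paper's parameters):

```python
import numpy as np

def julia_escape_counts(z_grid, c=-0.4 + 0.6j, max_iter=20, radius=2.0):
    """Escape-time map for the Julia iteration z -> z^2 + c: count how
    many iterations each point survives before |z| exceeds `radius`."""
    z = z_grid.copy()
    counts = np.zeros(z.shape, dtype=int)
    alive = np.ones(z.shape, dtype=bool)
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c   # iterate only surviving points
        alive &= np.abs(z) <= radius   # points that escaped stay frozen
        counts[alive] += 1
    return counts

xs = np.linspace(-1.5, 1.5, 8)
grid = xs[None, :] + 1j * xs[:, None]  # small complex grid
feat = julia_escape_counts(grid)       # integer texture descriptor
```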
Affiliation(s)
- M Kanimozhi
- School of Electrical & Electronics, Sathyabama Institute of Science and Technology, Chennai, Tamilnadu, India
- M S Sudhakar
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India.
6. Hong Q, Lin L, Li Z, Li Q, Yao J, Wu Q, Liu K, Tian J. A Distance Transformation Deep Forest Framework With Hybrid-Feature Fusion for CXR Image Classification. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:14633-14644. [PMID: 37285251] [DOI: 10.1109/tnnls.2023.3280646]
Abstract
Detecting pneumonia, especially coronavirus disease 2019 (COVID-19), from chest X-ray (CXR) images is one of the most effective ways for disease diagnosis and patient triage. The application of deep neural networks (DNNs) for CXR image classification is limited due to the small sample size of well-curated data. To tackle this problem, this article proposes a distance transformation-based deep forest framework with hybrid-feature fusion (DTDF-HFF) for accurate CXR image classification. In our proposed method, hybrid features of CXR images are extracted in two ways: hand-crafted feature extraction and multigrained scanning. Different types of features are fed into different classifiers in the same layer of the deep forest (DF), and the prediction vector obtained at each layer is transformed into a distance vector based on a self-adaptive scheme. The distance vectors obtained by different classifiers are fused and concatenated with the original features, then input into the corresponding classifier at the next layer. The cascade grows until DTDF-HFF can no longer gain benefits from a new layer. We compare the proposed method with other methods on public CXR datasets, and the experimental results show that the proposed method can achieve state-of-the-art (SOTA) performance. The code will be made publicly available at https://github.com/hongqq/DTDF-HFF.
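One simple instance of turning a layer's prediction vector into a distance vector, here Euclidean distances to one-hot class prototypes, concatenated back onto the original features as between cascade layers, can be sketched as follows (the paper's self-adaptive scheme is more elaborate than this):

```python
import numpy as np

def prediction_to_distance(prob, n_classes=3):
    """Distances from a class-probability vector to each one-hot class
    prototype: a minimal 'distance transformation' of predictions."""
    prototypes = np.eye(n_classes)
    return np.linalg.norm(prototypes - prob, axis=1)

def augment_features(features, prob):
    """Concatenate the distance vector with the original features,
    forming the input for the next cascade layer."""
    return np.concatenate([features, prediction_to_distance(prob, prob.size)])

prob = np.array([1.0, 0.0, 0.0])  # confident prediction of class 0
dist = prediction_to_distance(prob)
aug = augment_features(np.array([5.0]), prob)
```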
7. Singh T, Mishra S, Kalra R, Satakshi, Kumar M, Kim T. COVID-19 severity detection using chest X-ray segmentation and deep learning. Sci Rep 2024; 14:19846. [PMID: 39191941] [PMCID: PMC11349901] [DOI: 10.1038/s41598-024-70801-z]
Abstract
COVID-19 has had a significant global impact on health, the economy, education, and daily life. The disease can range from mild to severe, with individuals over 65 or those with underlying medical conditions being more susceptible to severe illness. Early testing and isolation are vital due to the virus's variable incubation period. Chest radiographs (CXR) have gained importance as a diagnostic tool due to their efficiency and reduced radiation exposure compared to CT scans. However, the sensitivity of CXR in detecting COVID-19 may be lower. This paper introduces a deep learning framework for accurate COVID-19 classification and severity prediction using CXR images. U-Net is used for lung segmentation, achieving a precision of 0.9924. Classification is performed using a convolution-capsule network, with high true positive rates of 86% for COVID-19, 93% for pneumonia, and 85% for normal cases. Severity assessment employs ResNet50, VGG-16, and DenseNet201, with DenseNet201 showing superior accuracy. Empirical results, validated with 95% confidence intervals, confirm the framework's reliability and robustness. This integration of advanced deep learning techniques with radiological imaging enhances early detection and severity assessment, improving patient management and resource allocation in clinical settings.
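The reported segmentation precision is a pixel-wise count over predicted and reference lung masks; the standard segmentation metrics can be computed as below (our sketch, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise precision, Dice coefficient, and Jaccard index for
    binary segmentation masks."""
    tp = np.sum(pred & truth)    # true positive pixels
    fp = np.sum(pred & ~truth)   # false positive pixels
    fn = np.sum(~pred & truth)   # false negative pixels
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    return precision, dice, jaccard

pred = np.array([1, 1, 1, 0], dtype=bool)   # toy predicted mask
truth = np.array([1, 1, 0, 0], dtype=bool)  # toy reference mask
p, d, j = seg_metrics(pred, truth)
```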
Affiliation(s)
- Tinku Singh
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea
- Suryanshi Mishra
- Department of Mathematics & Statistics, SHUATS, Prayagraj, Uttar Pradesh, India
- Riya Kalra
- Indian Institute of Information Technology Allahabad, Prayagraj, Uttar Pradesh, India
- Satakshi
- Department of Mathematics & Statistics, SHUATS, Prayagraj, Uttar Pradesh, India
- Manish Kumar
- Indian Institute of Information Technology Allahabad, Prayagraj, Uttar Pradesh, India
- Taehong Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea.
8. Alkhalil M, Abbara A, Grangier C, Ekzayez A. AI in conflict zones: the potential to revitalise healthcare in Syria and beyond. BMJ Glob Health 2024; 9:e015755. [PMID: 39117373] [PMCID: PMC11404241] [DOI: 10.1136/bmjgh-2024-015755]
Affiliation(s)
- Munzer Alkhalil
- Research for Health System Strengthening in northern Syria (R4HSSS), Union for Medical and Relief Organizations, Gaziantep, Turkey
- LSE IDEAS Conflict and Civicness Research Group, The London School of Economics and Political Science, London, UK
- Aula Abbara
- Department of Infection, Imperial College London, London, UK
- Syria Public Health Network, London, UK
- Caroline Grangier
- ESSEC Business School, La Défense, France
- Antei Global, Paris, France
- Abdulkarim Ekzayez
- War Studies (Research for Health System Strengthening in northern Syria (R4HSSS)), King's College London, London, UK
- Research & Development, Syria Development Centre, London, UK
9. Dulaimy K, Pham RH, Farag A. The Impact of COVID on Health Systems: The Workforce and Telemedicine Perspective. Semin Ultrasound CT MR 2024; 45:314-317. [PMID: 38527671] [DOI: 10.1053/j.sult.2024.03.002]
Abstract
The COVID-19 pandemic significantly strained global health systems, leading to the rapid adoption of telemedicine and changes in workforce management. Previously underused, telemedicine became an essential means of delivering healthcare while adhering to physical distancing guidelines. This transition addressed longstanding barriers like connectivity issues. Simultaneously, the radiology sector innovated by widely implementing remote reading stations, which helped manage exposure risks and conserve human resources. Moreover, the pandemic highlighted the critical role of technological advancements beyond telemedicine, such as the accelerated integration of AI in diagnostics and management. This article examines these comprehensive effects, emphasizing the remote work adaptations and innovations in healthcare systems that have reshaped both healthcare delivery and workforce dynamics during the pandemic.
Affiliation(s)
- Kal Dulaimy
- Department of Radiology, UMass Chan Medical School-Baystate Medical Center, Springfield, MA
- Richard H Pham
- B.S. Biology student, Class of 2025, University of Massachusetts-Amherst, Amherst, MA
- Ahmed Farag
- Department of Radiology, UMass Chan Medical School-Baystate Medical Center, Springfield, MA.
10. Kanwal K, Asif M, Khalid SG, Liu H, Qurashi AG, Abdullah S. Current Diagnostic Techniques for Pneumonia: A Scoping Review. Sensors (Basel) 2024; 24:4291. [PMID: 39001069] [PMCID: PMC11244398] [DOI: 10.3390/s24134291]
Abstract
Community-acquired pneumonia is one of the most lethal infectious diseases, especially for infants and the elderly. Given the variety of causative agents, the accurate early detection of pneumonia is an active research area. To the best of our knowledge, scoping reviews on diagnostic techniques for pneumonia are lacking. In this scoping review, three major electronic databases were searched and the resulting research was screened. We categorized these diagnostic techniques into four classes (i.e., lab-based methods, imaging-based techniques, acoustic-based techniques, and physiological-measurement-based techniques) and summarized their recent applications. Major research has been skewed towards imaging-based techniques, especially after COVID-19. Currently, chest X-rays and blood tests are the most common tools in the clinical setting to establish a diagnosis; however, there is a need to look for safe, non-invasive, and more rapid techniques for diagnosis. Recently, some non-invasive techniques based on wearable sensors achieved reasonable diagnostic accuracy that could open a new chapter for future applications. Consequently, further research and technology development are still needed for pneumonia diagnosis using non-invasive physiological parameters to attain a better point of care for pneumonia patients.
Affiliation(s)
- Kehkashan Kanwal
- College of Speech, Language, and Hearing Sciences, Ziauddin University, Karachi 75000, Pakistan
- Muhammad Asif
- Faculty of Computing and Applied Sciences, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
- Syed Ghufran Khalid
- Department of Engineering, Faculty of Science and Technology, Nottingham Trent University, Nottingham B15 3TN, UK
- Haipeng Liu
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- Saad Abdullah
- School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
11. Biswas S, Mostafiz R, Uddin MS, Paul BK. XAI-FusionNet: Diabetic foot ulcer detection based on multi-scale feature fusion with explainable artificial intelligence. Heliyon 2024; 10:e31228. [PMID: 38803883] [PMCID: PMC11129011] [DOI: 10.1016/j.heliyon.2024.e31228]
Abstract
Diabetic foot ulcer (DFU) poses a significant threat to individuals affected by diabetes, often leading to limb amputation. Early detection of DFU can greatly improve the chances of survival for diabetic patients. This work introduces FusionNet, a novel multi-scale feature fusion network designed to accurately differentiate DFU skin from healthy skin using multiple pre-trained convolutional neural network (CNN) algorithms. A dataset comprising 6963 skin images (3574 healthy and 3389 ulcer) from various patients was divided into training (6080 images), validation (672 images), and testing (211 images) sets. Initially, three image preprocessing techniques - Gaussian filter, median filter, and motion blur estimation - were applied to eliminate irrelevant, noisy, and blurry data. Subsequently, three pre-trained CNN algorithms -DenseNet201, VGG19, and NASNetMobile - were utilized to extract high-frequency features from the input images. These features were then inputted into a meta-tuner module to predict DFU by selecting the most discriminative features. Statistical tests, including Friedman and analysis of variance (ANOVA), were employed to identify significant differences between FusionNet and other sub-networks. Finally, three eXplainable Artificial Intelligence (XAI) algorithms - SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Grad-CAM (Gradient-weighted Class Activation Mapping) - were integrated into FusionNet to enhance transparency and explainability. The FusionNet classifier achieved exceptional classification results with 99.05 % accuracy, 98.18 % recall, 100.00 % precision, 99.09 % AUC, and 99.08 % F1 score. We believe that our proposed FusionNet will be a valuable tool in the medical field to distinguish DFU from healthy skin.
Affiliation(s)
- Shuvo Biswas
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Bangladesh
- Rafid Mostafiz
- Institute of Information Technology, Noakhali Science and Technology University, Bangladesh
- Mohammad Shorif Uddin
- Department of Computer Science and Engineering, Jahangirnagar University, Bangladesh
- Bikash Kumar Paul
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Bangladesh
- Department of Software Engineering, Daffodil International University, Bangladesh
12. Bennour A, Ben Aoun N, Khalaf OI, Ghabban F, Wong WK, Algburi S. Contribution to pulmonary diseases diagnostic from X-ray images using innovative deep learning models. Heliyon 2024; 10:e30308. [PMID: 38707425] [PMCID: PMC11068804] [DOI: 10.1016/j.heliyon.2024.e30308]
Abstract
Pulmonary disease identification and characterization are among the most intriguing research topics of recent years since they require an accurate and prompt diagnosis. Although pulmonary radiography has helped in lung disease diagnosis, the interpretation of radiographic images has always been a major concern for doctors and radiologists seeking to reduce diagnostic errors. Due to their success in image classification and segmentation tasks, cutting-edge artificial intelligence techniques like machine learning (ML) and deep learning (DL) are widely encouraged in the field of diagnosing and identifying lung disorders from medical images, particularly radiographic ones. To this end, researchers are competing to build systems based on these techniques, in particular deep learning. In this paper, we propose three deep learning models trained to identify the presence of certain lung diseases from thoracic radiography. The first model, named "CovCXR-Net", identifies COVID-19 (two cases: COVID-19 or normal). The second model, named "MDCXR3-Net", identifies COVID-19 and pneumonia (three cases: COVID-19, pneumonia, or normal), and the last model, named "MDCXR4-Net", identifies COVID-19, pneumonia, and pulmonary opacity (four cases: COVID-19, pneumonia, pulmonary opacity, or normal). These models have proven their superiority over state-of-the-art models, reaching accuracies of 99.09 %, 97.74 %, and 90.37 %, respectively, on three benchmarks.
Affiliation(s)
- Akram Bennour
- LAMIS Laboratory, Echahid Cheikh Larbi Tebessi University, Tebessa, Algeria
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al-Baha University, Al Baha, Saudi Arabia
- REGIM-Lab: Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Tunisia
- Osamah Ibrahim Khalaf
- Department of Solar, Al-Nahrain Research Center for Renewable Energy, Al-Nahrain University, Jadriya, Baghdad, Iraq
- Fahad Ghabban
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- Sameer Algburi
- Al-Kitab University, College of Engineering Techniques, Kirkuk, Iraq
13. Seeböck P, Orlando JI, Michl M, Mai J, Schmidt-Erfurth U, Bogunović H. Anomaly guided segmentation: Introducing semantic context for lesion segmentation in retinal OCT using weak context supervision from anomaly detection. Med Image Anal 2024; 93:103104. [PMID: 38350222] [DOI: 10.1016/j.media.2024.103104]
Abstract
Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in real-world practice, e.g. due to variability of lesion phenotypes, image quality or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we experimentally show that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding these output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach to be used as an extra tool for improving lesion detection models.
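The mechanics of adding anomaly output masks as an extra class are simple: write the weak anomaly mask into otherwise unlabeled pixels as a new class index, leaving expert lesion labels untouched. A minimal sketch with hypothetical class codes (0 = background, 1 = expert lesion label, 2 = weak anomaly):

```python
import numpy as np

def add_anomaly_class(labels, anomaly_mask, anomaly_class):
    """Merge a weak anomaly mask into a label map as an extra class,
    only where no expert label exists (background pixels)."""
    out = labels.copy()
    out[(labels == 0) & anomaly_mask] = anomaly_class
    return out

labels = np.array([0, 1, 0, 0])                 # expert labels
anomaly = np.array([True, True, False, True])   # weak anomaly mask
merged = add_anomaly_class(labels, anomaly, anomaly_class=2)
```

The segmentation model is then trained on `merged` with one additional output class, which is how the extra semantic context enters without new manual labels.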
Affiliation(s)
- Philipp Seeböck
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Austria.
- José Ignacio Orlando
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Yatiris Group at PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Martin Michl
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Julia Mai
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Ursula Schmidt-Erfurth
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Hrvoje Bogunović
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria.
14
|
Zhang Z, Chen B, Luo Y. A Deep Ensemble Dynamic Learning Network for Corona Virus Disease 2019 Diagnosis. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:3912-3926. [PMID: 36054386 DOI: 10.1109/tnnls.2022.3201198] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Coronavirus disease 2019 is a highly fatal worldwide pandemic. Intelligently recognizing chest X-ray radiography images to automatically distinguish coronavirus disease 2019 from other types of pneumonia and normal cases offers clinicians tremendous convenience in the diagnostic process. In this article, a deep ensemble dynamic learning network is proposed. After a chain of image preprocessing steps and division of the image dataset, convolution blocks and the final average pooling layer are pretrained as a feature extractor. To classify the extracted feature samples, a two-stage bagging dynamic learning network is trained based on neural dynamic learning and bagging algorithms, which diagnoses the presence and type of pneumonia successively. Experimental results show that the proposed deep ensemble dynamic learning network achieves 98.7179% diagnosis accuracy, outperforming existing state-of-the-art models on the open image dataset. Such accurate diagnoses provide convincing evidence for further detection and treatment.
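The two-stage cascade described above — first deciding whether pneumonia is present, then classifying its type only among pneumonia cases — can be sketched with scikit-learn's bagging ensemble as a stand-in for the paper's neural dynamic learning components. The data, feature dimensions, and classifiers below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# stand-in for CNN-extracted feature vectors: 300 samples, 16 features
X = rng.normal(size=(300, 16))
# toy labels: 0 = normal, 1 = other pneumonia, 2 = COVID-19
y = rng.integers(0, 3, size=300)
X[y == 1, 0] += 3.0          # make the toy classes roughly separable
X[y == 2, 1] += 3.0

# stage 1: pneumonia presence (binary)
stage1 = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
stage1.fit(X, (y > 0).astype(int))

# stage 2: pneumonia type, trained only on pneumonia cases
pneu = y > 0
stage2 = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
stage2.fit(X[pneu], y[pneu])

def diagnose(x):
    """Successive diagnosis: presence first, then type."""
    if stage1.predict(x[None])[0] == 0:
        return 0                      # normal
    return int(stage2.predict(x[None])[0])

preds = np.array([diagnose(x) for x in X])
print((preds == y).mean())            # training accuracy of the toy cascade
```

The cascade mirrors the "presence then type" structure; in the paper each stage is a bagged neural dynamic learning network rather than bagged trees.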
15
Prodanovic T, Petrovic Savic S, Prodanovic N, Simovic A, Zivojinovic S, Djordjevic JC, Savic D. Advanced Diagnostics of Respiratory Distress Syndrome in Premature Infants Treated with Surfactant and Budesonide through Computer-Assisted Chest X-ray Analysis. Diagnostics (Basel) 2024; 14:214. [PMID: 38275461 PMCID: PMC10814713 DOI: 10.3390/diagnostics14020214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 12/28/2023] [Accepted: 01/05/2024] [Indexed: 01/27/2024] Open
Abstract
This research addresses respiratory distress syndrome (RDS) in preterm newborns caused by insufficient surfactant synthesis, which can lead to serious complications, including pneumothorax, pulmonary hypertension, and pulmonary hemorrhage, increasing the risk of a fatal outcome. By analyzing chest radiographs and blood gases, we focus on the significant contributions of these parameters to the diagnosis and to the analysis of recovery in patients with RDS. The study involved 32 preterm newborns; analysis of gas parameters before and after the administration of surfactant and inhaled corticosteroid therapy revealed statistically significant changes in FiO2, pH, pCO2, HCO3, and BE (Sig. < 0.05), while pO2 showed a potential change (Sig. = 0.061). In parallel, the research emphasizes the development of a lung segmentation algorithm implemented in the MATLAB programming environment. The key steps of the algorithm include preprocessing, segmentation, and visualization for a more detailed understanding of recovery dynamics after RDS. The algorithm achieved promising results, with a global accuracy of 0.93 ± 0.06, precision of 0.81 ± 0.16, and an F-score of 0.82 ± 0.14. These results highlight the potential application of such algorithms in the analysis and monitoring of recovery in newborns with RDS, underscoring the need for further development of software solutions in medicine, particularly in neonatology, to enhance the diagnosis and treatment of preterm newborns with respiratory distress syndrome.
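The preprocess-then-segment pipeline described above was implemented in MATLAB; a minimal Python/NumPy sketch of the same idea (normalize, threshold the darker lung fields, score against a reference mask) is shown below. The Otsu thresholding step and the toy phantom are assumptions standing in for the paper's unpublished method.

```python
import numpy as np

def otsu_threshold(img: np.ndarray, nbins: int = 256) -> float:
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 (dark) weight per cutoff
    w1 = 1 - w0
    cum_mu = np.cumsum(p * centers)
    mu0 = cum_mu / np.maximum(w0, 1e-12)
    mu1 = (cum_mu[-1] - cum_mu) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return float(centers[np.argmax(between)])

# toy chest phantom: bright background, two darker "lung" fields plus noise
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8)
gt = np.zeros((64, 64), dtype=bool)
gt[16:48, 8:28] = True
gt[16:48, 36:56] = True
img[gt] = 0.2
img += rng.normal(0, 0.05, img.shape)       # acquisition noise

t = otsu_threshold(img)
mask = img < t                              # lungs are the darker class
acc = (mask == gt).mean()                   # "global accuracy" as in the paper
print(round(acc, 3))
```

Real chest X-rays would additionally need morphological cleanup and rib/clavicle handling; this sketch only illustrates the threshold-and-score skeleton.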
Affiliation(s)
- Tijana Prodanovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Suzana Petrovic Savic
- Department for Production Engineering, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia
- Nikola Prodanovic
- Department of Surgery, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Clinic for Orthopaedic and Trauma Surgery, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Aleksandra Simovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Suzana Zivojinovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Jelena Cekovic Djordjevic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Dragana Savic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
16
Li W, Liu GH, Fan H, Li Z, Zhang D. Self-Supervised Multi-Scale Cropping and Simple Masked Attentive Predicting for Lung CT-Scan Anomaly Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:594-607. [PMID: 37695968 DOI: 10.1109/tmi.2023.3313778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/13/2023]
Abstract
Anomaly detection has been widely explored by training an out-of-distribution detector with only normal data for medical images. However, detecting local and subtle irregularities without prior knowledge of anomaly types brings challenges for lung CT-scan image anomaly detection. In this paper, we propose a self-supervised framework for learning representations of lung CT-scan images via both multi-scale cropping and simple masked attentive predicting, which is capable of constructing a powerful out-of-distribution detector. Firstly, we propose CropMixPaste, a self-supervised augmentation task for generating density shadow-like anomalies that encourage the model to detect local irregularities of lung CT-scan images. Then, we propose a self-supervised reconstruction block, named simple masked attentive predicting block (SMAPB), to better refine local features by predicting masked context information. Finally, the learned representations by self-supervised tasks are used to build an out-of-distribution detector. The results on real lung CT-scan datasets demonstrate the effectiveness and superiority of our proposed method compared with state-of-the-art methods.
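The CropMixPaste idea above — generating density-shadow-like local irregularities by cutting a patch and blending it back elsewhere — can be sketched as a simple NumPy augmentation. Patch sizes, the blend factor, and the darkening are illustrative assumptions; the paper's exact parameters may differ.

```python
import numpy as np

def crop_mix_paste(image: np.ndarray, rng, scale=(4, 8), alpha=0.5):
    """Cut a random patch and alpha-blend a darkened copy of it at a
    random location, creating a density-shadow-like local anomaly.
    Rough sketch of a CropMixPaste-style self-supervised augmentation."""
    h, w = image.shape
    ph = int(rng.integers(scale[0], scale[1]))
    pw = int(rng.integers(scale[0], scale[1]))
    sy, sx = rng.integers(0, h - ph), rng.integers(0, w - pw)   # source
    ty, tx = rng.integers(0, h - ph), rng.integers(0, w - pw)   # target
    patch = image[sy:sy + ph, sx:sx + pw]
    out = image.copy()
    region = out[ty:ty + ph, tx:tx + pw]
    # darken the pasted patch so it reads as a density shadow
    out[ty:ty + ph, tx:tx + pw] = (1 - alpha) * region + alpha * (0.5 * patch)
    return out

rng = np.random.default_rng(1)
img = rng.random((32, 32))          # stand-in for a CT slice
aug = crop_mix_paste(img, rng)
print(aug.shape)                    # same geometry, locally perturbed intensity
```

In the self-supervised setup, the model would then be trained to detect which images (or regions) were perturbed, without any anomaly labels.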
17
Liu W, Ni Z, Chen Q, Ni L. Attention-Guided Partial Domain Adaptation for Automated Pneumonia Diagnosis From Chest X-Ray Images. IEEE J Biomed Health Inform 2023; 27:5848-5859. [PMID: 37695960 DOI: 10.1109/jbhi.2023.3313886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/13/2023]
Abstract
Deep neural networks (DNN) supported by multicenter large-scale Chest X-Ray (CXR) datasets can efficiently perform tasks such as disease identification, lesion segmentation, and report generation. However, the non-ignorable inter-domain heterogeneity caused by different equipment, ethnic groups, and scanning protocols may lead to dramatic degradation in model performance. Unsupervised domain adaptation (UDA) methods help alleviate the cross-domain discrepancy for subsequent analysis. Nevertheless, they may be prone to: 1) spatial negative transfer: misaligning non-transferable regions which have inadequate knowledge, and 2) semantic negative transfer: failing to extend to scenarios where the label spaces of the source and target domain are only partially shared. In this work, we propose a classification-based framework named attention-guided partial domain adaptation (AGPDA) network for overcoming these two negative transfer challenges. AGPDA is composed of two key modules: 1) a region attention discrimination block (RADB) to generate fine-grained attention values via lightweight region-wise multi-adversarial networks, and 2) a residual feature recalibration block (RFRB) trained with a class-weighted maximum mean discrepancy (MMD) loss for down-weighting the irrelevant source samples. Extensive experiments on two publicly available CXR datasets containing a total of 8598 pneumonia (viral, bacterial, and COVID-19) cases and 7163 non-pneumonia or healthy cases demonstrate the superior performance of our AGPDA. Especially on three partial transfer tasks, AGPDA significantly increases the accuracy, sensitivity, and F1 score by 4.35%, 4.05%, and 1.78% compared to recent strong baselines.
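The MMD loss at the heart of the RFRB measures the distance between source and target feature distributions in a kernel space. A plain (unweighted) squared-MMD estimator with an RBF kernel is sketched below; the class-weighted variant used in the paper would additionally re-weight source samples, which is omitted here.

```python
import numpy as np

def rbf_mmd2(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    """Squared maximum mean discrepancy between samples X and Y
    under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

rng = np.random.default_rng(0)
# same distribution -> discrepancy near zero
same = rbf_mmd2(rng.normal(size=(200, 4)), rng.normal(size=(200, 4)))
# shifted distribution (a stand-in for domain gap) -> larger discrepancy
diff = rbf_mmd2(rng.normal(size=(200, 4)), rng.normal(2.0, 1.0, size=(200, 4)))
print(same < diff)
```

In training, this quantity (computed on deep features, with per-class source weights) is minimized so that source and target feature distributions align.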
18
Shen HC, Chen CC, Chen WC, Yu WK, Yang KY, Chen YM. Association of Late Radiographic Assessment of Lung Edema Score with Clinical Outcome in Patients with Influenza-Associated Acute Respiratory Distress Syndrome. Diagnostics (Basel) 2023; 13:3572. [PMID: 38066813 PMCID: PMC10706585 DOI: 10.3390/diagnostics13233572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Revised: 11/24/2023] [Accepted: 11/28/2023] [Indexed: 10/16/2024] Open
Abstract
Background: Influenza virus infection leads to acute pulmonary injury and acute respiratory distress syndrome (ARDS). The Radiographic Assessment of Lung Edema (RALE) score has been proposed as a reliable tool for evaluating the opacity of chest X-rays (CXRs). This study aimed to examine the RALE scores and outcomes in patients with influenza-associated ARDS. Methods: Patients newly diagnosed with influenza-associated ARDS from December 2015 to March 2016 were enrolled. Two independent reviewers scored the CXRs obtained on the day of intensive care unit (ICU) admission and on days 2 and 7 after ICU admission. Results: During the study, 47 patients had influenza-associated ARDS. Five died within 7 days of ICU admission. Of the remaining 42, non-survivors (N = 12) had higher Sequential Organ Failure Assessment (SOFA) scores at ICU admission and higher day 7 RALE scores than survivors (N = 30). The day 7 RALE score was independently associated with late in-hospital mortality (aOR = 1.121, 95% CI: 1.014-1.240, p = 0.025). Conclusions: The RALE score for the evaluation of opacity on CXRs is a highly reproducible tool. Moreover, the RALE score on day 7 was an independent predictor of late in-hospital mortality in patients with influenza-associated ARDS.
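An adjusted odds ratio (aOR) such as the 1.121 per RALE point reported above is the exponentiated coefficient of a multivariable logistic regression. The sketch below fits such a model on synthetic data where the true RALE coefficient is set to 0.114 (so exp(0.114) ≈ 1.121); the data, covariates, and sample size are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
rale_day7 = rng.normal(20, 10, n)     # synthetic day-7 RALE scores
sofa = rng.normal(8, 3, n)            # synthetic SOFA scores (adjustment covariate)

# assumed true model: log-odds of mortality rise 0.114 per RALE point
true_logit = -4 + 0.114 * rale_day7 + 0.1 * sofa
mortality = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = np.column_stack([rale_day7, sofa])
# large C ~ effectively unpenalized maximum-likelihood fit
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, mortality)
aor = np.exp(model.coef_[0])          # adjusted odds ratios per unit increase
print(aor[0])                         # roughly exp(0.114), i.e. ~1.12 per RALE point
```

The 95% CI in the paper would come from the coefficient's standard error (exp(coef ± 1.96·SE)), which scikit-learn does not report; statsmodels would be the usual tool for that.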
Affiliation(s)
- Hsiao-Chin Shen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Department of Medical Education, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Chun-Chia Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Wei-Chih Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Faculty of Medicine, School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Institute of Emergency and Critical Care Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Wen-Kuang Yu
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Faculty of Medicine, School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Kuang-Yao Yang
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Faculty of Medicine, School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Institute of Emergency and Critical Care Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Cancer Progression Research Center, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Faculty of Medicine, School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
19
Ahmad IS, Li N, Wang T, Liu X, Dai J, Chan Y, Liu H, Zhu J, Kong W, Lu Z, Xie Y, Liang X. COVID-19 Detection via Ultra-Low-Dose X-ray Images Enabled by Deep Learning. Bioengineering (Basel) 2023; 10:1314. [PMID: 38002438 PMCID: PMC10669345 DOI: 10.3390/bioengineering10111314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 10/28/2023] [Accepted: 11/02/2023] [Indexed: 11/26/2023] Open
Abstract
The detection of Coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational and multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score were utilized. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for diagnosis of various other diseases.
Affiliation(s)
- Isah Salim Ahmad
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xuan Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Haoyang Liu
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Junming Zhu
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Weibin Kong
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Zefeng Lu
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
20
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586 PMCID: PMC10613849 DOI: 10.3348/kjr.2023.0393] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 07/01/2023] [Accepted: 07/30/2023] [Indexed: 09/21/2023] Open
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
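Among the solutions listed above, federated learning lets institutions train a shared model without exchanging patient data: each site trains locally and a server averages the parameters. A minimal sketch of the server-side FedAvg step is shown below; the array-valued "models" and dataset sizes are illustrative stand-ins for real network weights.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg step: average client model parameters,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    total = sizes.sum()
    return sum(w * (n / total) for w, n in zip(client_weights, sizes))

# two hospitals with different amounts of local data (illustrative numbers)
w_a = np.array([1.0, 1.0])   # parameters after local training at site A
w_b = np.array([3.0, 3.0])   # parameters after local training at site B
global_w = fedavg([w_a, w_b], [100, 300])
print(global_w)              # site B holds 3x the data -> [2.5 2.5]
```

In practice each "weight" is the full parameter tensor set of a deep network, and the aggregate is broadcast back to the sites for the next local-training round.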
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
21
Socha M, Prażuch W, Suwalska A, Foszner P, Tobiasz J, Jaroszewicz J, Gruszczynska K, Sliwinska M, Nowak M, Gizycka B, Zapolska G, Popiela T, Przybylski G, Fiedor P, Pawlowska M, Flisiak R, Simon K, Walecki J, Cieszanowski A, Szurowska E, Marczyk M, Polanska J. Pathological changes or technical artefacts? The problem of the heterogenous databases in COVID-19 CXR image analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107684. [PMID: 37356354 PMCID: PMC10278898 DOI: 10.1016/j.cmpb.2023.107684] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 06/11/2023] [Accepted: 06/18/2023] [Indexed: 06/27/2023]
Abstract
BACKGROUND When the COVID-19 pandemic commenced in 2020, scientists assisted medical specialists with diagnostic algorithm development. One scientific research area related to COVID-19 diagnosis was medical imaging and its potential to support molecular tests. Unfortunately, several systems reported high accuracy in development but did not fare well in clinical application. The reason was poor generalization, a long-standing issue in AI development. Researchers found many causes of this issue and decided to refer to them as confounders, meaning a set of artefacts and methodological errors associated with the method. We aim to contribute to this effort by highlighting an undiscussed confounder related to image resolution. METHODS 20,216 chest X-ray images (CXR) from worldwide centres were analyzed. The CXRs were bijectively projected into the 2D domain by performing Uniform Manifold Approximation and Projection (UMAP) embedding on the radiomic features (rUMAP) or CNN-based neural features (nUMAP) from the pre-last layer of the pre-trained classification neural network. An additional 44,339 thorax CXRs were used for validation. A comprehensive analysis of the multimodality of the density distribution in the rUMAP/nUMAP domains and its relation to the original image properties was used to identify the main confounders. RESULTS nUMAP revealed a hidden bias of neural networks towards the image resolution, which the regular up-sampling procedure cannot compensate for. The issue appears regardless of the network architecture and is not observed in a high-resolution dataset. The impact of the resolution heterogeneity can be partially diminished by applying advanced deep-learning-based super-resolution networks. CONCLUSIONS rUMAP and nUMAP are great tools for image homogeneity analysis and bias discovery, as demonstrated by applying them to COVID-19 image data. Nonetheless, nUMAP could be applied to any type of data for which a deep neural network could be constructed. Advanced image super-resolution solutions are needed to reduce the impact of the resolution diversity on the classification network decision.
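The bias-discovery recipe above — embed per-image features into 2D, then look for multimodal density structure that tracks acquisition properties rather than pathology — can be sketched as follows. To keep the sketch dependency-free, PCA stands in for the paper's UMAP projection (umap-learn's `UMAP` would slot into the same place), and two synthetic feature clusters stand in for images from two acquisition resolutions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# synthetic "radiomic features" from two acquisition resolutions (two clusters)
low_res = rng.normal(0.0, 1.0, size=(150, 20))
high_res = rng.normal(3.0, 1.0, size=(150, 20))
feats = np.vstack([low_res, high_res])

# 2D projection; the paper uses UMAP here, PCA is a stand-in
emb = PCA(n_components=2, random_state=0).fit_transform(feats)

# crude multimodality check: histogram of the first embedding axis
hist, _ = np.histogram(emb[:, 0], bins=20)
valley = hist[8:12].min()                      # sparse middle bins ...
peaks = max(hist[:8].max(), hist[12:].max())   # ... between two dense modes
print(valley < peaks)  # a bimodal embedding hints at a dataset confounder
```

In the paper, the follow-up step is to color the embedding by image metadata (resolution, center, scanner) and check whether the modes align with those properties instead of with the diagnostic label.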
Affiliation(s)
- Marek Socha
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Wojciech Prażuch
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Aleksandra Suwalska
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Paweł Foszner
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland
- Joanna Tobiasz
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland
- Jerzy Jaroszewicz
- Department of Infectious Diseases and Hepatology, Medical University of Silesia, Katowice, Poland
- Katarzyna Gruszczynska
- Department of Radiology and Nuclear Medicine, Medical University of Silesia, Katowice, Poland
- Magdalena Sliwinska
- Department of Diagnostic Imaging, Voivodship Specialist Hospital, Wroclaw, Poland
- Mateusz Nowak
- Department of Radiology, Silesian Hospital, Cieszyn, Poland
- Barbara Gizycka
- Department of Imaging Diagnostics, MEGREZ Hospital, Tychy, Poland
- Tadeusz Popiela
- Department of Radiology, Jagiellonian University Medical College, Krakow, Poland
- Grzegorz Przybylski
- Department of Lung Diseases, Cancer and Tuberculosis, Kujawsko-Pomorskie Pulmonology Center, Bydgoszcz, Poland
- Piotr Fiedor
- Department of General and Transplantation Surgery, Medical University of Warsaw, Warsaw, Poland
- Malgorzata Pawlowska
- Department of Infectious Diseases and Hepatology, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University, Torun, Poland
- Robert Flisiak
- Department of Infectious Diseases and Hepatology, Medical University of Bialystok, Bialystok, Poland
- Krzysztof Simon
- Department of Infectious Diseases and Hepatology, Wroclaw Medical University, Wroclaw, Poland
- Jerzy Walecki
- Department of Radiology, Centre of Postgraduate Medical Education, Central Clinical Hospital of the Ministry of Interior in Warsaw, Poland
- Andrzej Cieszanowski
- Department of Radiology I, The Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland
- Edyta Szurowska
- 2nd Department of Radiology, Medical University of Gdansk, Poland
- Michal Marczyk
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Yale Cancer Center, Yale School of Medicine, New Haven, CT, USA
- Joanna Polanska
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
22
Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. [PMID: 37868395 PMCID: PMC10587792 DOI: 10.7759/cureus.45587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/19/2023] [Indexed: 10/24/2023] Open
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems like the picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial juncture for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning. These technologies offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits, despite development dating back to the 1990s. This comprehensive review focuses on detecting chest-related diseases through techniques like chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines the utilization of computer-aided programs by researchers for disease detection, addressing key areas: the role of computer-aided programs in advancing disease detection, recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification, research gaps for more effective development, and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA
23
Romaszko-Wojtowicz A, Jaśkiewicz Ł, Jurczak P, Doboszyńska A. Telemedicine in Primary Practice in the Age of the COVID-19 Pandemic-Review. MEDICINA (KAUNAS, LITHUANIA) 2023; 59:1541. [PMID: 37763659 PMCID: PMC10532942 DOI: 10.3390/medicina59091541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 08/18/2023] [Accepted: 08/22/2023] [Indexed: 09/29/2023]
Abstract
Background and Objectives: In the era of the COVID-19 pandemic, telemedicine, underestimated until then, gained in value. Telemedicine now encompasses not only telephone or chat consultations but also the remote recording of signals (such as ECG, saturation, and heart rate) and even remote auscultation of the lungs. The objective of this review article is to present a potential role for, and disseminate knowledge of, telemedicine during the COVID-19 pandemic. Material and Methods: To analyze the research material in accordance with PRISMA guidelines, a systematic search of the ScienceDirect, Web of Science, and PubMed databases was conducted. Of the 363 papers identified, 22 original articles were subjected to analysis. Results: This article presents the possibilities of remote patient registration, which contributes to an improvement in remote diagnostics and diagnoses. Conclusions: Telemedicine is, although not always and not by everyone, an accepted form of providing medical services. It cannot replace direct patient-doctor contact, but it can undoubtedly help accelerate diagnoses and improve their quality at a distance.
Affiliation(s)
- Anna Romaszko-Wojtowicz
- Department of Pulmonology, School of Public Health, Collegium Medicum, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland;
- Łukasz Jaśkiewicz
- Department of Human Physiology and Pathophysiology, School of Medicine, Collegium Medicum, University of Warmia and Mazury in Olsztyn, 10-082 Olsztyn, Poland;
- Paweł Jurczak
- Student Scientific Club of Cardiopulmonology and Rare Diseases of the Respiratory System, School of Medicine, Collegium Medicum, University of Warmia and Mazury in Olsztyn, 10-082 Olsztyn, Poland;
- Anna Doboszyńska
- Department of Pulmonology, School of Public Health, Collegium Medicum, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland;
24
Arora M, Davis CM, Gowda NR, Foster DG, Mondal A, Coopersmith CM, Kamaleswaran R. Uncertainty-Aware Convolutional Neural Network for Identifying Bilateral Opacities on Chest X-rays: A Tool to Aid Diagnosis of Acute Respiratory Distress Syndrome. Bioengineering (Basel) 2023; 10:946. [PMID: 37627831 PMCID: PMC10451804 DOI: 10.3390/bioengineering10080946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/26/2023] [Accepted: 08/03/2023] [Indexed: 08/27/2023] Open
Abstract
Acute Respiratory Distress Syndrome (ARDS) is a severe lung injury with high mortality, primarily characterized by bilateral pulmonary opacities on chest radiographs and hypoxemia. In this work, we trained a convolutional neural network (CNN) model that can reliably identify bilateral opacities on routine chest X-ray images of critically ill patients. We propose this model as a tool to generate predictive alerts for possible ARDS cases, enabling early diagnosis. Our team created a unique dataset of 7800 single-view chest X-ray images labeled for the presence of bilateral or unilateral pulmonary opacities, or 'equivocal' images, by three blinded clinicians. We used a novel training technique that enables the CNN to explicitly predict the 'equivocal' class using an uncertainty-aware label smoothing loss. We achieved an Area under the Receiver Operating Characteristic Curve (AUROC) of 0.82 (95% CI: 0.80, 0.85), a precision of 0.75 (95% CI: 0.73, 0.78), and a sensitivity of 0.76 (95% CI: 0.73, 0.78) on the internal test set, while achieving an AUROC of 0.84 (95% CI: 0.81, 0.86), a precision of 0.73 (95% CI: 0.63, 0.69), and a sensitivity of 0.73 (95% CI: 0.70, 0.75) on an external validation set. Further, our results show that this approach improves the model calibration and diagnostic odds ratio of the hypothesized alert tool, making it ideal for clinical decision support systems.
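The "uncertainty-aware label smoothing loss" named in this abstract is not specified in detail here; the sketch below is one plausible reading, in which the explicit 'equivocal' class receives heavier smoothing than the two definite classes. The class indices, smoothing strengths, and function names are all illustrative assumptions, not the authors' code.

```python
import numpy as np

# Assumed class indices: 0 = unilateral, 1 = bilateral, 2 = equivocal.
def smoothed_target(label: int, n_classes: int = 3, eps: float = 0.1,
                    equivocal: int = 2, equivocal_eps: float = 0.4) -> np.ndarray:
    """Soft target for cross-entropy; equivocal labels get heavier smoothing,
    encoding the clinicians' higher label uncertainty."""
    e = equivocal_eps if label == equivocal else eps
    target = np.full(n_classes, e / n_classes)
    target[label] += 1.0 - e
    return target

def soft_cross_entropy(logits: np.ndarray, target: np.ndarray) -> float:
    """Cross-entropy between a soft target and softmax(logits)."""
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return float(-(target * log_probs).sum())
```

A definite label yields a sharply peaked target, while an equivocal one stays deliberately diffuse, which is one way such a loss can improve calibration.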
Affiliation(s)
- Mehak Arora
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA;
- Carolyn M. Davis
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA; (C.M.D.); (D.G.F.); (C.M.C.)
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Niraj R. Gowda
- Division of Pulmonary, Critical Care, Allergy and Sleep Medicine, Emory University School of Medicine, Atlanta, GA 30332, USA;
- Dennis G. Foster
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA; (C.M.D.); (D.G.F.); (C.M.C.)
- Angana Mondal
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA;
- Craig M. Coopersmith
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA; (C.M.D.); (D.G.F.); (C.M.C.)
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Rishikesan Kamaleswaran
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA;
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
25
Sato J, Suzuki Y, Wataya T, Nishigaki D, Kita K, Yamagata K, Tomiyama N, Kido S. Anatomy-aware self-supervised learning for anomaly detection in chest radiographs. iScience 2023; 26:107086. [PMID: 37434699 PMCID: PMC10331430 DOI: 10.1016/j.isci.2023.107086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Revised: 04/17/2023] [Accepted: 06/06/2023] [Indexed: 07/13/2023] Open
Abstract
In this study, we present a self-supervised learning (SSL)-based model that enables anatomical structure-based unsupervised anomaly detection (UAD). The model employs an anatomy-aware pasting (AnatPaste) augmentation tool that uses a threshold-based lung segmentation pretext task to create anomalies in normal chest radiographs used for model pretraining. These anomalies are similar to real anomalies and help the model recognize them. We evaluate our model using three open-source chest radiograph datasets. Our model exhibits area under curves of 92.1%, 78.7%, and 81.9%, which are the highest among those of existing UAD models. To the best of our knowledge, this is the first SSL model to employ anatomical information from segmentation as a pretext task. The performance of AnatPaste shows that incorporating anatomical information into SSL can effectively improve accuracy.
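The "threshold-based lung segmentation pretext task" behind AnatPaste can be illustrated minimally: lung fields are radiolucent (dark) on a radiograph, so a global intensity threshold yields a crude lung mask. The percentile threshold below is an arbitrary assumption for illustration, not AnatPaste's actual rule.

```python
import numpy as np

def lung_mask(image: np.ndarray, pct: float = 35.0) -> np.ndarray:
    """Boolean mask of pixels darker than the given intensity percentile,
    a crude stand-in for a lung-field segmentation."""
    return image < np.percentile(image, pct)
```

In the paper's setting, anomalies are then pasted only inside such a mask, so the pretext task stays anatomically plausible.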
Affiliation(s)
- Junya Sato
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Graduate School of Information Science and Technology, Osaka University, Yamadaoka, 1-5 Suita, Osaka 565-0871, Japan
- Yuki Suzuki
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Tomohiro Wataya
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Daiki Nishigaki
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Kosuke Kita
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Kazuki Yamagata
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Noriyuki Tomiyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka 565-0871, Japan
26
Fu Y, Xue P, Zhang Z, Dong E. PKA2-Net: Prior Knowledge-Based Active Attention Network for Accurate Pneumonia Diagnosis on Chest X-Ray Images. IEEE J Biomed Health Inform 2023; 27:3513-3524. [PMID: 37058372 DOI: 10.1109/jbhi.2023.3267057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/15/2023]
Abstract
To accurately diagnose pneumonia patients on a limited annotated chest X-ray image dataset, a prior knowledge-based active attention network (PKA2-Net) was constructed. The PKA2-Net uses an improved ResNet as the backbone network and consists of residual blocks, novel subject enhancement and background suppression (SEBS) blocks, and candidate template generators, where the template generators are designed to generate candidate templates for characterizing the importance of different spatial locations in feature maps. The core of PKA2-Net is the SEBS block, which is proposed based on the prior knowledge that highlighting distinctive features and suppressing irrelevant features can improve the recognition effect. The purpose of the SEBS block is to generate active attention features without any high-level features and to enhance the ability of the model to localize lung lesions. In the SEBS block, first, a series of candidate templates T with different spatial energy distributions are generated; the controllability of the energy distribution in T enables active attention features to maintain the continuity and integrity of the feature space distributions. Second, Top-n templates are selected from T according to certain learning rules and then operated on by a convolution layer to generate supervision information that guides the inputs of the SEBS block to form active attention features. We evaluated the PKA2-Net on the binary classification problem of identifying pneumonia versus healthy controls on a dataset containing 5856 chest X-ray images (ChestXRay2017); the results showed that our method can achieve 97.63% accuracy and 0.9872 sensitivity.
27
Alloqmani A, Abushark YB, Khan AI. Anomaly Detection of Breast Cancer Using Deep Learning. Arab J Sci Eng 2023; 48:1-26. [PMID: 37361464 PMCID: PMC10258083 DOI: 10.1007/s13369-023-07945-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Accepted: 03/27/2023] [Indexed: 06/28/2023]
Abstract
Cancer is one of the deadliest diseases facing humanity, and breast cancer, one of its most common forms, can be considered a primary cause of death among women. Early detection and treatment can significantly improve outcomes and reduce the death rate and treatment costs. This article proposes an efficient and accurate deep learning-based anomaly detection framework. The framework aims to recognize breast abnormalities (benign and malignant) by considering normal data. We also address the problem of imbalanced data, a common issue in the medical field. The framework consists of two stages: (1) data pre-processing (i.e., image pre-processing); and (2) feature extraction through the adoption of a MobileNetV2 pre-trained model. After that, a single-layer perceptron is used for the classification step. Two public datasets were used for the evaluation: INbreast and MIAS. The experimental results showed that the proposed framework is efficient and accurate in detecting anomalies (e.g., 81.40% to 97.36% in terms of area under the curve). As per the evaluation results, the proposed framework outperforms recent and relevant works and overcomes their limitations.
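The classification stage described in this abstract (features from a frozen MobileNetV2, then a single-layer perceptron) can be sketched as follows. The training loop, learning rate, and logistic activation are assumptions not given in the abstract, and the MobileNetV2 feature vectors are stood in for by plain arrays.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=200):
    """Single-layer perceptron sigmoid(Xw + b) trained with gradient
    descent on binary cross-entropy; X holds precomputed feature vectors."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
        grad = p - y                              # dLoss/dlogit for BCE
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

Keeping the backbone frozen and training only this thin head is what makes the two-stage design cheap on small medical datasets.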
Affiliation(s)
- Ahad Alloqmani
- Computer Science Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Yoosef B. Abushark
- Computer Science Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Asif Irshad Khan
- Computer Science Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
28
Das S, Ayus I, Gupta D. A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. Health Technol 2023; 13:1-14. [PMID: 37363343 PMCID: PMC10244837 DOI: 10.1007/s12553-023-00757-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/14/2023] [Indexed: 06/28/2023]
Abstract
Purpose The first transmission of coronavirus to humans started in the city of Wuhan, China, took the shape of a pandemic called Coronavirus Disease 2019 (COVID-19), and posed a principal threat to the entire world. Researchers are trying to incorporate artificial intelligence (machine learning or deep learning models) for the efficient detection of COVID-19. This research explores the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, which may help researchers explore different directions. The main purpose of this review article is to present a compact overview of the application of artificial intelligence to research experts, helping them to explore future scopes of improvement. Methods The researchers have used various machine learning, deep learning, and combined machine and deep learning models for extracting significant features and classifying various health conditions in COVID-19 patients. For this purpose, the researchers have utilized different image modalities such as CT scan, X-ray, etc. This study collected over 200 research papers from various repositories like Google Scholar, PubMed, Web of Science, etc. These research papers were passed through various levels of scrutiny and finally, 50 research articles were selected. Results In those listed articles, the ML/DL models showed an accuracy of 99% and above while performing the classification of COVID-19. This study has also presented the clinical applications of various research works and specifies the importance of machine and deep learning models in the field of medical diagnosis and research. Conclusion In conclusion, it is evident that ML/DL models have made significant progress in recent years, but there are still limitations that need to be addressed. Overfitting is one such limitation that can lead to incorrect predictions and overburdening of the models. The research community must continue to work towards finding ways to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
Affiliation(s)
- Sreeparna Das
- Department of Computer Science and Engineering, National Institute of Technology Arunachal Pradesh, Jote, Arunachal Pradesh 791113 India
- Ishan Ayus
- Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha 751030 India
- Deepak Gupta
- Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, UP 211004 India
29
Yin M, Liang X, Wang Z, Zhou Y, He Y, Xue Y, Gao J, Lin J, Yu C, Liu L, Liu X, Xu C, Zhu J. Identification of Asymptomatic COVID-19 Patients on Chest CT Images Using Transformer-Based or Convolutional Neural Network-Based Deep Learning Models. J Digit Imaging 2023; 36:827-836. [PMID: 36596937 PMCID: PMC9810383 DOI: 10.1007/s10278-022-00754-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/30/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
Novel coronavirus disease 2019 (COVID-19) has rapidly spread throughout the world; however, it is difficult for clinicians to make early diagnoses. This study evaluates the feasibility of using deep learning (DL) models to identify asymptomatic COVID-19 patients based on chest CT images. In this retrospective study, six DL models (Xception, NASNet, ResNet, EfficientNet, ViT, and Swin), based on convolutional neural networks (CNNs) or transformer architectures, were trained to identify asymptomatic patients with COVID-19 on chest CT images. Data from Yangzhou were randomly split into a training set (n = 2140) and an internal validation set (n = 360). Data from Suzhou formed the external test set (n = 200). Model performance was assessed by accuracy, recall, and specificity and was compared with the assessments of two radiologists. A total of 2700 chest CT images were collected in this study. In the validation dataset, the Swin model achieved the highest accuracy of 0.994, followed by the EfficientNet model (0.954). The recall and precision of the Swin model were 0.989 and 1.000, respectively. In the test dataset, the Swin model was still the best and achieved the highest accuracy (0.980). All the DL models performed remarkably better than the two experts. Last, the time spent diagnosing the test set by the two experts (42 min 17 s for the junior and 29 min 43 s for the senior) was significantly longer than that of the DL models (all below 2 min). This study evaluated the feasibility of multiple DL models in distinguishing asymptomatic patients with COVID-19 from healthy subjects on chest CT images and found that a transformer-based model, the Swin model, performed best.
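The accuracy, recall, and specificity reported in this abstract are standard confusion-matrix metrics for a binary task; a generic sketch, not tied to the study's data:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, recall (sensitivity), and specificity from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    return {
        "accuracy": (tp + tn) / len(y_true),
        "recall": tp / (tp + fn) if tp + fn else 0.0,        # sensitivity
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```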
Affiliation(s)
- Minyue Yin
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Xiaolong Liang
- Department of Orthopedics, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Zilan Wang
- Department of Neurosurgery, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Yijia Zhou
- Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Yu He
- Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Yuhan Xue
- Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Jingwen Gao
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Jiaxi Lin
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Chenyan Yu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Lu Liu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Xiaolin Liu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Chao Xu
- Department of Radiotherapy, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Jinzhou Zhu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China.
- Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China.
- The 23Rd Ward, Yangzhou Third People's Hospital, Yangzhou, 225000, Jiangsu, China.
30
Poola RG, Pl L, Y SS. COVID-19 diagnosis: A comprehensive review of pre-trained deep learning models based on feature extraction algorithm. Results Eng 2023; 18:101020. [PMID: 36945336 PMCID: PMC10017171 DOI: 10.1016/j.rineng.2023.101020] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 03/01/2023] [Accepted: 03/08/2023] [Indexed: 05/14/2023]
Abstract
Due to the rapid rise of COVID-19, clinical specialists are looking for fast, faultless diagnosis strategies that restrict COVID-19 spread while attempting to lessen the computational complexity. Swift diagnosis techniques for COVID-19 with high precision can thus offer valuable aid to clinical specialists. The RT-PCR test is an expensive and tedious COVID-19 diagnosis technique in practice. Medical imaging makes it feasible to diagnose COVID-19 by X-ray chest radiography, getting around the shortcomings of RT-PCR. Through a variety of deep transfer-learning models, this research investigates the potential of artificial intelligence-based early diagnosis of COVID-19 via X-ray chest radiographs. With 10,192 normal and 3616 COVID-19 X-ray chest radiographs, the deep transfer-learning models are optimized to further the accurate diagnosis. The X-ray chest radiographs undergo a data augmentation phase before a modified dataset is developed to train the deep transfer-learning models. The deep transfer-learning architectures are trained using the features extracted in the feature extraction stage. During training, the classification of X-ray chest radiographs based on feature extraction algorithm values is converted into a feature label set containing the classified image data, with a feature string value representing the number of edges detected after edge detection. The feature label set is further tested with SVM, KNN, NN, Naive Bayes, and Logistic Regression classifiers to audit the quality metrics of the proposed model. The quality metrics include accuracy, precision, F1 score, recall, and AUC. Inception-V3 dominates the six deep transfer-learning models, according to the assessment results, with a training accuracy of 84.79% and a loss function of 2.4%. The performance of Cubic SVM was superior to that of the other SVM classifiers, with an AUC score of 0.99, precision of 0.983, recall of 0.8977, accuracy of 95.8%, and F1 score of 0.9384. Cosine KNN fared better than the other KNN classifiers with an AUC score of 0.95, precision of 0.974, recall of 0.777, accuracy of 90.8%, and F1 score of 0.864. Wide NN fared better than the other NN classifiers with an AUC score of 0.98, precision of 0.975, recall of 0.907, accuracy of 95.5%, and F1 score of 0.939. According to the findings, SVM classifiers topped the other classifiers in terms of performance indicators like accuracy, precision, recall, F1 score, and AUC, and reported better mean optimal scores. The performance assessment metrics show that the proposed methodology can aid in preliminary COVID-19 diagnosis.
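The edge-count feature described in this abstract (a scalar per image giving the number of edges detected after edge detection) can be sketched with a Sobel operator. The kernel, the relative threshold, and the pure-Python loop are illustrative assumptions; the paper's exact edge detector is not specified here.

```python
import numpy as np

def edge_count(image: np.ndarray, thresh: float = 0.5) -> int:
    """Count pixels whose Sobel gradient magnitude exceeds a fraction of the
    maximum; the count serves as a scalar feature for a downstream classifier."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return int((mag > thresh * mag.max()).sum()) if mag.max() > 0 else 0
```

Such a scalar is then appended to the feature label set and fed to the SVM/KNN/NN classifiers.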
Affiliation(s)
- Lahari Pl
- Dept. of ECE, SRM University, AP, India
31
Li G, Togo R, Ogawa T, Haseyama M. Boosting automatic COVID-19 detection performance with self-supervised learning and batch knowledge ensembling. Comput Biol Med 2023; 158:106877. [PMID: 37019015 PMCID: PMC10063457 DOI: 10.1016/j.compbiomed.2023.106877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 03/15/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
PROBLEM Detecting COVID-19 from chest X-ray (CXR) images has become one of the fastest and easiest methods for detecting COVID-19. However, the existing methods usually use supervised transfer learning from natural images as a pretraining process. These methods do not consider the unique features of COVID-19 and the features COVID-19 shares with other pneumonia. AIM In this paper, we want to design a novel high-accuracy COVID-19 detection method that uses CXR images and can consider both the unique features of COVID-19 and the features it shares with other pneumonia. METHODS Our method consists of two phases. One is self-supervised learning-based pretraining; the other is batch knowledge ensembling-based fine-tuning. Self-supervised learning-based pretraining can learn distinguished representations from CXR images without manually annotated labels. On the other hand, batch knowledge ensembling-based fine-tuning can utilize category knowledge of images in a batch according to their visual feature similarities to improve detection performance. Unlike our previous implementation, we introduce batch knowledge ensembling into the fine-tuning phase, reducing the memory used in self-supervised learning and improving COVID-19 detection accuracy. RESULTS On two public COVID-19 CXR datasets, namely, a large dataset and an unbalanced dataset, our method exhibited promising COVID-19 detection performance. Our method maintains high detection accuracy even when annotated CXR training images are reduced significantly (e.g., using only 10% of the original dataset). In addition, our method is insensitive to changes in hyperparameters. CONCLUSION The proposed method outperforms other state-of-the-art COVID-19 detection methods in different settings. Our method can reduce the workloads of healthcare providers and radiologists.
Affiliation(s)
- Guang Li
- Graduate School of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-Ku, Sapporo, 060-0814, Japan.
- Ren Togo
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-Ku, Sapporo, 060-0814, Japan.
- Takahiro Ogawa
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-Ku, Sapporo, 060-0814, Japan.
- Miki Haseyama
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-Ku, Sapporo, 060-0814, Japan.
32
Ullah Z, Usman M, Gwak J. MTSS-AAE: Multi-task semi-supervised adversarial autoencoding for COVID-19 detection based on chest X-ray images. Expert Syst Appl 2023; 216:119475. [PMID: 36619348 PMCID: PMC9810379 DOI: 10.1016/j.eswa.2022.119475] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 07/28/2022] [Accepted: 12/22/2022] [Indexed: 06/12/2023]
Abstract
Efficient diagnosis of COVID-19 plays an important role in preventing the spread of the disease. There are three major modalities to diagnose COVID-19: polymerase chain reaction tests, computed tomography scans, and chest X-rays (CXRs). Among these, diagnosis using CXRs is the most economical approach; however, it requires extensive human expertise to diagnose COVID-19 in CXRs, which may deprive it of cost-effectiveness. Computer-aided diagnosis with deep learning has the potential to perform accurate detection of COVID-19 in CXRs without human intervention while preserving this cost-effectiveness. Many efforts have been made to develop a highly accurate and robust solution; however, due to the limited amount of labeled data, existing solutions are evaluated on small test datasets. In this work, we propose a solution to this problem by using a multi-task semi-supervised learning (MTSSL) framework that utilizes auxiliary tasks for which adequate data is publicly available. Specifically, we utilized Pneumonia, Lung Opacity, and Pleural Effusion as additional tasks using the CheXpert dataset. We illustrated that the primary task of COVID-19 detection, for which only limited labeled data is available, can be improved by using this additional data. We further employed an adversarial autoencoder (AAE), which has a strong capability to learn powerful and discriminative features, within our MTSSL framework to maximize the benefit of multi-task learning. In addition, the supervised classification networks in combination with the unsupervised AAE empower semi-supervised learning, which includes a discriminative part in the unsupervised AAE training pipeline. The generalization of our framework is improved due to this semi-supervised learning, leading to enhanced COVID-19 detection performance. The proposed model is rigorously evaluated on the largest publicly available COVID-19 dataset, and experimental results show that it attains state-of-the-art performance.
Affiliation(s)
- Zahid Ullah
- Department of Software, Korea National University of Transportation, Chungju 27469, South Korea
- Muhammad Usman
- Department of Computer Science and Engineering, Seoul National University, Seoul 08826, South Korea
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, South Korea
- Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, South Korea
- Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, South Korea
- Department of IT Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, South Korea
33
Pramanik R, Banerjee B, Efimenko G, Kaplun D, Sarkar R. Monkeypox detection from skin lesion images using an amalgamation of CNN models aided with Beta function-based normalization scheme. PLoS One 2023; 18:e0281815. [PMID: 37027356 PMCID: PMC10081766 DOI: 10.1371/journal.pone.0281815] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 02/01/2023] [Indexed: 04/08/2023] Open
Abstract
We have recently been witnessing that our society is starting to heal from the impacts of COVID-19. The economic, social and cultural impacts of a pandemic cannot be ignored and we should be properly equipped to deal with similar situations in future. Recently, Monkeypox has been concerning the international health community with its lethal impacts for a probable pandemic. In such situations, having appropriate protocols and methodologies to deal with the outbreak efficiently is of paramount interest to the world. Early diagnosis and treatment stand as the only viable option to tackle such problems. To this end, in this paper, we propose an ensemble learning-based framework to detect the presence of the Monkeypox virus from skin lesion images. We first consider three pre-trained base learners, namely Inception V3, Xception and DenseNet169 to fine-tune on a target Monkeypox dataset. Further, we extract probabilities from these deep models to feed into the ensemble framework. To combine the outcomes, we propose a Beta function-based normalization scheme of probabilities to learn an efficient aggregation of complementary information obtained from the base learners followed by the sum rule-based ensemble. The framework is extensively evaluated on a publicly available Monkeypox skin lesion dataset using a five-fold cross-validation setup to evaluate its effectiveness. The model achieves an average of 93.39%, 88.91%, 96.78% and 92.35% accuracy, precision, recall and F1 scores, respectively. The supporting source codes are presented in https://github.com/BihanBanerjee/MonkeyPox.
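The fusion step described in this abstract (per-model probabilities normalized, then combined by the sum rule) can be sketched as below. The power-based reweighting is a simple stand-in for the paper's Beta function-based normalization scheme, whose exact form is not given in this abstract; gamma is an assumed parameter.

```python
import numpy as np

def sum_rule_ensemble(prob_list, gamma: float = 2.0) -> np.ndarray:
    """prob_list: list of (n_samples, n_classes) probability arrays, one per
    base learner. Each is reweighted, renormalized, and summed; the fused
    prediction is the argmax of the summed scores (the sum rule)."""
    fused = np.zeros_like(prob_list[0])
    for probs in prob_list:
        reweighted = probs ** gamma                       # sharpen confident votes
        reweighted = reweighted / reweighted.sum(axis=1, keepdims=True)
        fused += reweighted                               # sum rule
    return fused.argmax(axis=1)
```

The reweighting step lets confident, complementary base learners dominate the fused decision instead of a plain probability average.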
Affiliation(s)
- Rishav Pramanik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
- Bihan Banerjee
- Department of Computer Science and Engineering, University Institute of Technology, Burdwan, India
- George Efimenko
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, Russian Federation
- Dmitrii Kaplun
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, Russian Federation
| | - Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
| |
34
Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. NEW GENERATION COMPUTING 2023; 41:343-400. [PMID: 37229176 PMCID: PMC10071474 DOI: 10.1007/s00354-023-00213-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-Ray, Computed Tomography (CT), and Ultrasound (US) images. This can help identify COVID-19 cases as a first step toward treating them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. This paper describes the three most common imaging modalities (X-Ray, CT, and US) in addition to the DL approaches used in this detection, and compares these approaches. It also provides future directions for this field in the fight against COVID-19.
Affiliation(s)
- Rana Khattab
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Islam R. Abdelmaksoud
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Samir Abdelrazek
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
35
Farhan AMQ, Yang S. Automatic lung disease classification from the chest X-ray images using hybrid deep learning algorithm. MULTIMEDIA TOOLS AND APPLICATIONS 2023:1-27. [PMID: 37362647 PMCID: PMC10030349 DOI: 10.1007/s11042-023-15047-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/30/2022] [Accepted: 02/27/2023] [Indexed: 06/28/2023]
Abstract
Chest X-ray images provide vital information about lung congestion cost-effectively. We propose a novel Hybrid Deep Learning Algorithm (HDLA) framework for automatic lung disease classification from chest X-ray images. The model consists of pre-processing of chest X-ray images, automatic feature extraction, and detection. In the pre-processing step, our goal is to improve the quality of raw chest X-ray images using a combination of optimal filtering without data loss. A robust Convolutional Neural Network (CNN) based on a pre-trained model is proposed for automatic lung feature extraction. We employed a 2D CNN model for optimal feature extraction with minimal time and space requirements. The proposed 2D CNN model ensures robust feature learning with highly efficient 1D feature estimation from the pre-processed input image. Because the extracted 1D features suffer from significant scale variations, we normalized them using min-max scaling. We classify the CNN features using different machine learning classifiers such as AdaBoost, Support Vector Machine (SVM), Random Forest (RF), Backpropagation Neural Network (BNN), and Deep Neural Network (DNN). The experimental results show that the proposed model improves the overall accuracy by 3.1% and reduces the computational complexity by 16.91% compared to state-of-the-art methods.
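The min-max scaling mentioned in this abstract, used to remove scale variation from the extracted 1D features before they reach the downstream classifiers, is a one-line transformation per feature vector. A minimal sketch, with made-up feature values:

```python
def min_max_scale(features, eps=1e-12):
    """Rescale a 1D feature vector to [0, 1] to remove scale variation."""
    lo, hi = min(features), max(features)
    span = (hi - lo) or eps  # guard against a constant vector
    return [(x - lo) / span for x in features]

feats = [12.0, 250.0, 3.0, 91.0]   # hypothetical CNN-derived 1D features
print(min_max_scale(feats))        # 3.0 maps to 0.0, 250.0 maps to 1.0
```

In a full pipeline the same minimum and maximum fitted on the training features would be reused for validation and test data, as scikit-learn's `MinMaxScaler` does.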
Affiliation(s)
- Abobaker Mohammed Qasem Farhan
- School of information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shangming Yang
- School of information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
36
Shaheed K, Szczuko P, Abbas Q, Hussain A, Albathan M. Computer-Aided Diagnosis of COVID-19 from Chest X-ray Images Using Hybrid-Features and Random Forest Classifier. Healthcare (Basel) 2023; 11:healthcare11060837. [PMID: 36981494 PMCID: PMC10047954 DOI: 10.3390/healthcare11060837] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 03/16/2023] Open
Abstract
In recent years, a lot of attention has been paid to using radiology imaging to automatically detect COVID-19. (1) Background: A number of computer-aided diagnostic schemes now help radiologists and doctors perform diagnostic COVID-19 tests quickly, accurately, and consistently. (2) Methods: Using chest X-ray images, this study proposed a cutting-edge scheme for the automatic recognition of COVID-19 and pneumonia. First, a pre-processing method based on a Gaussian filter and a logarithmic operator is applied to input chest X-ray (CXR) images to improve poor-quality images by enhancing contrast, reducing noise, and smoothing the image. Second, robust features are extracted from each enhanced chest X-ray image using a Convolutional Neural Network (CNN) transformer and an optimal collection of grey-level co-occurrence matrices (GLCM) containing features such as contrast, correlation, entropy, and energy. Finally, based on the features extracted from the input images, a random forest machine learning classifier assigns each image to one of three classes: COVID-19, pneumonia, or normal. The predicted output from the model is combined with Gradient-weighted Class Activation Mapping (Grad-CAM) visualisation for diagnosis. (3) Results: Our work is evaluated using public datasets with three different train–test splits (70–30%, 80–20%, and 90–10%) and achieved an average accuracy, F1 score, recall, and precision of 97%, 96%, 96%, and 96%, respectively. A comparative study shows that our proposed method outperforms existing and similar work; the proposed approach can thus be utilised to screen COVID-19-infected patients effectively. (4) Conclusions: For performance evaluation, metrics such as accuracy, sensitivity, and F1-measure were calculated; the performance of the proposed method is better than that of the existing methodologies, and it can thus be used for the effective diagnosis of the disease.
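The four GLCM texture features named in this abstract (contrast, correlation, entropy, energy) can be computed from scratch; in practice one would use `skimage.feature.graycomatrix`/`graycoprops`, so the version below is only a self-contained sketch on a toy quantized image:

```python
import math

def glcm_features(img, levels):
    """Build a horizontal-offset grey-level co-occurrence matrix and return
    the four texture features named in the abstract."""
    glcm = [[0.0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):      # offset (0, 1): right neighbor
            glcm[a][b] += 1.0
    total = sum(map(sum, glcm))
    p = [[v / total for v in r] for r in glcm]   # joint probabilities

    mu_i = sum(i * pij for i, r in enumerate(p) for pij in r)
    mu_j = sum(j * pij for r in p for j, pij in enumerate(r))
    var_i = sum((i - mu_i) ** 2 * pij for i, r in enumerate(p) for pij in r)
    var_j = sum((j - mu_j) ** 2 * pij for r in p for j, pij in enumerate(r))

    contrast = sum((i - j) ** 2 * pij
                   for i, r in enumerate(p) for j, pij in enumerate(r))
    energy = sum(pij ** 2 for r in p for pij in r)
    entropy = -sum(pij * math.log(pij) for r in p for pij in r if pij > 0)
    denom = math.sqrt(var_i * var_j) or 1.0      # avoid division by zero
    correlation = sum((i - mu_i) * (j - mu_j) * pij
                      for i, r in enumerate(p) for j, pij in enumerate(r)) / denom
    return {"contrast": contrast, "correlation": correlation,
            "energy": energy, "entropy": entropy}

# Toy 4x4 "image" quantized to 4 grey levels
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
print(glcm_features(img, levels=4))
```

The resulting feature dictionary would be concatenated with the CNN features before the random forest classifier; a real pipeline would also accumulate GLCMs over several offsets and angles.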
Affiliation(s)
- Kashif Shaheed
- Department of Multimedia Systems, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Piotr Szczuko
- Department of Multimedia Systems, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Correspondence: Tel.: +966-503451575
37
Moussaid A, Zrira N, Benmiloud I, Farahat Z, Karmoun Y, Benzidia Y, Mouline S, El Abdi B, Bourkadi JE, Ngote N. On the Implementation of a Post-Pandemic Deep Learning Algorithm Based on a Hybrid CT-Scan/X-ray Images Classification Applied to Pneumonia Categories. Healthcare (Basel) 2023; 11:healthcare11050662. [PMID: 36900667 PMCID: PMC10000749 DOI: 10.3390/healthcare11050662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 02/11/2023] [Accepted: 02/12/2023] [Indexed: 03/12/2023] Open
Abstract
The identification and characterization of lung diseases has been one of the most interesting research topics in recent years, requiring accurate and rapid diagnosis. Although lung imaging techniques have many advantages for disease diagnosis, the interpretation of medical lung images has always been a major problem for physicians and radiologists due to diagnostic errors. This has encouraged the use of modern artificial intelligence techniques such as deep learning. In this paper, a deep learning architecture based on EfficientNetB7, among the most advanced convolutional network architectures, was constructed to classify medical X-ray and CT images of lungs into three classes, namely common pneumonia, coronavirus pneumonia, and normal cases. In terms of accuracy, the proposed model is compared with recent pneumonia detection techniques. The system provided robust and consistent features for pneumonia detection, with predictive accuracy for the three classes mentioned above of 99.81% for radiography and 99.88% for CT. This work implements an accurate computer-aided system for the analysis of radiographic and CT medical images. The classification results are promising and will certainly improve the diagnosis and decision making for lung diseases that keep appearing over time.
Affiliation(s)
- Abdelghani Moussaid
- MECAtronique Team, CPS2E Laboratory, National Superior School of Mines Rabat, Rabat 53000, Morocco
- ISITS-Maintenance Biomédicale-/Rabat, Abulcasis International University of Health Sciences, Rabat 10000, Morocco
- Correspondence:
- Nabila Zrira
- ADOS Team, LISTD Laboratory, National Superior School of Mines Rabat, Rabat 53000, Morocco
- Ibtissam Benmiloud
- MECAtronique Team, CPS2E Laboratory, National Superior School of Mines Rabat, Rabat 53000, Morocco
- Zineb Farahat
- SSDT Team, LISTD Laboratory, National Superior School of Mines Rabat, Rabat 53000, Morocco
- Medical Simulation Center/Rabat of the Cheikh Zaid Foundation, Rabat 10000, Morocco
- Youssef Karmoun
- ISITS-Maintenance Biomédicale-/Rabat, Abulcasis International University of Health Sciences, Rabat 10000, Morocco
- Yasmine Benzidia
- ISITS-Maintenance Biomédicale-/Rabat, Abulcasis International University of Health Sciences, Rabat 10000, Morocco
- Soumaya Mouline
- Cheikh Zaïd International University Hospital, B.P. 6533, Rabat 10000, Morocco
- Bahia El Abdi
- ISITS-Maintenance Biomédicale-/Rabat, Abulcasis International University of Health Sciences, Rabat 10000, Morocco
- Jamal Eddine Bourkadi
- Faculty of Medicine and Pharmacy, Mohammed V University, B.P. 6203, Rabat 10000, Morocco
- Nabil Ngote
- MECAtronique Team, CPS2E Laboratory, National Superior School of Mines Rabat, Rabat 53000, Morocco
- ISITS-Maintenance Biomédicale-/Rabat, Abulcasis International University of Health Sciences, Rabat 10000, Morocco
38
Deep Transfer Learning Techniques-Based Automated Classification and Detection of Pulmonary Fibrosis from Chest CT Images. Processes (Basel) 2023. [DOI: 10.3390/pr11020443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
Pulmonary Fibrosis (PF) is a non-curable chronic lung disease, so a quick and accurate PF diagnosis is imperative. In the present study, we compare the performance of six state-of-the-art deep transfer learning techniques for accurately classifying patients and performing abnormality localization in Computed Tomography (CT) scan images. A total of 2299 samples comprising normal and PF-positive CT images were preprocessed. The preprocessed images were split into training (75%), validation (15%), and test (10%) data. The transfer learning models were trained and validated by optimizing hyperparameters such as the learning rate and the number of epochs. The optimized architectures were evaluated with different performance metrics to demonstrate the consistency of the optimized model. At epoch 26, using an optimized learning rate of 0.0000625, the ResNet50v2 model achieved the highest training and validation accuracy (training = 99.92%, validation = 99.22%) and minimum loss (training = 0.00428, validation = 0.00683) for CT images. The experimental evaluation on the independent test data confirms that the optimized ResNet50v2 outperformed every other optimized architecture under consideration, achieving a perfect score of 1.0 in each of the standard performance measures: accuracy, precision, recall, F1-score, Matthews Correlation Coefficient (MCC), Area under the Receiver Operating Characteristic (ROC-AUC) curve, and Area under the Precision-Recall (AUC-PR) curve. We therefore propose the optimized ResNet50v2 as a reliable diagnostic model for automatically classifying PF-positive patients using chest CT images.
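The 75/15/10 train/validation/test split used in this study is straightforward to reproduce. A minimal sketch; the seed and the use of plain integer IDs in place of images are illustrative assumptions:

```python
import random

def split_dataset(items, train_frac=0.75, val_frac=0.15, seed=0):
    """Shuffle and split a sample list into train/validation/test partitions."""
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 2299 sample IDs, matching the size of the study's preprocessed CT set
train, val, test = split_dataset(range(2299))
print(len(train), len(val), len(test))  # 1724 344 231
```

For class-imbalanced medical data a stratified split (e.g. scikit-learn's `train_test_split` with `stratify=`) would usually be preferred over this plain shuffle.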
39
Interpretable Differential Diagnosis of Non-COVID Viral Pneumonia, Lung Opacity and COVID-19 Using Tuned Transfer Learning and Explainable AI. Healthcare (Basel) 2023; 11:healthcare11030410. [PMID: 36766986 PMCID: PMC9914430 DOI: 10.3390/healthcare11030410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/20/2023] [Accepted: 01/28/2023] [Indexed: 02/04/2023] Open
Abstract
The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of the COVID-19 infection might be highly beneficial to healthcare services and people around the world. Molecular or antigen testing along with radiological X-ray imaging is now utilized in clinics to diagnose COVID-19. Nonetheless, due to the spike in coronavirus cases and hospital doctors' overwhelming workload, developing an AI-based automatic COVID detection system with high accuracy has become imperative. On X-ray images, distinguishing COVID-19 from non-COVID viral pneumonia and other lung opacities can be challenging. This research utilized artificial intelligence (AI) to deliver high-accuracy automated detection of COVID-19 from chest X-ray images. Further, this study extended to differentiating COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three distinct pre-trained models, namely Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem, classifying COVID-19 versus normal X-ray images, and obtained 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. Later we focused on developing an efficient model for multi-class classification and obtained an accuracy of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's performances were similar, Xception proved more efficient with its higher precision, recall, and F1 scores. Finally, we employed explainable AI on each of the utilized models, which adds interpretability to our study. Furthermore, we conducted a comprehensive comparison of the models' explanations, and the study revealed that Xception is more precise in indicating the actual features responsible for a model's predictions. This addition of explainable AI will benefit medical professionals greatly, as they can visualize how a model makes its predictions rather than having to trust the developed machine-learning models blindly.
40
Kanjanasurat I, Tenghongsakul K, Purahong B, Lasakul A. CNN-RNN Network Integration for the Diagnosis of COVID-19 Using Chest X-ray and CT Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:1356. [PMID: 36772394 PMCID: PMC9919640 DOI: 10.3390/s23031356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 01/07/2023] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
The 2019 coronavirus disease (COVID-19) has rapidly spread across the globe. It is crucial to identify positive cases as rapidly as humanly possible to provide appropriate treatment for patients and prevent the pandemic from spreading further. Both chest X-ray and computed tomography (CT) images are capable of accurately diagnosing COVID-19. To distinguish lung illnesses (i.e., COVID-19 and pneumonia) from normal cases using chest X-ray and CT images, we combined convolutional neural network (CNN) and recurrent neural network (RNN) models by replacing the fully connected layers of the CNN with an RNN. In this framework, the CNN is used to extract features and the RNN to model dependencies and perform classification based on the extracted features. The CNN models VGG19, ResNet152V2, and DenseNet121 were combined with long short-term memory (LSTM) and gated recurrent unit (GRU) RNN models, which are convenient to develop because all of these networks are readily available on many platforms. The proposed method was evaluated on a large dataset totaling 16,210 X-ray and CT images (5252 COVID-19 images, 6154 pneumonia images, and 4804 normal images) taken from several databases with various image sizes, brightness levels, and viewing angles. Image quality was enhanced via normalization, gamma correction, and contrast-limited adaptive histogram equalization. The ResNet152V2 with GRU combination performed best, with an accuracy of 93.37%, an F1 score of 93.54%, a precision of 93.73%, and a recall of 93.47%. From the experimental results, the proposed method is highly effective in distinguishing lung diseases. Furthermore, both CT and X-ray images can be used as input for classification, allowing for the rapid and easy detection of COVID-19.
Affiliation(s)
- Kasi Tenghongsakul
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Boonchana Purahong
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Attasit Lasakul
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
41
Optimization of facial skin temperature-based anomaly detection model considering diurnal variation. ARTIFICIAL LIFE AND ROBOTICS 2023. [DOI: 10.1007/s10015-023-00853-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
42
Ait Nasser A, Akhloufi MA. A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023; 13:159. [PMID: 36611451 PMCID: PMC9818166 DOI: 10.3390/diagnostics13010159] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/21/2022] [Accepted: 12/26/2022] [Indexed: 01/05/2023] Open
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities. It has a preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases. Most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research. Machine learning (ML) and deep learning (DL) provide techniques that make this task faster and more efficient. Numerous experiments in the diagnosis of various diseases proved the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability to better understand the DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada
43
Akhter Y, Singh R, Vatsa M. AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023; 6:1120989. [PMID: 37091458 PMCID: PMC10116151 DOI: 10.3389/fdata.2023.1120989] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/06/2023] [Indexed: 04/25/2023] Open
Abstract
Chest Radiograph or Chest X-ray (CXR) is a common, fast, non-invasive, relatively cheap radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as Pneumonia, Tuberculosis, Pneumoconiosis, COVID-19, and lung cancer. Among all radiological examinations, 2 billion CXRs are performed worldwide every year. However, the availability of the workforce to handle this workload in hospitals is a challenge, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening and diagnostic aids can play a crucial part in social welfare. However, it faces multiple challenges, such as small sample spaces, data privacy, poor-quality samples, adversarial attacks and, most importantly, model interpretability, on which the reliability of machine intelligence depends. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based systems for diagnosis. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.
44
Alaiad AI, Mugdadi EA, Hmeidi II, Obeidat N, Abualigah L. Predicting the Severity of COVID-19 from Lung CT Images Using Novel Deep Learning. J Med Biol Eng 2023; 43:135-146. [PMID: 37077696 PMCID: PMC10010231 DOI: 10.1007/s40846-023-00783-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 02/16/2023] [Indexed: 04/21/2023]
Abstract
Purpose Coronavirus Disease 2019 (COVID-19) has had major social, medical, and economic impacts globally. This study aims to develop a deep-learning model that can predict the severity of COVID-19 in patients based on CT images of their lungs. Methods COVID-19 causes lung infections, and qRT-PCR is an essential tool for detecting virus infection. However, qRT-PCR is inadequate for determining the severity of the disease and the extent to which it affects the lung. In this paper, we aim to determine the severity level of COVID-19 by studying lung CT scans of people diagnosed with the virus. Results We used images from King Abdullah University Hospital in Jordan; we collected our dataset from 875 cases with 2205 CT images. A radiologist classified the images into four levels of severity: normal, mild, moderate, and severe. We used various deep-learning algorithms to predict the severity of lung disease. The results show that the best-performing deep-learning algorithm was ResNet101, with an accuracy score of 99.5% and a data loss rate of 0.03%. Conclusion The proposed model assists in diagnosing and treating COVID-19 patients and helps improve patient outcomes.
Affiliation(s)
- Ahmad Imwafak Alaiad
- Computer Information System, Jordan University of Science and Technology, Irbid, Jordan
- Esraa Ahmad Mugdadi
- Computer Information System, Jordan University of Science and Technology, Irbid, Jordan
- Ismail Ibrahim Hmeidi
- Computer Information System, Jordan University of Science and Technology, Irbid, Jordan
- Naser Obeidat
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid, Jordan
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq, 25113 Jordan
- College of Engineering, Yuan Ze University, Taoyuan, Taiwan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman, 19328 Jordan
- Faculty of Information Technology, Middle East University, Amman, 11831 Jordan
- Applied Science Research Center, Applied Science Private University, Amman, 11931 Jordan
- School of Computer Sciences, Universiti Sains Malaysia, 11800 Pulau Pinang, Malaysia
45
Nawaz S, Rasheed S, Sami W, Hussain L, Aldweesh A, Tag eldin E, Ahmad Salaria U, Shahbaz Khan M. Deep Learning ResNet101 Deep Features of Portable Chest X-Ray Accurately Classify COVID-19 Lung Infection. COMPUTERS, MATERIALS & CONTINUA 2023; 75:5213-5228. [DOI: 10.32604/cmc.2023.037543] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 01/30/2023] [Indexed: 01/14/2025]
46
Deep Learning for Detecting COVID-19 Using Medical Images. BIOENGINEERING (BASEL, SWITZERLAND) 2022; 10:bioengineering10010019. [PMID: 36671590 PMCID: PMC9854504 DOI: 10.3390/bioengineering10010019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
The global spread of COVID-19 (also known as SARS-CoV-2) is a major international public health crisis [...].
47
Lanjewar MG, Shaikh AY, Parab J. Cloud-based COVID-19 disease prediction system from X-Ray images using convolutional neural network on smartphone. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:1-30. [PMID: 36467434 PMCID: PMC9684956 DOI: 10.1007/s11042-022-14232-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 11/01/2022] [Accepted: 11/04/2022] [Indexed: 06/17/2023]
Abstract
COVID-19 has engulfed over 200 nations through human-to-human transmission, either direct or indirect. Reverse Transcription-Polymerase Chain Reaction (RT-PCR) has been endorsed as the standard COVID-19 diagnostic procedure but has caveats such as low sensitivity, the need for a skilled workforce, and long turnaround times. Coronaviruses show significant manifestations in Chest X-Ray (CX-Ray) images, which can thus be a viable option for an alternative COVID-19 diagnostic strategy. An automatic COVID-19 detection system can be developed to detect the disease, thus reducing strain on the healthcare system. This paper discusses a real-time Convolutional Neural Network (CNN) based system for COVID-19 prediction from CX-Ray images on the cloud. The implemented CNN model displays exemplary results, with training accuracy of 99.94% and validation accuracy of 98.81%. The confusion matrix was utilized to assess the model's outcome, yielding 99% precision, 98% recall, a 99% F1 score, 100% training area under the curve (AUC), and 98.3% validation AUC. The same CX-Ray dataset was also employed to predict COVID-19 with deep Convolutional Neural Networks (DCNNs), such as ResNet50, VGG19, InceptionV3, and Xception. The prediction outcomes demonstrated that the present CNN was more capable than the DCNN models. The efficient CNN model was deployed to a Platform as a Service (PaaS) cloud.
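The confusion-matrix metrics quoted in this abstract (precision, recall, F1) reduce to three ratios over the cell counts. A minimal sketch; the counts below are invented for illustration, not taken from the paper:

```python
def prf_from_counts(tp, fp, fn):
    """Derive precision, recall, and F1 score from binary confusion-matrix counts."""
    precision = tp / (tp + fp)                       # of predicted positives, how many correct
    recall = tp / (tp + fn)                          # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Hypothetical counts for a COVID-19 vs. normal split
p, r, f1 = prf_from_counts(tp=98, fp=1, fn=2)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.99 0.98 0.985
```

Note that true negatives do not enter any of the three metrics, which is why they complement plain accuracy on imbalanced medical datasets.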
Affiliation(s)
- Madhusudan G. Lanjewar
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Arman Yusuf Shaikh
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Jivan Parab
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
48
Sun W, Pang Y, Zhang G. CCT: Lightweight compact convolutional transformer for lung disease CT image classification. Front Physiol 2022; 13:1066999. [DOI: 10.3389/fphys.2022.1066999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 10/25/2022] [Indexed: 11/06/2022] Open
Abstract
Computed tomography (CT) imaging results are an important criterion for the diagnosis of lung disease. CT images can clearly show the characteristics of lung lesions. Early and accurate detection of lung diseases helps clinicians to improve patient care effectively. Therefore, in this study, we used a lightweight compact convolutional transformer (CCT) to build a prediction model for lung disease classification using chest CT images. We added a position offset term and replaced the attention mechanism of the transformer encoder with an axial attention module, so that the model attends separately along the height and width dimensions, improving its classification performance. We show that the model effectively classifies COVID-19, community pneumonia, and normal conditions on the CC-CCII dataset. The proposed model outperforms other comparable models on the test set, achieving an accuracy of 98.5% and a sensitivity of 98.6%. The results show that our method achieves a larger receptive field on CT images, which positively affects their classification. Thus, the method can provide adequate assistance to clinicians.
49
Hamza A, Attique Khan M, Wang SH, Alhaisoni M, Alharbi M, Hussein HS, Alshazly H, Kim YJ, Cha J. COVID-19 classification using chest X-ray images based on fusion-assisted deep Bayesian optimization and Grad-CAM visualization. Front Public Health 2022; 10:1046296. [PMID: 36408000 PMCID: PMC9672507 DOI: 10.3389/fpubh.2022.1046296] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 10/12/2022] [Indexed: 11/06/2022] Open
Abstract
The COVID-19 virus's rapid global spread has caused millions of illnesses and deaths. As a result, it has disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-ray (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning model frequently used in computer vision applications, particularly in medical imaging, to detect and classify infected regions. These techniques can assist medical personnel in the detection of patients with COVID-19. In this article, a Bayesian optimized DCNN and explainable AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, namely, EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained by employing Bayesian optimization (BO). Through BO, hyperparameters have been selected instead of static initialization. Features are extracted from the trained model and fused using a slicing-based serial fusion approach. The fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets are used for the experimental process to obtain improved accuracies of 98.8, 97.9, and 99.4%, respectively.
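The Grad-CAM step mentioned in the abstract, weighting a convolutional layer's activation maps by the pooled gradients of the class score to highlight the infected region, can be sketched in NumPy. This is a generic illustration of the Grad-CAM computation given precomputed activations and gradients, not the paper's pipeline:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one class.
    activations: (K, H, W) feature maps from a conv layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps."""
    # channel importance weights: global-average-pool the gradients
    alpha = gradients.mean(axis=(1, 2))                               # (K,)
    # weighted sum of activation maps, then ReLU to keep positive evidence
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                         # scale to [0, 1]
    return cam                                                        # (H, W)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the X-ray, which is how such frameworks localize the region driving the prediction.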
Affiliation(s)
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Shui-Hua Wang
- Department of Mathematics, University of Leicester, Leicester, United Kingdom
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hany S. Hussein
- Electrical Engineering Department, College of Engineering, King Khalid University, Abha, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan, Egypt
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena, Egypt
- Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul, South Korea
- Jaehyuk Cha
- Department of Computer Science, Hanyang University, Seoul, South Korea
|
50
|
An efficient lung disease classification from X-ray images using hybrid Mask-RCNN and BiDLSTM. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|