1.
Kabir MM, Mridha M, Rahman A, Hamid MA, Monowar MM. Detection of COVID-19, pneumonia, and tuberculosis from radiographs using AI-driven knowledge distillation. Heliyon 2024; 10:e26801. [PMID: 38444490; PMCID: PMC10912466; DOI: 10.1016/j.heliyon.2024.e26801]
Abstract
Chest radiography is an essential diagnostic tool for respiratory diseases such as COVID-19, pneumonia, and tuberculosis because it accurately depicts the structures of the chest. However, accurate detection of these diseases from radiographs is a complex task that requires the availability of medical imaging equipment and trained personnel. Conventional deep learning models offer a viable automated solution for this task. However, the high complexity of these models often poses a significant obstacle to their practical deployment within automated medical applications, including mobile apps, web apps, and cloud-based platforms. This study addresses and resolves this dilemma by reducing the complexity of neural networks using knowledge distillation techniques (KDT). The proposed technique trains a neural network on an extensive collection of chest X-ray images and propagates the knowledge to a smaller network capable of real-time detection. To create a comprehensive dataset, we have integrated three popular chest radiograph datasets with chest radiographs for COVID-19, pneumonia, and tuberculosis. Our experiments show that this knowledge distillation approach outperforms conventional deep learning methods in terms of computational complexity and performance for real-time respiratory disease detection. Specifically, our system achieves an impressive average accuracy of 0.97, precision of 0.94, and recall of 0.97.
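The teacher-to-student transfer described above is conventionally implemented by matching the student's temperature-softened class probabilities to the teacher's (Hinton-style distillation). A minimal NumPy sketch of that standard loss, not the authors' actual code; the parameter values are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a soft (teacher-matching) term and a hard (label) term."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on the softened distributions, scaled by T^2
    # as in the original formulation.
    soft = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)),
                  axis=-1).mean() * T * T
    # Ordinary cross-entropy against the one-hot ground truth.
    p_hard = softmax(student_logits)
    hard = -np.log(p_hard[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard
```

The small "student" network is then trained on this combined loss so that it inherits the large network's decision boundaries while staying cheap enough for mobile or web deployment.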
Affiliation(s)
- Md Mohsin Kabir: Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- M.F. Mridha: Department of Computer Science, American International University-Bangladesh, Dhaka-1229, Bangladesh
- Ashifur Rahman: Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- Md. Abdul Hamid: Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
- Muhammad Mostafa Monowar: Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
2.
Haque SBU, Zafar A. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images. Journal of Imaging Informatics in Medicine 2024; 37:308-338. [PMID: 38343214; DOI: 10.1007/s10278-023-00916-8]
Abstract
In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability has been particularly vital during the COVID-19 pandemic, where rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can potentially disrupt the models' decision-making processes. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model's resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. 
We then assess the vulnerability of these models to three targeted adversarial attacks: fast gradient sign method (FGSM), projected gradient descent (PGD), and basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models' resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.
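FGSM, the first attack evaluated above, perturbs each input pixel one step in the direction of the sign of the loss gradient. A toy NumPy sketch on a binary logistic "model" (illustrative only; the paper attacks deep CNNs, and the weights here are hypothetical):

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps=0.1):
    """One-step FGSM against a binary logistic model p = sigmoid(w.x + b).

    x: input vector, y: true label in {0, 1}, eps: perturbation budget.
    Returns the adversarial example x + eps * sign(d loss / d x).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    # Gradient of binary cross-entropy with respect to the INPUT is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

Because every pixel moves by at most eps, the perturbation is visually imperceptible, which is exactly why inference-time filtering such as JPEG compression can wash it out.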
Affiliation(s)
- Sheikh Burhan Ul Haque: Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
- Aasim Zafar: Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
3.
He Y, Gao Z, Li Y, Wang Z. A lightweight multi-modality medical image semantic segmentation network base on the novel UNeXt and Wave-MLP. Comput Med Imaging Graph 2024; 111:102311. [PMID: 38035411; DOI: 10.1016/j.compmedimag.2023.102311]
Abstract
Medical images sometimes contain diseased regions of different sizes and shapes, which makes it difficult to accurately segment these areas or their edges. However, directly coupling CNN and MLP to construct global and local dependency models may also cause significant computational complexity issues. In this paper, a unique, lightweight UNeXt network segmentation model for medical images based on dynamic aggregation tokens was proposed. Firstly, the Wave Block module in Wave-MLP was introduced to replace the Tok-MLP module in UNeXt. The phase term in Wave Block can dynamically aggregate tokens, improving the segmentation accuracy of the model. Secondly, an AG attention gate module is added at the skip connection to suppress irrelevant feature representations, noise, and artifacts in the sampling path of the encoding network, thereby reducing computational costs. Finally, the Focal Tversky Loss was added to handle both binary and multi-class classification tasks. Quantitative and qualitative experiments were conducted on two public datasets: COVID-19 CT and BraTS 2018 MRI. The Dice, precision, recall, and IoU scores of the proposed model on the COVID-19 dataset were 0.928, 0.867, 0.916, and 0.940, respectively. On BraTS 2018, the Dice scores of the ET, WT, and TC categories were 0.933, 0.925, and 0.918, respectively, and the HD scores were 1.595, 2.348, and 1.549, respectively. At the same time, the model is lightweight and has a considerably decreased training time, with GFLOPs and Params of 0.52 and 0.76, respectively. According to the experimental data, the proposed lightweight model is superior to other existing methods in terms of segmentation accuracy and computational complexity.
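The Focal Tversky loss used above generalizes the Dice loss by weighting false negatives and false positives separately (alpha, beta) and focusing training on hard examples through an exponent gamma. A NumPy sketch of the usual formulation (the parameter values are illustrative, not necessarily the paper's):

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """pred: per-pixel foreground probabilities; target: binary ground-truth mask.

    Tversky index TI = TP / (TP + alpha*FN + beta*FP); loss = (1 - TI)^gamma.
    alpha > beta penalizes missed foreground more, which helps with small lesions.
    """
    pred, target = pred.ravel(), target.ravel()
    tp = np.sum(pred * target)
    fn = np.sum((1 - pred) * target)
    fp = np.sum(pred * (1 - target))
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - ti) ** gamma
```

With gamma < 1, the loss gradient grows near TI ≈ 1, keeping pressure on examples that are almost, but not quite, segmented correctly.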
Affiliation(s)
- Yi He: School of Computer and Information Engineering, Heilongjiang University of Science and Technology, Heilongjiang 150022, PR China
- Zhijun Gao: School of Computer and Information Engineering, Heilongjiang University of Science and Technology, Heilongjiang 150022, PR China
- Yi Li: School of Computer and Information Engineering, Heilongjiang University of Science and Technology, Heilongjiang 150022, PR China
- Zhiming Wang: School of Computer and Information Engineering, Heilongjiang University of Science and Technology, Heilongjiang 150022, PR China
4.
Kushol R, Luk CC, Dey A, Benatar M, Briemberg H, Dionne A, Dupré N, Frayne R, Genge A, Gibson S, Graham SJ, Korngut L, Seres P, Welsh RC, Wilman AH, Zinman L, Kalra S, Yang YH. SF2Former: Amyotrophic lateral sclerosis identification from multi-center MRI data using spatial and frequency fusion transformer. Comput Med Imaging Graph 2023; 108:102279. [PMID: 37573646; DOI: 10.1016/j.compmedimag.2023.102279]
Abstract
Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder characterized by motor neuron degeneration. Significant research has begun to establish brain magnetic resonance imaging (MRI) as a potential biomarker to diagnose and monitor the state of the disease. Deep learning has emerged as a prominent class of machine learning algorithms in computer vision and has shown successful applications in various medical image analysis tasks. However, deep learning methods applied to neuroimaging have not achieved superior performance in classifying ALS patients from healthy controls due to insignificant structural changes correlated with pathological features. Thus, a critical challenge in deep models is to identify discriminative features from limited training data. To address this challenge, this study introduces a framework called SF2Former, which leverages the power of the vision transformer architecture to distinguish ALS subjects from the control group by exploiting the long-range relationships among image features. Additionally, spatial and frequency domain information is combined to enhance the network's performance, as MRI scans are initially captured in the frequency domain and then converted to the spatial domain. The proposed framework is trained using a series of consecutive coronal slices and utilizes pre-trained weights from ImageNet through transfer learning. Finally, a majority voting scheme is employed on the coronal slices of each subject to generate the final classification decision. The proposed architecture is extensively evaluated with multi-modal neuroimaging data (i.e., T1-weighted, R2*, FLAIR) using two well-organized versions of the Canadian ALS Neuroimaging Consortium (CALSNIC) multi-center datasets. The experimental results demonstrate the superiority of the proposed strategy in terms of classification accuracy compared to several popular deep learning-based techniques.
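The subject-level decision rule described above, a majority vote over the per-slice predictions, can be sketched in a few lines (illustrative; tie-breaking here simply falls to `Counter`'s ordering, which the paper does not specify):

```python
from collections import Counter

def subject_prediction(slice_labels):
    """Aggregate per-slice class predictions for one subject into a single
    subject-level label by majority vote (the most frequent label wins)."""
    return Counter(slice_labels).most_common(1)[0][0]
```

Voting over many consecutive coronal slices makes the final call robust to a few misclassified slices.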
Affiliation(s)
- Rafsanjany Kushol: Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Collin C Luk: Division of Neurology, Department of Medicine, University of Alberta, Edmonton, AB, Canada; Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Avyarthana Dey: Division of Neurology, Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Michael Benatar: Department of Neurology, University of Miami, Miller School of Medicine, Miami, FL, USA
- Hannah Briemberg: Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Annie Dionne: Axe Neurosciences, CHU de Québec, Université Laval, Québec, QC, Canada; Department of Medicine, Faculty of Medicine, Université Laval, Quebec City, QC, Canada
- Nicolas Dupré: Axe Neurosciences, CHU de Québec, Université Laval, Québec, QC, Canada; Department of Medicine, Faculty of Medicine, Université Laval, Quebec City, QC, Canada
- Richard Frayne: Departments of Radiology and Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Angela Genge: Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Summer Gibson: Department of Neurology, University of Utah, Salt Lake City, UT, USA
- Simon J Graham: Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- Lawrence Korngut: Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Peter Seres: Departments of Biomedical Engineering and Radiology and Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Robert C Welsh: Department of Psychiatry, University of Utah, Salt Lake City, UT, USA
- Alan H Wilman: Departments of Biomedical Engineering and Radiology and Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Lorne Zinman: Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada; Division of Neurology, Department of Medicine, University of Toronto, Toronto, ON, Canada
- Sanjay Kalra: Department of Computing Science, University of Alberta, Edmonton, AB, Canada; Division of Neurology, Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Yee-Hong Yang: Department of Computing Science, University of Alberta, Edmonton, AB, Canada
5.
Mamalakis M, Garg P, Nelson T, Lee J, Swift AJ, Wild JM, Clayton RH. Artificial Intelligence framework with traditional computer vision and deep learning approaches for optimal automatic segmentation of left ventricle with scar. Artif Intell Med 2023; 143:102610. [PMID: 37673578; DOI: 10.1016/j.artmed.2023.102610]
Abstract
Automatic segmentation of the cardiac left ventricle with scars remains a challenging and clinically significant task, as it is essential for patient diagnosis and treatment pathways. This study aimed to develop a novel framework and cost function to achieve optimal automatic segmentation of the left ventricle with scars using LGE-MRI images. To ensure the generalization of the framework, an unbiased validation protocol was established using out-of-distribution (OOD) internal and external validation cohorts, and intra-observer and inter-observer variability ground truths. The framework employs a combination of traditional computer vision techniques and deep learning to achieve optimal segmentation results. The traditional approach uses multi-atlas techniques, active contours, and k-means methods, while the deep learning approach utilizes various deep learning techniques and networks. The study found that the traditional computer vision technique delivered more accurate results than deep learning, except in cases where there was breath misalignment error. The optimal solution of the framework achieved robust and generalized results with Dice scores of 82.8 ± 6.4% and 72.1 ± 4.6% in the internal and external OOD cohorts, respectively. The developed framework offers a high-performance solution for automatic segmentation of the left ventricle with scars using LGE-MRI. Unlike existing state-of-the-art approaches, it achieves unbiased results across different hospitals and vendors without the need for training or tuning in hospital cohorts. This framework offers a valuable tool for experts to accomplish the task of fully automatic segmentation of the left ventricle with scars based on a single-modality cardiac scan.
Affiliation(s)
- Michail Mamalakis: Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
- Pankaj Garg: Department of Cardiology, Sheffield Teaching Hospitals, Sheffield, S5 7AU, UK
- Tom Nelson: Department of Cardiology, Sheffield Teaching Hospitals, Sheffield, S5 7AU, UK
- Justin Lee: Department of Cardiology, Sheffield Teaching Hospitals, Sheffield, S5 7AU, UK
- Andrew J Swift: Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK; Department of Infection, Immunity & Cardiovascular Disease, University of Sheffield, Sheffield, UK
- James M Wild: Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Polaris, Imaging Sciences, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Richard H Clayton: Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
6.
Gopatoti A, Vijayalakshmi P. MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images. Biomed Signal Process Control 2023; 85:104857. [PMID: 36968651; PMCID: PMC10027978; DOI: 10.1016/j.bspc.2023.104857]
Abstract
Coronavirus disease (COVID-19) had caused over 603 million confirmed cases as of September 2022, and its rapid spread has raised concerns worldwide. More than 6.4 million deaths among confirmed patients have been reported. According to reports, the COVID-19 virus causes lung damage and rapidly mutates before the patient receives any diagnosis-specific medicine. Daily increasing COVID-19 cases and the limited number of diagnostic test kits encourage the use of deep learning (DL) models to assist health care practitioners using chest X-ray (CXR) images. The CXR is a low-radiation radiography tool available in hospitals to diagnose COVID-19 and combat its spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and MTMC-UR2CNet with attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation outputs of MTMC-UR2CNet and MTMC-AUR2CNet are mapped individually with their input CXRs to generate the region of interest (ROI). The multi-textural features extracted from the ROI of each proposed MTMC network are fused and used to train a Whale Optimization Algorithm (WOA)-based DeepCNN classifier that classifies the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. The experimental results show that MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images, with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. MTMC-AUR2CNet also improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60%, compared to MTMC-UR2CNet.
Affiliation(s)
- Anandbabu Gopatoti: Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India; Centre for Research, Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi: Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
7.
Feyisa DW, Ayano YM, Debelee TG, Schwenker F. Weak Localization of Radiographic Manifestations in Pulmonary Tuberculosis from Chest X-ray: A Systematic Review. Sensors (Basel) 2023; 23:6781. [PMID: 37571564; PMCID: PMC10422452; DOI: 10.3390/s23156781]
Abstract
Pulmonary tuberculosis (PTB) is a bacterial infection that affects the lung. PTB remains one of the infectious diseases with the highest global mortalities. Chest radiography is a technique that is often employed in the diagnosis of PTB. Radiologists identify the severity and stage of PTB by inspecting radiographic features in the patient's chest X-ray (CXR). The most common radiographic features seen on CXRs include cavitation, consolidation, masses, pleural effusion, calcification, and nodules. Identifying these CXR features will help physicians in diagnosing a patient. However, identifying these radiographic features for intricate disorders is challenging, and the accuracy depends on the radiologist's experience and level of expertise. So, researchers have proposed deep learning (DL) techniques to detect and mark areas of tuberculosis infection in CXRs. DL models have been proposed in the literature because of their inherent capacity to detect diseases and segment the manifestation regions from medical images. However, fully supervised semantic segmentation requires several pixel-by-pixel labeled images. The annotation of such a large amount of data by trained physicians has some challenges. First, the annotation requires a significant amount of time. Second, the cost of hiring trained physicians is expensive. In addition, the subjectivity of medical data poses a difficulty in having standardized annotation. As a result, there is increasing interest in weak localization techniques. Therefore, in this review, we identify methods employed in the weakly supervised segmentation and localization of radiographic manifestations of pulmonary tuberculosis from chest X-rays. First, we identify the most commonly used public chest X-ray datasets for tuberculosis identification. Following that, we discuss the approaches for weakly localizing tuberculosis radiographic manifestations in chest X-rays. 
The weakly supervised localization of PTB can highlight the region of the chest X-ray image that contributed most to the DL model's classification output and help pinpoint the diseased area. Finally, we discuss the limitations and challenges of weakly supervised techniques in localizing TB manifestation regions in chest X-ray images.
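Many of the weak-localization methods surveyed here trace back to class activation mapping, where the heatmap is a class-weighted sum of the last convolutional feature maps. A minimal NumPy sketch of plain CAM (Grad-CAM replaces the class weights with spatially pooled gradients); the shapes and values are illustrative:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W) activations of the last convolutional layer.
    class_weights: (C,) final-layer weights for the target class.
    Returns an (H, W) heatmap, ReLU'd and normalized to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)          # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Upsampled to the input resolution and overlaid on the CXR, such a map marks the region that drove the TB classification without any pixel-level annotation.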
Affiliation(s)
- Degaga Wolde Feyisa: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia
- Yehualashet Megersa Ayano: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia
- Taye Girma Debelee: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia; Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa P.O. Box 120611, Ethiopia
- Friedhelm Schwenker: Institute of Neural Information Processing, Ulm University, 89069 Ulm, Germany
8.
Iqbal A, Usman M, Ahmed Z. Tuberculosis chest X-ray detection using CNN-based hybrid segmentation and classification approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104667]
9.
Oh J, Park C, Lee H, Rim B, Kim Y, Hong M, Lyu J, Han S, Choi S. OView-AI Supporter for Classifying Pneumonia, Pneumothorax, Tuberculosis, Lung Cancer Chest X-ray Images Using Multi-Stage Superpixels Classification. Diagnostics (Basel) 2023; 13:1519. [PMID: 37174910; PMCID: PMC10177540; DOI: 10.3390/diagnostics13091519]
Abstract
The deep learning approach has recently attracted much attention for its outstanding performance to assist in clinical diagnostic tasks, notably in computer-aided solutions. Computer-aided solutions are being developed using chest radiography to identify lung diseases. A chest X-ray image is one of the most often utilized diagnostic imaging modalities in computer-aided solutions since it produces non-invasive standard-of-care data. However, the accurate identification of a specific illness in chest X-ray images still poses a challenge due to their high inter-class similarities and low intra-class variant abnormalities, especially given the complex nature of radiographs and the complex anatomy of the chest. In this paper, we proposed a deep-learning-based solution to classify four lung diseases (pneumonia, pneumothorax, tuberculosis, and lung cancer) and healthy lungs using chest X-ray images. In order to achieve a high performance, the EfficientNet B7 model with the pre-trained weights of ImageNet trained by Noisy Student was used as a backbone model, followed by our proposed fine-tuned layers and hyperparameters. Our study achieved an average test accuracy of 97.42%, sensitivity of 95.93%, and specificity of 99.05%. Additionally, our findings were utilized as diagnostic supporting software in OView-AI system (computer-aided application). We conducted 910 clinical trials and achieved an AUC confidence interval (95% CI) of the diagnostic results in the OView-AI system of 97.01%, sensitivity of 95.68%, and specificity of 99.34%.
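The per-class sensitivity and specificity reported above reduce, in the one-vs-rest setting, to simple confusion-matrix ratios. A small sketch of those definitions (not the authors' evaluation code; the labels are hypothetical):

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """One-vs-rest sensitivity (recall on the positive class) and
    specificity (recall on the rest) for the given positive label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)
```

Averaging these per-class values over the five categories yields the kind of summary figures quoted in the abstract.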
Affiliation(s)
- Joonho Oh: Department of Mechanical System Engineering, Chosun University, Gwangju 61452, Republic of Korea; OTOM, Co., Ltd., Gwangju 61042, Republic of Korea
- Chanho Park: Department of Radiology, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea
- Hongchang Lee: Haewootech Co., Ltd., Busan 46742, Republic of Korea
- Younggyu Kim: OTOM, Co., Ltd., Gwangju 61042, Republic of Korea
- Min Hong: Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
- Jiwon Lyu: Division of Respiratory Medicine, Department of Internal Medicine, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea
- Suha Han: Department of Nursing, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea
- Seongjun Choi: Department of Otolaryngology-Head and Neck Surgery, Cheonan Hospital, Soonchunhyang University College of Medicine, Cheonan 31151, Republic of Korea
10.
Mamalakis M, Dwivedi K, Sharkey M, Alabed S, Kiely D, Swift AJ. A transparent artificial intelligence framework to assess lung disease in pulmonary hypertension. Sci Rep 2023; 13:3812. [PMID: 36882484; PMCID: PMC9990015; DOI: 10.1038/s41598-023-30503-4]
Abstract
Recent studies have recognized the importance of characterizing the extent of lung disease in pulmonary hypertension patients using computed tomography. The trustworthiness of an artificial intelligence system is linked with the depth of its evaluation along functional, operational, usability, safety, and validation dimensions. The safety and validation of an artificial intelligence tool are linked to the uncertainty estimation of the model's predictions. On the other hand, functionality, operation, and usability can be achieved by explainable deep learning approaches, which can verify the learning patterns and behavior of the network from a generalized point of view. We developed an artificial intelligence framework to map the 3D anatomical models of patients with lung disease in pulmonary hypertension. To verify the trustworthiness of the framework, we studied the uncertainty estimation of the network's predictions and explained the learning patterns of the network. To that end, a new generalized technique combining local explainable and interpretable dimensionality reduction approaches (PCA-GradCam, PCA-Shape) was developed. Our open-source software framework was evaluated on unbiased validation datasets, achieving accurate, robust, and generalized results.
Affiliation(s)
- Michail Mamalakis: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK; Department of Computer Science, University of Sheffield, 211 Portobello, Sheffield, S1 4DP, UK; Insigneo Institute for in silico Medicine, University of Sheffield, The Pam Liversidge Building, Sheffield, S1 3JD, UK
- Krit Dwivedi: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK; Insigneo Institute for in silico Medicine, University of Sheffield, The Pam Liversidge Building, Sheffield, S1 3JD, UK
- Michael Sharkey: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK
- Samer Alabed: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK; Insigneo Institute for in silico Medicine, University of Sheffield, The Pam Liversidge Building, Sheffield, S1 3JD, UK
- David Kiely: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK; Department of Cardiology, University of Sheffield, Sheffield Teaching Hospitals, Sheffield, S5 7AU, UK
- Andrew J Swift: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Beech Hill Rd, Sheffield, S10 2RX, UK; Insigneo Institute for in silico Medicine, University of Sheffield, The Pam Liversidge Building, Sheffield, S1 3JD, UK
11.
Balan E, Saraniya O. Novel neural network architecture using sharpened cosine similarity for robust classification of Covid-19, pneumonia and tuberculosis diseases from X-rays. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-222840]
Abstract
COVID-19 is a rapidly proliferating transmissible virus that substantially impacts the world population, creating a growing need for quick testing, diagnosis, and treatment. Early COVID-19 detection is crucial for treating infected individuals, stopping the spread of the disease, and curing severe pneumonia. Along with COVID-19, various pneumonia etiologies, including tuberculosis, pose additional difficulties for the medical system. In this study, COVID-19, pneumonia, tuberculosis, and other specific diseases are categorized using a Sharpened Cosine Similarity Network (SCS-Net), which replaces the dot products in neural networks with sharpened cosine similarity. To benchmark the SCS-Net, the model's performance is evaluated on binary-class (COVID-19 and normal) and four-class (tuberculosis, COVID-19, pneumonia, and normal) X-ray images. The proposed SCS-Net for distinguishing various lung disorders has been successfully validated. In multi-class classification, the proposed SCS-Net achieved an accuracy of 94.05% and a Cohen's kappa score of 90.70%; in binary classification, it achieved an accuracy of 96.67% and a Cohen's kappa score of 93.70%. According to our investigation, SCS in deep neural networks significantly lowers the test error with lower divergence, significantly increases classification accuracy, and speeds up training.
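Sharpened cosine similarity replaces the convolution dot product with a normalized similarity raised to a power, so weak, partial matches are pushed toward zero while strong matches survive. A NumPy sketch of the commonly used formulation (the exponent p and the stabilizer q are illustrative; the paper's exact parameterization may differ):

```python
import numpy as np

def sharpened_cosine_similarity(s, k, p=2.0, q=1e-3):
    """Sharpened cosine similarity between an input patch s and a kernel k.

    The cosine similarity is raised to the power p (sharpening weak matches
    toward 0) while its sign is preserved; q guards against division by zero.
    """
    dot = float(np.dot(s, k))
    norm = (np.linalg.norm(s) + q) * (np.linalg.norm(k) + q)
    cos = abs(dot) / norm
    return np.sign(dot) * cos ** p
```

Because the response is normalized by both the patch and kernel magnitudes, the unit acts as a scale-invariant template matcher rather than a plain filter.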
Affiliation(s)
- Elakkiya Balan
- Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Chennai, Tamil Nadu, India
- O. Saraniya
- Department of Electronics and Communication Engineering, Government College of Technology, Coimbatore, Tamil Nadu, India
12
Sun H, Ren G, Teng X, Song L, Li K, Yang J, Hu X, Zhan Y, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FM(S), Wang YXJ, Qin J, Chan WCL, Ying M, Cai J. Artificial intelligence-assisted multistrategy image enhancement of chest X-rays for COVID-19 classification. Quant Imaging Med Surg 2023; 13:394-416. PMID: 36620146; PMCID: PMC9816729; DOI: 10.21037/qims-22-610.
Abstract
Background The coronavirus disease 2019 (COVID-19) led to a dramatic increase in the number of cases of patients with pneumonia worldwide. In this study, we aimed to develop an AI-assisted multistrategy image enhancement technique for chest X-ray (CXR) images to improve the accuracy of COVID-19 classification. Methods Our new classification strategy consisted of 3 parts. First, the improved U-Net model with a variational encoder segmented the lung region in the CXR images processed by histogram equalization. Second, the residual net (ResNet) model with multidilated-rate convolution layers was used to suppress the bone signals in the 217 lung-only CXR images. A total of 80% of the available data were allocated for training and validation. The other 20% of the remaining data were used for testing. The enhanced CXR images containing only soft tissue information were obtained. Third, the neural network model with a residual cascade was used for the super-resolution reconstruction of low-resolution bone-suppressed CXR images. The training and testing data consisted of 1,200 and 100 CXR images, respectively. To evaluate the new strategy, improved visual geometry group (VGG)-16 and ResNet-18 models were used for the COVID-19 classification task of 2,767 CXR images. The accuracy of the multistrategy enhanced CXR images was verified through comparative experiments with various enhancement images. In terms of quantitative verification, 8-fold cross-validation was performed on the bone suppression model. In terms of evaluating the COVID-19 classification, the CXR images obtained by the improved method were used to train 2 classification models. Results Compared with other methods, the CXR images obtained based on the proposed model had better performance in the metrics of peak signal-to-noise ratio and root mean square error. The super-resolution CXR images of bone suppression obtained based on the neural network model were also anatomically close to the real CXR images. 
Compared with the initial CXR images, the classification accuracy rates of the internal and external testing data on the VGG-16 model increased by 5.09% and 12.81%, respectively, while the values increased by 3.51% and 18.20%, respectively, for the ResNet-18 model. The numerical results were better than those of the single-enhancement, double-enhancement, and no-enhancement CXR images. Conclusions The multistrategy enhanced CXR images can help to classify COVID-19 more accurately than the other existing methods.
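The first stage of the pipeline above, histogram equalization, can be sketched as follows; this is a generic global-equalization sketch for illustration, not the paper's exact preprocessing code:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps gray levels through the normalized cumulative histogram so the
    output spans the full 0-255 range, improving contrast in flat CXRs.
    Assumes the image contains at least two distinct gray levels.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                           # first occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]                    # apply lookup table
```

The darkest occupied level maps to 0 and the brightest to 255, stretching the intensity distribution before segmentation.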
Affiliation(s)
- Hongfei Sun
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; School of Automation, Northwestern Polytechnical University, Xi’an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Kang Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi’an, China
- Xiaofei Hu
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Yuefu Zhan
- Department of Radiology, Hainan Women and Children’s Medical Center, Hainan, China
- Shiu Bun Nelson Wan
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- Man Fung Esther Wong
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- King Kwong Chan
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Lu Xu
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Tak Chiu Wu
- Department of Medicine, Queen Elizabeth Hospital, Hong Kong, China
- Yi Xiang J. Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Wing Chi Lawrence Chan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Michael Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
13
Islam Bhuiyan MR, Azam S, Montaha S, Jim RI, Karim A, Khan IU, Brady M, Hasan MZ, De Boer F, Mukta MSH. Deep learning-based analysis of COVID-19 X-ray images: Incorporating clinical significance and assessing misinterpretation. Digit Health 2023; 9:20552076231215915. PMID: 38025114; PMCID: PMC10668574; DOI: 10.1177/20552076231215915.
Abstract
COVID-19, pneumonia, and tuberculosis have had a significant effect on recent global health. Since 2019, COVID-19 has been a major factor underlying the increase in respiratory-related terminal illness. Early-stage interpretation and identification of these diseases from X-ray images is essential to aid medical specialists in diagnosis. In this study, a convolutional neural network model (COV-X-net19) is developed and customized with a soft attention mechanism to classify lung diseases into four classes, normal, COVID-19, pneumonia, and tuberculosis, using chest X-ray images. The images are preprocessed with optimal parameter settings before the classification models are trained. Moreover, the proposed model is optimized by experimenting with different architectural structures and hyperparameters to further boost performance. Its performance is compared with eight state-of-the-art transfer learning models. Results suggest that COV-X-net19 outperforms the other models with a testing accuracy of 95.19%, precision of 96.49%, and F1-score of 95.13%. Another novel aspect of this study is investigating the probable reasons behind image misclassification by analyzing handcrafted imaging features with statistical evaluation. An analysis of variance (ANOVA) test is performed to identify at which points the model can and cannot identify a class accurately, and the features potentially responsible for the misclassification are found. The Random Forest feature-importance and Minimum Redundancy Maximum Relevance techniques are also explored. The methods and findings of this study can benefit clinical practice by enabling earlier detection and a better understanding of the causes of misclassification.
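A soft attention mechanism of the kind described weights spatial feature locations by a learned score; a minimal numpy sketch, where the scoring vector `w` stands in for the model's learned attention parameters (assumed, not taken from the paper):

```python
import numpy as np

def soft_attention(features, w):
    """Soft attention over spatial locations.

    features: (locations, channels) feature map flattened over space.
    w: (channels,) learned scoring vector (a stand-in here).
    Returns the attention-weighted feature vector and the attention map.
    """
    scores = features @ w
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over locations
    return alpha @ features, alpha
```

The attention map `alpha` sums to 1, so salient regions (e.g. lesion areas) dominate the pooled feature vector fed to the classifier.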
Affiliation(s)
- Md. Rahad Islam Bhuiyan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, Australia
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, Canada
- Risul Islam Jim
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Asif Karim
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, Australia
- Inam Ullah Khan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Mark Brady
- School of Law, Faculty of Arts and Society, Charles Darwin University, Casuarina, NT, Australia
- Md. Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Friso De Boer
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, Australia
- Md. Saddam Hossain Mukta
- Department of Computer Science and Engineering, United International University (UIU), Dhaka, Bangladesh
14
Lung Diseases Detection Using Various Deep Learning Algorithms. J Healthc Eng 2023; 2023:3563696. PMID: 36776955; PMCID: PMC9918362; DOI: 10.1155/2023/3563696.
Abstract
The primary objective of this proposed framework is to detect and classify lung diseases such as pneumonia, tuberculosis, and lung cancer from standard X-ray images and Computerized Tomography (CT) scan images using large image datasets. We implemented three deep learning models, namely Sequential, Functional, and Transfer models, and trained them on open-source training datasets. Deep learning techniques are a promising and successful extension of machine learning in which CNNs are trained to extract features from images, offering great potential for biomedical applications. Our primary aim is to validate our models as a new approach to this problem and to compare their performance with existing models, and conventional networks often perform poorly on tilted, rotated, and otherwise abnormally oriented images. The results demonstrate that the proposed framework with a Sequential model outperforms other existing methods, with an F1 score of 98.55%, accuracy of 98.43%, and recall of 96.33% for pneumonia, and an F1 score of 97.99%, accuracy of 99.4%, and recall of 98.88% for tuberculosis. In addition, the Functional model for cancer performed best with an accuracy of 99.9% and specificity of 99.89%, while requiring fewer trained parameters, which leads to lower computational overhead and cost than existing pretrained models. In our work, we implemented state-of-the-art CNNs with various models to classify lung diseases accurately.
15
Diagnostic Accuracy of the Artificial Intelligence Methods in Medical Imaging for Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis. J Clin Med 2022; 12:jcm12010303. PMID: 36615102; PMCID: PMC9820940; DOI: 10.3390/jcm12010303.
Abstract
Tuberculosis (TB) remains one of the leading causes of death among infectious diseases worldwide. Early screening and diagnosis of pulmonary tuberculosis (PTB) is crucial in TB control and tends to benefit from artificial intelligence. Here, we aimed to evaluate the diagnostic efficacy of a variety of artificial intelligence methods in medical imaging for PTB. We searched MEDLINE and Embase via the OVID platform to identify trials published up to November 2022 that evaluated the effectiveness of artificial-intelligence-based software in medical imaging of patients with PTB. After data extraction, the quality of studies was assessed using the quality assessment of diagnostic accuracy studies 2 (QUADAS-2) tool. Pooled sensitivity and specificity were estimated using a bivariate random-effects model. In total, 3987 references were initially identified and 61 studies were finally included, covering 124,959 individuals. The pooled sensitivity and specificity were 91% (95% confidence interval (CI), 89-93%) and 65% (54-75%), respectively, in clinical trials, and 94% (89-96%) and 95% (91-97%), respectively, in model-development studies. These findings demonstrate that artificial-intelligence-based software can serve as an accurate tool to diagnose PTB in medical imaging. However, standardized reporting guidance for AI-specific trials and multicenter clinical trials is urgently needed to truly translate this cutting-edge technology into clinical practice.
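Pooling study-level proportions can be illustrated with a deliberately simplified fixed-effect sketch; note this is NOT the bivariate random-effects model the review actually fits, which additionally models between-study variance and the sensitivity-specificity correlation:

```python
import math

def pool_proportions(props, sizes):
    """Inverse-variance pooling of proportions on the logit scale.

    A simplified fixed-effect sketch: each study's sensitivity (or
    specificity) is transformed to a logit, weighted by the inverse of its
    delta-method variance, averaged, and transformed back.
    """
    num = den = 0.0
    for p, n in zip(props, sizes):
        var = 1.0 / (n * p * (1.0 - p))      # delta-method variance of logit(p)
        w = 1.0 / var
        num += w * math.log(p / (1.0 - p))
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))
```

Larger or more extreme-proportion studies get more weight, and the pooled estimate always lies between the study values.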
16
Mostafa FA, Elrefaei LA, Fouda MM, Hossam A. A Survey on AI Techniques for Thoracic Diseases Diagnosis Using Medical Images. Diagnostics (Basel) 2022; 12:diagnostics12123034. PMID: 36553041; PMCID: PMC9777249; DOI: 10.3390/diagnostics12123034.
Abstract
Thoracic diseases refer to disorders that affect the lungs, heart, and other parts of the rib cage, such as pneumonia, novel coronavirus disease (COVID-19), tuberculosis, cardiomegaly, and fracture. Millions of people die every year from thoracic diseases. Therefore, early detection of these diseases is essential and can save many lives. Earlier, only highly experienced radiologists examined thoracic diseases, but recent developments in image processing and deep learning techniques are opening the door for the automated detection of these diseases. In this paper, we present a comprehensive review including: types of thoracic diseases; examination types of thoracic images; image pre-processing; models of deep learning applied to the detection of thoracic diseases (e.g., pneumonia, COVID-19, edema, fibrosis, tuberculosis, chronic obstructive pulmonary disease (COPD), and lung cancer); transfer learning background knowledge; ensemble learning; and future initiatives for improving the efficacy of deep learning models in applications that detect thoracic diseases. Through this survey paper, researchers may be able to gain an overall and systematic knowledge of deep learning applications in medical thoracic images. The review investigates a performance comparison of various models and a comparison of various datasets.
Affiliation(s)
- Fatma A. Mostafa
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Lamiaa A. Elrefaei
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Mostafa M. Fouda (Correspondence)
- Department of Electrical and Computer Engineering, College of Science and Engineering, Idaho State University, Pocatello, ID 83209, USA
- Aya Hossam
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
17
Shibu George G, Raj Mishra P, Sinha P, Ranjan Prusty M. COVID-19 Detection on Chest X-Ray Images Using Homomorphic Transformation and VGG Inspired Deep Convolutional Neural Network. Biocybern Biomed Eng 2022; 43:1-16. PMID: 36447948; PMCID: PMC9684127; DOI: 10.1016/j.bbe.2022.11.003.
Abstract
COVID-19 brought the whole world to a standstill. Current detection methods are time-consuming and costly. Using chest X-rays (CXRs) is a solution to this problem; however, manual examination of CXRs is a cumbersome and difficult process requiring domain specialization. Most existing methods for this application use pretrained models such as VGG19, ResNet, DenseNet, Xception, and EfficientNet, which were trained on RGB image datasets. X-rays are fundamentally single-channel images, so using an RGB-trained model is not appropriate, since it triples the number of operations by involving three channels instead of one. One way to use a pretrained model with grayscale images is to replicate the one-channel image data across three channels, which introduces redundancy; another is to alter the input layer of the pretrained model to accept one-channel data, but the weights in the subsequent layers were still trained on three-channel images, which weakens the transfer learning approach. This paper suggests a novel approach for identification of COVID-19 from CXRs using Contrast Limited Adaptive Histogram Equalization (CLAHE) along with a homomorphic transformation filter to process the pixel data and extract features from the CXRs. The processed images are provided as input to a VGG-inspired deep Convolutional Neural Network (CNN) model that takes one-channel (grayscale) image data as input and categorizes CXRs into three class labels, namely No-Findings, COVID-19, and Pneumonia. The suggested model is evaluated with the help of two publicly available datasets: one providing the COVID-19 and No-Finding images and the other the Pneumonia CXRs. The dataset comprises 6750 images in total, 2250 per class. The results show that the model achieved 96.56% accuracy for multi-class classification and 98.06% accuracy for binary classification using 5-fold stratified cross-validation (CV). This result is competitive with the performance of existing approaches to COVID-19 classification.
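The homomorphic transformation step can be sketched as follows; parameter values (`gamma_low`, `gamma_high`, `c`, `d0`) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.5, c=1.0, d0=10.0):
    """Homomorphic filtering sketch for a grayscale image.

    Works in the log domain, where illumination (low frequency) and
    reflectance (high frequency) separate additively, then applies a
    Gaussian high-frequency-emphasis filter in the Fourier domain.
    """
    log_img = np.log1p(img.astype(float))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from center
    h = (gamma_high - gamma_low) * (1 - np.exp(-c * d2 / d0 ** 2)) + gamma_low
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)                    # back from the log domain
```

With `gamma_high > 1 > gamma_low`, high-frequency detail (edges, fine structure) is boosted while slowly varying illumination is attenuated.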
Affiliation(s)
- Gerosh Shibu George
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu 600127, India
- Pratyush Raj Mishra
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu 600127, India
- Panav Sinha
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu 600127, India
- Manas Ranjan Prusty
- Centre for Cyber Physical Systems, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu 600127, India
18
Ray S, Banerjee A, Swift A, Fanstone JW, Mamalakis M, Vorselaars B, Wilkie C, Cole J, Mackenzie LS, Weeks S. A robust COVID-19 mortality prediction calculator based on Lymphocyte count, Urea, C-Reactive Protein, Age and Sex (LUCAS) with chest X-rays. Sci Rep 2022; 12:18220. PMID: 36309547; PMCID: PMC9617052; DOI: 10.1038/s41598-022-21803-2.
Abstract
There have been numerous risk tools developed to enable triaging of SARS-CoV-2 positive patients, with diverse levels of complexity. Here we present a simplified risk tool, based on minimal parameters and chest X-ray (CXR) image data, that predicts the survival of adult SARS-CoV-2 positive patients at hospital admission. We analysed the NCCID database of patient blood variables and CXR images from 19 hospitals across the UK using multivariable logistic regression. The initial dataset was non-randomly split into development and internal validation datasets of 1434 and 310 SARS-CoV-2 positive patients, respectively. External validation of the final model was conducted on 741 Accident and Emergency (A&E) admissions with suspected SARS-CoV-2 infection from a separate NHS Trust. The LUCAS mortality score includes the five strongest predictors (Lymphocyte count, Urea, C-reactive protein, Age, Sex), which are available at any point of care with rapid turnaround of results. Our simple multivariable logistic model showed high discrimination for fatal outcome, with an area under the receiver operating characteristic curve (AUC-ROC) of 0.765 (95% confidence interval (CI): 0.738-0.790) in the development cohort, 0.744 (CI: 0.673-0.808) in the internal validation cohort, and 0.752 (CI: 0.713-0.787) in the external validation cohort. The discriminatory power of LUCAS increased slightly when the CXR image data were included. LUCAS can be used to obtain valid predictions of mortality within 60 days of the SARS-CoV-2 RT-PCR result and to stratify patients into low, moderate, high, or very high risk of fatality.
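The shape of such a logistic risk score can be sketched as follows; the coefficients and intercept here are placeholders for illustration only (only the signs are plausible), and the published LUCAS weights are NOT reproduced:

```python
import math

def lucas_style_risk(lymphocytes, urea, crp, age, male, coeffs, intercept):
    """Logistic mortality-risk score over the five LUCAS predictors.

    Computes a linear predictor z from the inputs and maps it to a
    probability with the logistic function. Coefficients are hypothetical.
    """
    z = (intercept
         + coeffs["lymphocytes"] * lymphocytes
         + coeffs["urea"] * urea
         + coeffs["crp"] * crp
         + coeffs["age"] * age
         + coeffs["male"] * male)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical demo weights: lower lymphocytes raise risk (negative sign),
# higher urea, CRP, age, and male sex raise risk (positive signs).
demo_coeffs = {"lymphocytes": -0.5, "urea": 0.05, "crp": 0.01,
               "age": 0.04, "male": 0.3}
```

Because every input is routine at admission, the score can be computed at the point of care without imaging.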
Affiliation(s)
- Surajit Ray
- School of Mathematics and Statistics, University of Glasgow, Glasgow, G12 8QQ, UK
- Abhirup Banerjee
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, OX3 7DQ, UK
- Andrew Swift
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, S10 2RX, UK
- Michail Mamalakis
- School of Computer Science, University of Sheffield, 211 Portobello, Sheffield City Centre, Sheffield, S1 4DP, UK
- Bart Vorselaars
- School of Mathematics and Physics, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Craig Wilkie
- School of Mathematics and Statistics, University of Glasgow, Glasgow, G12 8QQ, UK
- Joby Cole
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, S10 2RX, UK
- Louise S Mackenzie
- School of Applied Sciences, University of Brighton, Brighton, BN2 4AT, UK
- Simonne Weeks
- School of Applied Sciences, University of Brighton, Brighton, BN2 4AT, UK
19
EVAE-Net: An Ensemble Variational Autoencoder Deep Learning Network for COVID-19 Classification Based on Chest X-ray Images. Diagnostics (Basel) 2022; 12:diagnostics12112569. DOI: 10.3390/diagnostics12112569.
Abstract
The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Machine learning techniques are currently being applied by several studies to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The COVID-19 Radiography Dataset from Kaggle is the source of chest X-ray images. The performances of the three models are evaluated. The proposed model shows satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively.
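The reparameterization-and-concatenation step the abstract describes can be sketched as follows; the encoder outputs (`mu`, `log_var`) are stand-ins for what the trained encoders would produce, and the latent size of 8 is an assumption:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample a latent embedding z = mu + sigma * eps with eps ~ N(0, I).

    The reparameterization trick keeps sampling differentiable with
    respect to the encoder outputs mu and log_var.
    """
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

rng = np.random.default_rng(0)
# Two encoder branches (stand-ins) each output a mean and log-variance.
z1 = reparameterize(np.zeros(8), np.zeros(8), rng)
z2 = reparameterize(np.ones(8), -np.ones(8), rng)
z = np.concatenate([z1, z2])   # joint embedding fed to the classification head
```

The concatenated vector `z` is what the ensemble's classification head consumes, combining the evidence from both encoders.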
20
Comprehensive Survey of Machine Learning Systems for COVID-19 Detection. J Imaging 2022; 8:jimaging8100267. PMID: 36286361; PMCID: PMC9604704; DOI: 10.3390/jimaging8100267.
Abstract
The last two years are considered the most critical period of the COVID-19 pandemic, affecting most aspects of life worldwide. The virus spreads quickly within a short period, increasing its associated fatality rate. From a clinical perspective, several diagnosis methods are used for early detection to avoid virus propagation. However, the capabilities of these methods are limited and face various challenges. Consequently, many studies have targeted automated COVID-19 detection that avoids manual intervention and allows an accurate and fast decision. As with other diseases and medical issues, Artificial Intelligence (AI) provides the medical community with potential technical solutions that help doctors and radiologists diagnose based on chest images. In this paper, a comprehensive review of these AI-based detection proposals is conducted. More than 200 papers are reviewed and analyzed, and 145 articles are extensively examined to specify the proposed AI mechanisms applied to chest medical images. The associated advantages and shortcomings are illustrated and summarized comprehensively. Several findings are concluded from a deep analysis of all the previous works using machine learning for COVID-19 detection, segmentation, and classification.
21
Anilkumar B, Srividya K, Mary Sowjanya A. Covid-19 classification using sigmoid based hyper-parameter modified DNN for CT scans and chest X-rays. Multimedia Tools and Applications 2022; 82:12513-12536. PMID: 36157352; PMCID: PMC9485800; DOI: 10.1007/s11042-022-13783-2.
Abstract
Coronavirus disease (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. Diagnosis from Computed Tomography (CT) and chest X-ray (CXR) images faces the challenges of overfitting, early diagnosis, and mode collapse. In this work, we classify COVID-19 in CT and CXR images. Initially, the images of the dataset are pre-processed with an adaptive Gaussian filter for denoising. The pre-processed image is then passed to a Sigmoid-Based Hyper-Parameter Modified DNN (SHMDNN), whose hyperparameters are tuned with the adaptive grey wolf optimization (AGWO) algorithm. Finally, the CT and CXR images are classified into three categories, namely normal, pneumonia, and COVID-19. An accuracy of 99.9% is reached, better than that of other DNN networks.
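The grey wolf optimization family used for hyperparameter tuning can be sketched in its standard form; note this is the basic algorithm, not the adaptive (AGWO) variant the paper uses, and all parameter values are illustrative:

```python
import numpy as np

def grey_wolf_minimize(f, dim, n_wolves=12, iters=60, lb=-5.0, ub=5.0, seed=0):
    """Standard grey wolf optimizer for minimizing f over a box.

    The three best wolves (alpha, beta, delta) guide the rest of the pack;
    coefficient a decays linearly, shifting from exploration to exploitation.
    """
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / iters)
        for i, x in enumerate(wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                pos += leader - A * np.abs(C * leader - x)   # move toward leader
            wolves[i] = np.clip(pos / 3.0, lb, ub)
    fitness = np.array([f(w) for w in wolves])
    return wolves[np.argmin(fitness)]
```

In a tuning context, each "wolf" would encode a candidate hyperparameter vector and `f` would be the validation loss of the network trained with it.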
Affiliation(s)
- B Anilkumar
- Department of ECE, GMR Institute of Technology, Rajam, India
- K Srividya
- Department of CSE, GMR Institute of Technology, Rajam, India
- A Mary Sowjanya
- Department of CS&SE, Andhra University College of Engineering, Visakhapatnam, India
22
Usman M, Zia T, Tariq A. Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography. J Digit Imaging 2022; 35:1445-1462. PMID: 35819537; PMCID: PMC9274969; DOI: 10.1007/s10278-022-00666-z.
Abstract
Limited availability of medical imaging datasets is a vital limitation when using “data hungry” deep learning to gain performance improvements. To deal with this issue, transfer learning has become a de facto standard, where a convolutional neural network (CNN) pre-trained on natural images (e.g., ImageNet) is fine-tuned on medical images. Meanwhile, pre-trained transformers, which are self-attention-based models, have become the de facto standard in natural language processing (NLP) and the state of the art in image classification due to their powerful transfer learning abilities. Inspired by this success, large-scale transformers (such as the vision transformer) have been trained on natural images. Based on these recent developments, this research explores the efficacy of pre-trained natural-image transformers for medical images. Specifically, we analyze a pre-trained vision transformer on the CheXpert and pediatric pneumonia datasets, using standard CNN models including VGGNet and ResNet as baselines. By examining the acquired representations and results, we discover that transfer learning from the pre-trained vision transformer shows improved results compared to pre-trained CNNs, which demonstrates the greater transfer ability of transformers in medical imaging.
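The simplest form of the transfer learning being analyzed is a linear probe: freeze the pre-trained backbone (CNN or vision transformer) and train only a small classifier on its extracted features. A minimal sketch, with the frozen-feature matrix as an assumed input rather than an actual backbone:

```python
import numpy as np

def linear_probe(features, labels, lr=0.5, epochs=300):
    """Train a logistic-regression head on frozen pre-trained features.

    features: (n_samples, n_dims) embeddings from a frozen backbone.
    labels: (n_samples,) binary labels. Plain gradient descent on log-loss.
    """
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # predicted probability
        grad = p - labels                               # log-loss gradient
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b
```

Full fine-tuning, as in the paper, additionally updates the backbone weights; the probe isolates how linearly separable the pre-trained representations already are.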
Affiliation(s)
- Mohammad Usman
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tehseen Zia
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Medical Imaging and Diagnostic Center, National Center for Artificial Intelligence, Islamabad, Pakistan
- Ali Tariq
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
23
Reina Reina A, Barrera JM, Valdivieso B, Gas ME, Maté A, Trujillo JC. Machine learning model from a Spanish cohort for prediction of SARS-COV-2 mortality risk and critical patients. Sci Rep 2022; 12:5723. PMID: 35388055; PMCID: PMC8986770; DOI: 10.1038/s41598-022-09613-y.
Abstract
Patients affected by SARS-COV-2 have collapsed healthcare systems around the world. Consequently, different challenges arise regarding the prediction of hospital needs, the optimization of resources, diagnostic triage tools, and patient evolution, as well as tools that allow us to analyze which factors determine the severity of patients. It is widely accepted that one of the problems since the pandemic appeared has been to detect (i) which patients were about to need the Intensive Care Unit (ICU) and (ii) which ones were not going to overcome the disease. These critical patients overwhelmed hospitals to the point that many surgeries around the world had to be cancelled. Therefore, the aim of this paper is to provide a Machine Learning (ML) model that helps us predict when a patient is about to become critical. Although we are in the era of data, there are currently few tools and solutions that help medical professionals predict the evolution of SARS-COV-2 patients in order to improve their treatment and anticipate the need for critical resources at hospitals. Moreover, most of these tools have been created from small and/or Chinese populations, which carries a high risk of bias. In this paper, we present a model, based on ML techniques, built from data on 5378 Spanish patients, from which a quality cohort of 1201 was extracted to train the model. Our model is capable of predicting the probability of death of patients with SARS-COV-2 based on the age, sex, and comorbidities of the patient. It also allows what-if analysis, with the inclusion of comorbidities that the patient may develop during the SARS-COV-2 infection. For the training of the model, we followed an agnostic approach, exploring all the comorbidities active during the SARS-COV-2 infection so that the model weights the effect of each comorbidity on the patient’s evolution according to the available data.
The model was validated using stratified cross-validation with k = 5 to mitigate class imbalance. We obtained robust results with a high hit rate: 84.16% accuracy, 83.33% sensitivity, and an Area Under the Curve (AUC) of 0.871. The main advantage of our model, beyond its high success rate, is that it can be applied to medical records to predict a patient's outcome, allowing the critical population to be identified in advance. Furthermore, it uses the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) standard; hospitals using other encodings can add an intermediate business-to-business (B2B) layer that transforms their records into the same international format.
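The validation scheme described above (stratified 5-fold cross-validation over an age/sex/comorbidity feature set) can be sketched as follows. This is an illustrative sketch only: the paper's actual model, features, and ICD-9-CM-coded cohort are not reproduced here, and the synthetic data, the `RandomForestClassifier` choice, and all parameter values are assumptions standing in for the authors' pipeline.

```python
# Hypothetical sketch of stratified k-fold (k=5) evaluation on a synthetic
# cohort; NOT the paper's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(18, 95, n),                  # age
    rng.integers(0, 2, n),                    # sex (0/1)
    rng.integers(0, 2, (n, 5)).sum(axis=1),   # comorbidity count (0-5)
])
# Synthetic binary outcome loosely tied to age and comorbidity burden
y = (0.02 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n) > 3.5).astype(int)

# Stratified folds preserve the death/survival ratio in every fold,
# which is the class-imbalance safeguard the abstract describes.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(len(scores), round(float(scores.mean()), 3))
```

Reporting the mean AUC across stratified folds, rather than a single train/test split, is what gives the headline metrics (accuracy, sensitivity, AUC) their robustness claim.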
Affiliation(s)
- Alejandro Reina Reina
- Lucentia Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain; Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain.
- José M Barrera
- Lucentia Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain; Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain.
- Bernardo Valdivieso
- The University and Polytechnic La Fe Hospital of Valencia, Avenida Fernando Abril Martorell, 106 Torre H 7a planta, 46026, Valencia, Spain.
- María-Eugenia Gas
- The University and Polytechnic La Fe Hospital of Valencia, Avenida Fernando Abril Martorell, 106 Torre H 7a planta, 46026, Valencia, Spain.
- Alejandro Maté
- Lucentia Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain; Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain.
- Juan C Trujillo
- Lucentia Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain; Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain.