1
Jaiswal A, Fervers P, Meng F, Zhang H, Móré D, Giannakis A, Wailzer J, Bucher AM, Maintz D, Kottlors J, Shahzad R, Persigehl T. Performance of AI Approaches for COVID-19 Diagnosis Using Chest CT Scans: The Impact of Architecture and Dataset. Rofo 2025. [PMID: 40300640] [DOI: 10.1055/a-2577-3928]
Abstract
AI is emerging as a promising tool for diagnosing COVID-19 on chest CT scans. The aim of this study was to compare AI models for COVID-19 diagnosis. To this end, we (1) trained three distinct AI models for classifying COVID-19 and non-COVID-19 pneumonia (nCP) using a large, clinically relevant CT dataset, (2) evaluated the models' performance on an independent test set, and (3) compared the models both algorithmically and experimentally.
In this multicenter, multi-vendor study, we collected n=1591 chest CT scans of COVID-19 (n=762) and nCP (n=829) patients from China and Germany. In Germany, the data were collected from three RACOON sites. We trained and validated three COVID-19 AI models with different architectures: COVNet based on a 2D CNN, DeCoVnet based on a 3D CNN, and AD3D-MIL based on a 3D CNN with an attention module. 991 CT scans were used for training the AI models with 5-fold cross-validation, and 600 CT scans from 6 different centers were used for independent testing. The models' performance was evaluated using accuracy (Acc), sensitivity (Se), and specificity (Sp).
The average validation accuracy of the COVNet, DeCoVnet, and AD3D-MIL models over the 5 folds was 80.9%, 82.0%, and 84.3%, respectively. On the independent test set of n=600 CT scans, COVNet yielded Acc=76.6%, Se=67.8%, Sp=85.7%; DeCoVnet Acc=75.1%, Se=61.2%, Sp=89.7%; and AD3D-MIL Acc=73.9%, Se=57.7%, Sp=90.8%.
The classification performance of the evaluated AI models depends far more on the training data than on the architecture itself. Our results demonstrate high specificity and moderate sensitivity. The AI classification models should not be used unsupervised but could assist radiologists in COVID-19 and nCP identification.
· This study compares AI approaches for diagnosing COVID-19 in chest CT scans, which is essential for further optimizing the delivery of healthcare and for pandemic preparedness.
· Our experiments on a multicenter, multi-vendor, diverse dataset show that the training data is the key factor determining diagnostic performance.
· The AI models should not be used unsupervised but as a tool to assist radiologists.
· Jaiswal A, Fervers P, Meng F et al. Performance of AI Approaches for COVID-19 Diagnosis Using Chest CT Scans: The Impact of Architecture and Dataset. Rofo 2025; DOI 10.1055/a-2577-3928.
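The accuracy, sensitivity, and specificity figures reported above follow directly from the confusion matrix of a binary COVID-19 vs. nCP classifier. A minimal sketch of that computation (the function name and toy labels are illustrative, not from the paper):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    from binary ground-truth and predicted labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # COVID-19 correctly flagged
    tn = np.sum(~y_true & ~y_pred)  # nCP correctly cleared
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    acc = (tp + tn) / len(y_true)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return acc, se, sp

# Toy example: 10 scans, 5 COVID-19 (positive) and 5 nCP (negative)
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
acc, se, sp = diagnostic_metrics(y_true, y_pred)
print(acc, se, sp)  # → 0.7 0.6 0.8
```

The high-specificity/moderate-sensitivity pattern the study reports corresponds to few false positives (high tn) at the cost of more false negatives (higher fn).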
Affiliation(s)
- Astha Jaiswal
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philipp Fervers
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Fanyang Meng
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
- Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
- Dorottya Móré
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, University of Heidelberg, Heidelberg, Germany
- Athanasios Giannakis
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, University of Heidelberg, Heidelberg, Germany
- Jasmin Wailzer
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
- Andreas Michael Bucher
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
- David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jonathan Kottlors
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philips Healthcare, Innovative Technologies, Aachen, Germany
- Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
2
Shah HP, Naqvi ASAH, Rajput P, Ambra H, Venkatesh H, Saleem J, Saravanan S, Wanjari M, Mittal G. Artificial intelligence-based deep learning algorithms for ground-glass opacity nodule detection: A review. Narra J 2025; 5:e1361. [PMID: 40352244] [PMCID: PMC12059966] [DOI: 10.52225/narra.v5i1.1361]
Abstract
Ground-glass opacities (GGOs) are hazy opacities on chest computed tomography (CT) scans that can indicate various lung diseases, including early COVID-19, pneumonia, and lung cancer. Artificial intelligence (AI) is a promising tool for analyzing medical images such as chest CT scans. The aim of this study was to evaluate the performance of AI models in detecting GGO nodules using metrics such as accuracy, sensitivity, specificity, F1 score, area under the curve (AUC), and precision. We designed a search strategy covering reports on deep learning algorithms applied to high-resolution CT scans and searched PubMed, Google Scholar, Scopus, and ScienceDirect for studies published between 2016 and 2024. Quality appraisal of the included studies was conducted using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, assessing risk of bias and applicability concerns across four domains. Two reviewers independently screened studies reporting the diagnostic ability of AI-assisted CT scans in early GGO detection, and the review results were synthesized qualitatively. Of 5,247 initially identified records, 18 studies met the inclusion criteria. Among the evaluated models, DenseNet achieved the highest accuracy of 99.48%, though its sensitivity and specificity were not reported. WOANet showed an accuracy of 98.78%, with a sensitivity of 98.37% and a specificity of 99.19%, excelling in specificity without compromising sensitivity. In conclusion, AI models can potentially detect GGOs on chest CT scans. Future research should focus on developing hybrid models that integrate various AI approaches to improve clinical applicability.
Affiliation(s)
- Harrini Venkatesh
- Sri Ramachandra Institute of Higher Education and Research (SRIHER), Chennai, India
- Mayur Wanjari
- Department of Research and Development, Datta Meghe Institute of Higher Education and Research, Wardha, India
- Gaurav Mittal
- Mahatma Gandhi Institute of Medical Sciences, Sevagram, India
3
Niu C, Lyu Q, Carothers CD, Kaviani P, Tan J, Yan P, Kalra MK, Whitlow CT, Wang G. Medical multimodal multitask foundation model for lung cancer screening. Nat Commun 2025; 16:1523. [PMID: 39934138] [PMCID: PMC11814333] [DOI: 10.1038/s41467-025-56822-w]
Abstract
Lung cancer screening (LCS) reduces mortality and involves vast multimodal data such as text, tables, and images. Fully mining such big data requires multitasking; otherwise, occult but important features may be overlooked, adversely affecting clinical management and healthcare quality. Here we propose a medical multimodal-multitask foundation model (M3FM) for three-dimensional low-dose computed tomography (CT) LCS. After curating a multimodal multitask dataset of 49 clinical data types, 163,725 chest CT series, and 17 tasks involved in LCS, we develop a scalable multimodal question-answering model architecture for synergistic multimodal multitasking. M3FM consistently outperforms state-of-the-art models, improving lung cancer risk and cardiovascular disease mortality risk prediction by up to 20% and 10%, respectively. M3FM processes multiscale high-dimensional images, handles various combinations of multimodal data, identifies informative data elements, and adapts to out-of-distribution tasks with minimal data. In this work, we show that M3FM advances various LCS tasks through large-scale multimodal and multitask learning.
Affiliation(s)
- Chuang Niu
- Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Computational Innovations, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 8th Street, Troy, 12180, NY, USA
- Qing Lyu
- Department of Radiology, Wake Forest University School of Medicine, Winston-Salem, 27103, NC, USA
- Christopher D Carothers
- Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Computational Innovations, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 8th Street, Troy, 12180, NY, USA
- Parisa Kaviani
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, White 270-E, 55 Fruit Street, Boston, 02114, MA, USA
- Josh Tan
- Department of Radiology, Wake Forest University School of Medicine, Winston-Salem, 27103, NC, USA
- Pingkun Yan
- Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Computational Innovations, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 8th Street, Troy, 12180, NY, USA
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, White 270-E, 55 Fruit Street, Boston, 02114, MA, USA
- Christopher T Whitlow
- Department of Radiology, Wake Forest University School of Medicine, Winston-Salem, 27103, NC, USA
- Ge Wang
- Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Computational Innovations, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 8th Street, Troy, 12180, NY, USA
4
Pham NT, Ko J, Shah M, Rakkiyappan R, Woo HG, Manavalan B. Leveraging deep transfer learning and explainable AI for accurate COVID-19 diagnosis: Insights from a multi-national chest CT scan study. Comput Biol Med 2025; 185:109461. [PMID: 39631112] [DOI: 10.1016/j.compbiomed.2024.109461]
Abstract
The COVID-19 pandemic has emerged as a global health crisis, impacting millions worldwide. Although chest computed tomography (CT) scan images are pivotal in diagnosing COVID-19, their manual interpretation by radiologists is time-consuming and potentially subjective. Automated computer-aided diagnostic (CAD) frameworks offer efficient and objective solutions. However, machine and deep learning methods often face reproducibility challenges due to underlying biases and methodological flaws. To address these issues, we propose XCT-COVID, an explainable, transferable, and reproducible CAD framework based on deep transfer learning that accurately predicts COVID-19 infection from CT scan images. This is the first study to develop three distinct models within a unified framework by leveraging a previously unexplored large dataset and two widely used smaller datasets. We employed five well-known convolutional neural network architectures, both with and without pretrained weights, on the larger dataset, and optimized hyperparameters through extensive grid search and 5-fold cross-validation (CV), significantly enhancing model performance. Experimental results on the larger dataset showed that the VGG16 architecture with pretrained weights (XCT-COVID-L) consistently outperformed the other architectures, achieving the best performance on both 5-fold CV and the independent test. When evaluated on the external datasets, XCT-COVID-L performed well on data with similar distributions, demonstrating its transferability. However, its performance decreased significantly on smaller datasets with lower-quality images. To address this, we developed two further models, XCT-COVID-S1 and XCT-COVID-S2, specifically for the smaller datasets; both outperform existing methods. Moreover, eXplainable Artificial Intelligence (XAI) analyses were employed to interpret the models' functionality.
For prediction and reproducibility purposes, the implementation of XCT-COVID is publicly accessible at https://github.com/cbbl-skku-org/XCT-COVID/.
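The hyperparameter search described above pairs 5-fold cross-validation with selection by mean validation score. A minimal numpy sketch of that selection logic (the fold generator and candidate learning rates are illustrative; the paper's actual search trains a full network inside each fold):

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and yield 5 (train, val) splits, as in 5-fold CV.
    Every sample lands in exactly one validation fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val

def grid_search(scores_by_lr):
    """Pick the hyperparameter (here: a learning rate) whose per-fold
    validation scores have the highest mean."""
    return max(scores_by_lr, key=lambda lr: np.mean(scores_by_lr[lr]))

# Hypothetical per-fold validation accuracies for two candidate learning rates
best = grid_search({1e-3: [0.91, 0.90, 0.92, 0.89, 0.90],
                    1e-4: [0.94, 0.93, 0.95, 0.92, 0.94]})
print(best)  # → 0.0001
```

In practice, libraries such as scikit-learn provide the same mechanics (`KFold`, `GridSearchCV`); the hand-rolled version only shows the selection rule.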
Affiliation(s)
- Nhat Truong Pham
- Department of Integrative Biotechnology, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon, 16419, Gyeonggi-do, Republic of Korea
- Jinsol Ko
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea; Department of Biomedical Science, Graduate School, Ajou University, Suwon, Republic of Korea
- Masaud Shah
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea
- Rajan Rakkiyappan
- Department of Mathematics, Bharathiar University, Coimbatore, 641046, Tamil Nadu, India
- Hyun Goo Woo
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea; Department of Biomedical Science, Graduate School, Ajou University, Suwon, Republic of Korea; Ajou Translational Omics Center (ATOC), Ajou University Medical Center, Republic of Korea
- Balachandran Manavalan
- Department of Integrative Biotechnology, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon, 16419, Gyeonggi-do, Republic of Korea
5
Ghafoori M, Hamidi M, Modegh RG, Aziz-Ahari A, Heydari N, Tavafizadeh Z, Pournik O, Emdadi S, Samimi S, Mohseni A, Khaleghi M, Dashti H, Rabiee HR. Predicting survival of Iranian COVID-19 patients infected by various variants including omicron from CT Scan images and clinical data using deep neural networks. Heliyon 2023; 9:e21965. [PMID: 38058649] [PMCID: PMC10696006] [DOI: 10.1016/j.heliyon.2023.e21965]
Abstract
Purpose: The rapid spread of the COVID-19 omicron variant has overloaded hospitals around the globe. As a result, many patients are deprived of hospital facilities, increasing mortality rates. Mortality can be reduced by efficiently assigning facilities to higher-risk patients, so it is crucial to estimate patients' survival probability from their condition at admission: the minimum required facilities can then be provided, leaving more capacity for those who need it. Although radiologic findings in chest computerized tomography (CT) scans show various patterns, it is difficult to predict patient prognosis through routine clinical or statistical analysis once individual risk factors and other underlying diseases are taken into account. Method: In this study, a deep neural network model is proposed for predicting survival based on simple clinical features, blood tests, axial CT scan images of the lungs, and the patients' planned treatment. The model's architecture combines a Convolutional Neural Network and a Long Short-Term Memory network. The model was trained on 390 survivors and 108 deceased patients from the Rasoul Akram Hospital and evaluated on 109 surviving and 36 deceased patients infected by the omicron variant. Results: The proposed model reached an accuracy of 87.5% on the test data, indicating that survival prediction is possible. This accuracy was significantly higher than that achieved by classical machine learning methods that do not use CT scan images (p-value <= 4E-5). The images were also replaced with hand-crafted features related to the ratio of infected lung lobes, used in classical machine learning models. The best such model reached an accuracy of 84.5%, considerably higher than models trained on clinical information alone (p-value <= 0.006), but still significantly below the deep model (p-value <= 0.016). Conclusion: The proposed deep model achieved higher accuracy than classical machine learning methods trained on features other than CT scan images, indicating that the images contain additional information. Meanwhile, artificial intelligence methods with multimodal inputs can be more reliable and accurate than CT severity scores.
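The hand-crafted "ratio of infected lung lobes" features mentioned above reduce a CT volume to scalar fractions. A minimal sketch of one such feature, assuming binary lung and lesion segmentation masks (the function name and masks are illustrative, not the paper's):

```python
import numpy as np

def infected_ratio(lung_mask, lesion_mask):
    """Fraction of lung voxels marked as infected -- the kind of scalar
    feature a classical ML baseline can consume instead of raw images."""
    lung = np.asarray(lung_mask, dtype=bool)
    lesion = np.asarray(lesion_mask, dtype=bool) & lung  # count lesions inside the lung only
    return lesion.sum() / max(lung.sum(), 1)             # guard against an empty mask

# Toy 2D slice: a 2x2 lung region with one infected pixel
lung = np.zeros((4, 4), dtype=bool); lung[1:3, 1:3] = True
lesion = np.zeros((4, 4), dtype=bool); lesion[1, 1] = True
print(infected_ratio(lung, lesion))  # → 0.25
```

Such ratios can be computed per lobe and concatenated with clinical features, which matches the paper's comparison of image-derived versus clinical-only inputs.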
Affiliation(s)
- Mahyar Ghafoori
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Mehrab Hamidi
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Rassa Ghavami Modegh
- Data science and Machine learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Alireza Aziz-Ahari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Neda Heydari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Zeynab Tavafizadeh
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Omid Pournik
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Sasan Emdadi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Saeed Samimi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Amir Mohseni
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Mohammadreza Khaleghi
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Hamed Dashti
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Hamid R. Rabiee
- Data science and Machine learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
6
Xu W, Nie L, Chen B, Ding W. Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis. Comput Biol Med 2023; 165:107451. [PMID: 37696184] [DOI: 10.1016/j.compbiomed.2023.107451]
Abstract
Although a series of computer-aided measures have been taken for the rapid and definite diagnosis of coronavirus disease 2019 (COVID-19), they generally fail to achieve sufficiently high accuracy, including the recently popular deep learning-based methods. The main reasons are that (a) they generally focus on improving model structures while ignoring important information contained in the medical image itself, and (b) the existing small-scale datasets struggle to meet the training requirements of deep learning. In this paper, a dual-stream network based on EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account important information in both the spatial and the frequency domains of CT scans. In addition, Adversarial Propagation (AdvProp) is used to address the shortage of training data that deep learning-based computer-aided diagnosis typically faces, as well as overfitting, and a Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms 12 existing deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.
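The spatial/frequency dual-stream idea can be illustrated by deriving the second stream from the first with a 2D FFT. A sketch under the assumption that the frequency stream is a log-magnitude spectrum (the paper feeds each stream to an EfficientNet branch and fuses them with an FPN; the function name here is illustrative):

```python
import numpy as np

def dual_stream_inputs(ct_slice):
    """Spatial stream: the slice itself. Frequency stream: log-magnitude of
    its centered 2D FFT, so both streams share the same spatial shape."""
    spatial = np.asarray(ct_slice, dtype=float)
    spectrum = np.fft.fftshift(np.fft.fft2(spatial))  # move DC component to the center
    frequency = np.log1p(np.abs(spectrum))            # compress the huge dynamic range
    return spatial, frequency

x = np.random.default_rng(0).random((8, 8))  # toy 8x8 "CT slice"
s, f = dual_stream_inputs(x)
print(s.shape, f.shape)  # → (8, 8) (8, 8)
```

Keeping both streams at the same resolution is what makes later feature fusion straightforward.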
Affiliation(s)
- Weijie Xu
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Lina Nie
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Beijing Chen
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China; Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019, China
7
Baccarelli E, Scarpiniti M, Momenzadeh A. Twinned Residual Auto-Encoder (TRAE)-A new DL architecture for denoising super-resolution and task-aware feature learning from COVID-19 CT images. Expert Syst Appl 2023; 225:120104. [PMID: 37090446] [PMCID: PMC10106117] [DOI: 10.1016/j.eswa.2023.120104]
Abstract
The detection of COronaVIrus Disease 2019 (COVID-19) from Computed Tomography (CT) scans has become a very important task in modern medical diagnosis. Unfortunately, typical resolutions of state-of-the-art CT scans are still not adequate for reliable and accurate automatic detection of COVID-19. Motivated by this consideration, we propose a novel architecture that jointly addresses the Single-Image Super-Resolution (SISR) and reliable classification problems from Low-Resolution (LR) and noisy CT scans. Specifically, the proposed architecture is based on a pair of Twinned Residual Auto-Encoders (TRAE), in which the feature vectors and the SR images recovered by a Master AE are exploited for transfer learning to improve the training of a "twinned" Follower AE. In addition, we develop a Task-Aware (TA) version of the basic TRAE architecture, the TA-TRAE, which further uses the feature vectors generated by the Follower AE to jointly train an additional auxiliary classifier, so as to perform automated medical diagnosis from the available LR input images without human support. Experimental results and comparisons with a number of state-of-the-art CNN/GAN/CycleGAN benchmark SISR architectures at ×2, ×4, and ×8 super-resolution (i.e., upscaling) factors support the effectiveness of the proposed TRAE/TA-TRAE architectures. In particular, the detection accuracy attained by the proposed architectures outperforms the implemented CNN, GAN, and CycleGAN baselines by up to 9.0%, 6.5%, and 6.0% at upscaling factors as high as ×8.
Affiliation(s)
- Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
8
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422] [PMCID: PMC10486542] [DOI: 10.3390/healthcare11172388]
Abstract
The emergence of COVID-19 in Wuhan in 2019 led to the discovery of a novel coronavirus, which the World Health Organization (WHO) designated a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly for public health, and extensive scientific effort has been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study reviews peer-reviewed research articles spanning 2020 to 2022 on AI-driven methodologies for the analysis and screening of COVID-19 from chest CT scan data, and assesses the efficacy of deep learning algorithms in supporting decision-making. Our exploration encompasses data collection, systematic contributions, emerging techniques, and encountered challenges. However, comparing outcomes between 2020 and 2022 proves intricate because dataset magnitudes shifted over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We discuss their merits and constraints, particularly the need for cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis using the search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
9
Wu Y, Dai Q, Lu H. COVID-19 diagnosis utilizing wavelet-based contrastive learning with chest CT images. Chemometr Intell Lab Syst 2023; 236:104799. [PMID: 36883063] [PMCID: PMC9981271] [DOI: 10.1016/j.chemolab.2023.104799]
Abstract
The pandemic caused by coronavirus disease 2019 (COVID-19) has continuously wreaked havoc on human health. Computer-aided diagnosis (CAD) based on chest computed tomography (CT) has become a popular option for COVID-19 diagnosis. However, because data annotation is costly in the medical field, unannotated data far outnumber annotated data, while a highly accurate CAD system typically requires a large amount of labeled training data. To resolve this tension, this paper presents an automated and accurate COVID-19 diagnosis system that uses few labeled CT images. The overall framework of the system is based on self-supervised contrastive learning (SSCL). On top of this framework, our enhancements can be summarized as follows: 1) we integrate a two-dimensional discrete wavelet transform with contrastive learning to make full use of the image features; 2) we use the recently proposed COVID-Net as the encoder, redesigned for the specific task and for learning efficiency; 3) a new pretraining strategy based on contrastive learning is applied for broader generalization ability; and 4) an additional auxiliary task is added to promote classification performance. The final system attained 93.55%, 91.59%, 96.92%, and 94.18% for accuracy, recall, precision, and F1-score, respectively. Comparisons with existing schemes demonstrate the performance enhancement and superiority of the proposed system.
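The two-dimensional discrete wavelet transform in point 1) splits each image into one approximation and three detail subbands at half the resolution. A minimal single-level Haar version in plain numpy (orthonormal Haar filters; real pipelines typically use a wavelet library):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns the approximation subband (LL)
    and three detail subbands, each half the size of the input per axis."""
    img = np.asarray(img, dtype=float)
    # The four pixels of every non-overlapping 2x2 block
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # approximation (local average, orthonormal scaling)
    lh = (a + b - c - d) / 2.0  # detail subband
    hl = (a - b + c - d) / 2.0  # detail subband
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 image
ll, lh, hl, hh = haar_dwt2(x)
print(ll.shape)  # → (2, 2)
```

Feeding these subbands alongside the raw image is one way a contrastive encoder can see both coarse structure and fine texture.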
Affiliation(s)
- Yanfu Wu
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
- Qun Dai
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
- Han Lu
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
10
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:3446. [PMID: 37240552] [DOI: 10.3390/jcm12103446]
Abstract
SARS-CoV-2 is a novel virus that has affected the global population by spreading rapidly and causing severe complications that require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could be an important and useful aid, and radiologists and clinicians could potentially rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper provides a comprehensive analysis of state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide an accurate and quick automatic tool to diagnose COVID-19 from CT scan or X-ray images. In this systematic review, we focus on the critical components of the deep learning approach: network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies from the period of the virus's spread, whose efforts we summarize. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan

11
Motta PC, Cortez PC, Silva BRS, Yang G, de Albuquerque VHC. Automatic COVID-19 and Common-Acquired Pneumonia Diagnosis Using Chest CT Scans. Bioengineering (Basel) 2023; 10:529. [PMID: 37237599] [PMCID: PMC10215490] [DOI: 10.3390/bioengineering10050529]
Abstract
Even with over 80% of the population vaccinated against COVID-19, the disease continues to claim victims. It is therefore crucial to have a reliable Computer-Aided Diagnosis system that can assist in identifying COVID-19 and determining the necessary level of care, especially in the Intensive Care Unit, where monitoring disease progression or regression matters most. To accomplish this, we merged public datasets from the literature to train lung and lesion segmentation models with five different distributions. We then trained eight CNN models for COVID-19 and Common-Acquired Pneumonia classification. If an examination was classified as COVID-19, we quantified the lesions and assessed the severity of the full CT scan. To validate the system, we used ResNeXt101 UNet++ and MobileNet UNet for lung and lesion segmentation, respectively, achieving an accuracy of 98.05%, F1-score of 98.70%, precision of 98.70%, recall of 98.70%, and specificity of 96.05%, in just 19.70 s per full CT scan, with external validation on the SPGC dataset. Finally, when classifying the detected lesions, we used DenseNet201 and achieved an accuracy of 90.47%, F1-score of 93.85%, precision of 88.42%, recall of 100.0%, and specificity of 65.07%. The results demonstrate that our pipeline can correctly detect and segment lesions due to COVID-19 and Common-Acquired Pneumonia in CT scans, can differentiate these two classes from normal exams, and is thus efficient and effective in identifying the disease and assessing its severity.
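The severity step of the pipeline described above, quantifying detected lesions relative to lung volume, can be sketched from binary segmentation masks. This is a minimal illustration; the array shapes and severity-band thresholds are invented for the example and are not the authors' values:

```python
import numpy as np

def severity_from_masks(lung_mask, lesion_mask):
    """Percentage of lung volume occupied by lesions, mapped to a coarse band.

    Masks are binary volumes (1 = lung / lesion voxel). The band thresholds
    below are illustrative assumptions, not the paper's cutoffs.
    """
    lung_voxels = int(lung_mask.sum())
    if lung_voxels == 0:
        raise ValueError("empty lung mask")
    # Only count lesion voxels that fall inside the lung.
    lesion_voxels = int((lesion_mask.astype(bool) & lung_mask.astype(bool)).sum())
    fraction = 100.0 * lesion_voxels / lung_voxels
    if fraction < 5:
        band = "mild"
    elif fraction < 25:
        band = "moderate"
    else:
        band = "severe"
    return fraction, band

# Toy 3D volume: an 8x8x8 lung region containing a 2x2x2 lesion.
lung = np.zeros((10, 10, 10), dtype=np.uint8)
lung[1:9, 1:9, 1:9] = 1            # 512 lung voxels
lesion = np.zeros_like(lung)
lesion[2:4, 2:4, 2:4] = 1          # 8 lesion voxels
frac, band = severity_from_masks(lung, lesion)
print(round(frac, 2), band)        # 8/512 = 1.56% -> "mild"
```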
Affiliation(s)
- Pedro Crosara Motta
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, Brazil
- Paulo César Cortez
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, Brazil
- Bruno R. S. Silva
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, Brazil
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Victor Hugo C. de Albuquerque
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, Brazil

12
Akinyelu AA, Bah B. COVID-19 Diagnosis in Computed Tomography (CT) and X-ray Scans Using Capsule Neural Network. Diagnostics (Basel) 2023; 13:1484. [PMID: 37189585] [DOI: 10.3390/diagnostics13081484]
Abstract
This study proposes a deep-learning-based solution (named CapsNetCovid) for COVID-19 diagnosis using a capsule neural network (CapsNet). CapsNets are robust to image rotations and affine transformations, which is advantageous when processing medical imaging datasets. This study presents a performance analysis of CapsNets on standard images and their augmented variants for binary and multi-class classification. CapsNetCovid was trained and evaluated on two COVID-19 datasets of CT and X-ray images, and additionally on eight augmented datasets. The results show that the proposed model achieved classification accuracy, precision, sensitivity, and F1-score of 99.929%, 99.887%, 100%, and 99.319%, respectively, for the CT images, and 94.721%, 93.864%, 92.947%, and 93.386%, respectively, for the X-ray images. The study also presents a comparative analysis between CapsNetCovid, CNN, DenseNet121, and ResNet50 in terms of their ability to correctly identify randomly transformed and rotated CT and X-ray images without the use of data augmentation techniques. The analysis shows that CapsNetCovid outperforms CNN, DenseNet121, and ResNet50 when trained and evaluated on CT and X-ray images without data augmentation. We hope that this research will aid in improving the decision making and diagnostic accuracy of medical professionals when diagnosing COVID-19.
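The robustness property highlighted above can be checked with a simple protocol: compare a model's predictions on an image and on rotated copies of it. The sketch below uses intensity-histogram features, which are exactly invariant under 90-degree rotations, as a toy stand-in for the rotation-robust representations CapsNets aim to learn; the classifier and its threshold are illustrative assumptions, not CapsNetCovid:

```python
import numpy as np

def histogram_features(img, bins=16):
    # An intensity histogram is unchanged by rotation, so any classifier
    # built on it is trivially rotation-robust -- a cheap stand-in for the
    # robustness CapsNets pursue on raw pixels.
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def rotation_consistency(img, predict):
    """Fraction of the four 90-degree rotations on which `predict`
    agrees with the prediction for the unrotated image."""
    base = predict(img)
    rotations = [np.rot90(img, k) for k in range(4)]
    return float(np.mean([predict(r) == base for r in rotations]))

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# A toy "classifier": thresholds the largest histogram bin.
predict = lambda im: int(histogram_features(im).max() > 0.1)
score = rotation_consistency(img, predict)
print(score)  # 1.0 -- histogram features are rotation-invariant
```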
Affiliation(s)
- Andronicus A Akinyelu
- Research Centre, African Institute for Mathematical Sciences (AIMS) South Africa, Cape Town 7945, South Africa
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Bubacarr Bah
- Research Centre, African Institute for Mathematical Sciences (AIMS) South Africa, Cape Town 7945, South Africa
- Department of Mathematical Sciences, Stellenbosch University, Cape Town 7945, South Africa

13
Lou L, Liang H, Wang Z. Deep-Learning-Based COVID-19 Diagnosis and Implementation in Embedded Edge-Computing Device. Diagnostics (Basel) 2023; 13:1329. [PMID: 37046553] [PMCID: PMC10093656] [DOI: 10.3390/diagnostics13071329]
Abstract
The rapid spread of coronavirus disease 2019 (COVID-19) has posed enormous challenges to the global public health system. To deal with the COVID-19 pandemic, more accurate and convenient patient diagnosis needs to be developed. This paper proposes a deep-learning-based COVID-19 detection method and evaluates its performance on embedded edge-computing devices. By adding an attention module and a mixed loss to the original VGG19 model, the method effectively reduces the number of model parameters while increasing classification accuracy. The improved model was first trained and tested on the PC x86 GPU platform using a large dataset (COVIDx CT-2A) and a medium dataset (integrated CT scan); the weight parameters of the model were reduced by around six times compared to the original model, yet it still achieved approximately 98.80% and 97.84% accuracy, outperforming most existing methods. The trained model was subsequently transferred to embedded NVIDIA Jetson devices (TX2, Nano), where it achieved 97% accuracy at a 0.6-1 FPS inference speed using the NVIDIA TensorRT engine. The experimental results demonstrate that the proposed method is practical and convenient and can be used on a low-cost medical edge-computing terminal. The source code is available on GitHub for researchers.
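The abstract does not specify which attention module was added to VGG19; a squeeze-and-excitation-style channel gate is one common choice, sketched below in NumPy with made-up weights and sizes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style gate over channels (an assumed module;
    the paper only says "attention module").

    x: feature map (C, H, W); w1: (C//r, C); w2: (C, C//r).
    Global-average-pool each channel, pass the result through a small
    bottleneck MLP, and rescale the channels by the resulting [0, 1] gates.
    """
    squeeze = x.mean(axis=(1, 2))                          # (C,) channel context
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates in (0, 1)
    return x * excite[:, None, None], excite

rng = np.random.default_rng(1)
C, H, W, r = 8, 4, 4, 2                 # illustrative sizes, reduction ratio r
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y, gates = channel_attention(x, w1, w2)
print(y.shape, gates.shape)
```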
Affiliation(s)
- Lu Lou
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Hong Liang
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Zhengxia Wang
- School of Computer Science and Technology, Hainan University, Haikou 570100, China

14
Fan X, Feng X. SELDNet: Sequenced encoder and lightweight decoder network for COVID-19 infection region segmentation. Displays 2023; 77:102395. [PMID: 36818573] [PMCID: PMC9927817] [DOI: 10.1016/j.displa.2023.102395]
Abstract
Segmenting regions of lung infection from computed tomography (CT) images shows excellent potential for rapidly and accurately quantifying Coronavirus disease 2019 (COVID-19) infection and determining disease development and treatment approaches. However, a number of challenges remain, including the complexity of imaging features and their variability with disease progression, as well as high similarity to other lung diseases, which makes feature extraction difficult. To address these challenges, we propose SELDNet, a new medical image segmentation model with a sequenced encoder and a lightweight decoder. (i) The sequence encoder and lightweight decoder are built on a Transformer and depthwise separable convolutions, respectively, to achieve different fine-grained feature extraction. (ii) A semantic association module based on a cross-attention mechanism between encoder and decoder enhances the fusion of semantics at different levels. Experimental results showed that the network can effectively segment COVID-19-infected regions: the Dice score of the segmentation result was 79.1%, sensitivity was 76.3%, and specificity was 96.7%. Compared with several state-of-the-art image segmentation models, the proposed SELDNet achieves better results on the task of segmenting COVID-19-infected regions.
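The parameter savings that make a depthwise-separable-convolution decoder "lightweight" can be seen from a simple count. The channel sizes below are illustrative, not SELDNet's actual configuration:

```python
def conv_params(c_in, c_out, k):
    # Standard 2D convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_params(c_in, c_out, k):
    # Depthwise separable convolution = depthwise stage (one k x k filter per
    # input channel) followed by a 1x1 pointwise convolution mixing channels.
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 256, 256, 3            # illustrative layer sizes
std = conv_params(c_in, c_out, k)       # 589,824 weights
sep = separable_params(c_in, c_out, k)  # 2,304 + 65,536 = 67,840 weights
print(std, sep, round(std / sep, 1))    # roughly 8.7x fewer parameters
```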
Affiliation(s)
- Xiaole Fan
- College of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Xiufang Feng
- College of Software, Taiyuan University of Technology, Taiyuan 030024, China

15
Song J, Ebadi A, Florea A, Xi P, Tremblay S, Wong A. COVID-Net USPro: An Explainable Few-Shot Deep Prototypical Network for COVID-19 Screening Using Point-of-Care Ultrasound. Sensors (Basel) 2023; 23:2621. [PMID: 36904833] [PMCID: PMC10007046] [DOI: 10.3390/s23052621]
Abstract
As Coronavirus Disease 2019 (COVID-19) continues to impact many aspects of life and global healthcare systems, the adoption of rapid and effective screening methods to prevent further spread of the virus and lessen the burden on healthcare providers is a necessity. As a cheap and widely accessible medical imaging modality, point-of-care ultrasound (POCUS) allows radiologists to identify symptoms and assess severity through visual inspection of chest ultrasound images. Combined with recent advancements in computer science, applications of deep learning in medical image analysis have shown promising results, demonstrating that AI-based solutions can accelerate COVID-19 diagnosis and lower the burden on healthcare professionals. However, the lack of large, well-annotated datasets poses a challenge in developing effective deep neural networks, especially for rare diseases and new pandemics. To address this issue, we present COVID-Net USPro, an explainable few-shot deep prototypical network designed to detect COVID-19 cases from very few ultrasound images. Through intensive quantitative and qualitative assessment, the network not only demonstrates high performance in identifying COVID-19-positive cases but, through an explainability component, is also shown to make decisions based on representative patterns of the disease. Specifically, COVID-Net USPro achieves 99.55% overall accuracy, 99.93% recall, and 99.83% precision for COVID-19-positive cases when trained with only five shots. In addition to the quantitative performance assessment, our contributing clinician with extensive experience in POCUS interpretation verified the analytic pipeline and results, ensuring that the network's decisions are based on clinically relevant image patterns integral to COVID-19 diagnosis. We believe that network explainability and clinical validation are integral to the successful adoption of deep learning in the medical field. As part of the COVID-Net initiative, and to promote reproducibility and foster further innovation, the network is open-sourced and available to the public.
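The core prototypical-network step, computing class prototypes from a handful of support embeddings and assigning queries to the nearest one, can be sketched as follows. The embedding dimension, class centers, and noise level are synthetic assumptions, not COVID-Net USPro's learned encoder:

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(queries, protos):
    """Assign each query embedding to the nearest prototype (Euclidean)."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
dim, shots = 16, 5                       # a 5-shot episode with 2 classes
centers = np.stack([np.zeros(dim), np.full(dim, 3.0)])  # well-separated classes
support = np.concatenate(
    [centers[c] + 0.1 * rng.standard_normal((shots, dim)) for c in range(2)])
labels = np.repeat([0, 1], shots)
protos = prototypes(support, labels, 2)
queries = np.stack([centers[0] + 0.1 * rng.standard_normal(dim),
                    centers[1] + 0.1 * rng.standard_normal(dim)])
print(classify(queries, protos))  # [0 1]
```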
Affiliation(s)
- Jessy Song
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ashkan Ebadi
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Digital Technologies Research Centre, National Research Council Canada, Toronto, ON M5T 3J1, Canada
- Adrian Florea
- Department of Emergency Medicine, McGill University, Montreal, QC H4A 3J1, Canada
- Pengcheng Xi
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Digital Technologies Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6, Canada
- Stéphane Tremblay
- Digital Technologies Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Waterloo Artificial Intelligence Institute, Waterloo, ON N2L 3G1, Canada

16
Aslani S, Jacob J. Utilisation of deep learning for COVID-19 diagnosis. Clin Radiol 2023; 78:150-157. [PMID: 36639173] [PMCID: PMC9831845] [DOI: 10.1016/j.crad.2022.11.006]
Abstract
The COVID-19 pandemic that began in 2019 has resulted in millions of deaths worldwide. Over this period, the economic and healthcare consequences of COVID-19 infection in survivors of acute COVID-19 infection have become apparent. During the course of the pandemic, computer analysis of medical images and data has been widely used by the medical research community. In particular, deep-learning methods, which are artificial intelligence (AI)-based approaches, have been frequently employed. This paper provides a review of deep-learning-based AI techniques for COVID-19 diagnosis using chest radiography and computed tomography. Thirty papers published from February 2020 to March 2022 that used two-dimensional (2D)/three-dimensional (3D) deep convolutional neural networks combined with transfer learning for COVID-19 detection were reviewed. The review describes how deep-learning methods detect COVID-19, and several limitations of the proposed methods are highlighted.
Affiliation(s)
- S Aslani
- Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK
- J Jacob
- Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK

17
Xu Y, Lam HK, Jia G, Jiang J, Liao J, Bao X. Improving COVID-19 CT classification of CNNs by learning parameter-efficient representation. Comput Biol Med 2023; 152:106417. [PMID: 36543003] [PMCID: PMC9750504] [DOI: 10.1016/j.compbiomed.2022.106417]
Abstract
The COVID-19 pandemic continues to spread rapidly over the world and has caused a tremendous crisis in global human health and the economy. Its early detection and diagnosis are crucial for controlling further spread. Many deep learning-based methods have been proposed to assist clinicians in automatic COVID-19 diagnosis based on computed tomography imaging. However, challenges remain, including low data diversity in existing datasets and unsatisfactory detection performance resulting from the insufficient accuracy and sensitivity of deep learning models. To enhance data diversity, we design augmentation techniques of incremental levels and apply them to the largest open-access benchmark dataset, COVIDx CT-2A. Meanwhile, similarity regularization (SR) derived from contrastive learning is proposed in this study to enable CNNs to learn more parameter-efficient representations, thus improving their accuracy and sensitivity. The results on seven commonly used CNNs demonstrate that CNN performance can be improved stably by applying the designed augmentation and SR techniques. In particular, DenseNet121 with SR achieves an average test accuracy of 99.44% over three trials for three-category classification (normal, non-COVID-19 pneumonia, and COVID-19 pneumonia). The precision, sensitivity, and specificity achieved for the COVID-19 pneumonia category are 98.40%, 99.59%, and 99.50%, respectively. These statistics suggest that our method surpasses the existing state-of-the-art methods on the COVIDx CT-2A dataset. Source code is available at https://github.com/YujiaKCL/COVID-CT-Similarity-Regularization.
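The abstract does not give the exact form of the similarity regularization; one plausible contrastive-learning-derived form penalizes cosine dissimilarity between embeddings of two augmented views of the same scan. The sketch below is that assumed form, not necessarily the paper's:

```python
import numpy as np

def l2_normalize(z, eps=1e-8):
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)

def similarity_regularizer(z_a, z_b):
    """Mean (1 - cosine similarity) between paired embeddings of two
    augmented views of the same scans. In training this term would be
    added to the classification loss with a weighting coefficient;
    this exact form is an assumption for illustration.
    """
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    return float(np.mean(1.0 - (z_a * z_b).sum(axis=1)))

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 32))                 # 4 embeddings of dim 32
aligned = similarity_regularizer(z, z)           # identical views: ~0 penalty
perturbed = similarity_regularizer(z, z + 0.1 * rng.standard_normal((4, 32)))
print(aligned, perturbed)  # near zero for identical views, positive otherwise
```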
Affiliation(s)
- Yujia Xu
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Hak-Keung Lam
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Guangyu Jia
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Jian Jiang
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Junkai Liao
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Xinqi Bao
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom

18
Banerjee A, Sarkar A, Roy S, Singh PK, Sarkar R. COVID-19 chest X-ray detection through blending ensemble of CNN snapshots. Biomed Signal Process Control 2022; 78:104000. [PMID: 35855489] [PMCID: PMC9283670] [DOI: 10.1016/j.bspc.2022.104000]
Abstract
The COVID-19 pandemic has turned out to be one of the deadliest events in modern history, with unprecedented loss of human life and major economic and financial setbacks. Detection of the COVID-19 virus has become increasingly difficult due to the mutating nature of the virus and the rise in asymptomatic cases. To counteract this and contribute to research efforts toward more accurate screening of COVID-19, we propose an ensemble methodology for deep learning models to detect COVID-19 from chest X-rays (CXRs), assisting Computer-Aided Detection (CADe) for medical practitioners. We leverage transfer learning for Convolutional Neural Networks (CNNs), widely adopted in recent literature, and further propose an efficient ensemble network for their combination. The DenseNet-201 architecture is trained only once to generate multiple snapshots, offering diverse information about the features extracted from CXRs. We follow a decision-level fusion strategy, combining the decision scores via a blending algorithm with a Random Forest (RF) meta-learner. Experimental results confirm the efficacy of the proposed ensemble method on two open-access COVID-19 CXR datasets: on the large COVID-X dataset the proposed model achieves an accuracy of 94.55%, and on the smaller dataset by Chowdhury et al. it achieves 98.13%.
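The decision-level blending step can be sketched as follows. The paper's meta-learner is a Random Forest (e.g. scikit-learn's RandomForestClassifier in practice); to stay dependency-free, this sketch substitutes a logistic meta-learner fit on held-out snapshot decision scores, and the synthetic scores are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_blender(snapshot_scores, y, lr=0.5, steps=500):
    """Fit a logistic-regression meta-learner on snapshot decision scores.

    snapshot_scores: (n_samples, n_snapshots) scores from the CNN snapshots;
    y: binary labels. A stand-in for the paper's Random Forest blender.
    """
    n, m = snapshot_scores.shape
    w, b = np.zeros(m), 0.0
    for _ in range(steps):
        p = sigmoid(snapshot_scores @ w + b)
        g = p - y                                # gradient of the log-loss
        w -= lr * snapshot_scores.T @ g / n
        b -= lr * g.mean()
    return w, b

def blend_predict(snapshot_scores, w, b):
    return (sigmoid(snapshot_scores @ w + b) >= 0.5).astype(int)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
# Three noisy "snapshots": each score correlates with the true label.
scores = y[:, None] + rng.standard_normal((200, 3)) * [0.8, 1.0, 1.2]
w, b = fit_blender(scores, y)
acc = (blend_predict(scores, w, b) == y).mean()
print(round(acc, 2))  # blended accuracy on the synthetic scores
```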
Affiliation(s)
- Avinandan Banerjee
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata 700106, West Bengal, India
- Arya Sarkar
- Department of Computer Science, University of Engineering and Management, University Area, Plot No. III - B/5, New Town, Action Area - III, Kolkata 700160, West Bengal, India
- Sayantan Roy
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata 700106, West Bengal, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata 700106, West Bengal, India
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata 700032, West Bengal, India

19
Lee JRH, Pavlova M, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging 2022; 22:143. [PMID: 35945505] [PMCID: PMC9364616] [DOI: 10.1186/s12880-022-00871-w]
Abstract
Background: Skin cancer continues to be the most frequently diagnosed form of cancer in the U.S., with significant effects not only on health and well-being but also on the economic costs of treatment. A crucial step in the treatment and management of skin cancer is effective early detection through key screening approaches such as dermoscopy examination, leading to stronger recovery prognoses. Motivated by advances in deep learning and inspired by open-source initiatives in the research community, in this study we introduce Cancer-Net SCa, a suite of deep neural network designs tailored for the detection of skin cancer from dermoscopy images that is open source and available to the general public. To the best of the authors' knowledge, Cancer-Net SCa comprises the first machine-driven designs of deep neural network architectures tailored specifically for skin cancer detection, one of which leverages attention condensers for an efficient self-attention design.
Results: We investigate and audit the behaviour of Cancer-Net SCa in a responsible and transparent manner through explainability-driven performance validation. All the proposed designs achieved improved accuracy compared to the ResNet-50 architecture while also achieving significantly reduced architectural and computational complexity. In addition, evaluation of the networks' decision-making process shows that diagnostically relevant critical factors are leveraged rather than irrelevant visual indicators and imaging artifacts.
Conclusion: The proposed Cancer-Net SCa designs achieve strong skin cancer detection performance on the International Skin Imaging Collaboration (ISIC) dataset, while providing a strong balance between computational and architectural efficiency and accuracy. While Cancer-Net SCa is not a production-ready screening solution, the hope is that its release in open-source, open-access form will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon it.
Affiliation(s)
- James Ren Hou Lee
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada
- Maya Pavlova
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada
- DarwinAI Corp, Waterloo, Canada
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada
- DarwinAI Corp, Waterloo, Canada

20
Gomes R, Kamrowski C, Langlois J, Rozario P, Dircks I, Grottodden K, Martinez M, Tee WZ, Sargeant K, LaFleur C, Haley M. A Comprehensive Review of Machine Learning Used to Combat COVID-19. Diagnostics (Basel) 2022; 12:1853. [PMID: 36010204] [PMCID: PMC9406981] [DOI: 10.3390/diagnostics12081853]
Abstract
Coronavirus disease (COVID-19) has had a significant impact on global health since the start of the pandemic in 2019. As of June 2022, over 539 million cases had been confirmed worldwide, with over 6.3 million deaths as a result. Artificial Intelligence (AI) solutions such as machine learning and deep learning have played a major part in this pandemic, in both the diagnosis and treatment of COVID-19. In this research, we review these modern tools deployed to solve a variety of complex problems. We explore research that focused on analyzing medical images using AI models for identification, classification, and tissue segmentation of the disease. We also explore prognostic models developed to predict health outcomes and optimize the allocation of scarce medical resources, as well as longitudinal studies conducted to better understand COVID-19 and its effects on patients over time. This comprehensive review of the different AI methods and modeling efforts sheds light on the role AI has played, and the directions it may take, in the fight against COVID-19.
Affiliation(s)
- Rahul Gomes
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Connor Kamrowski
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Jordan Langlois
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Papia Rozario
- Department of Geography and Anthropology, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Ian Dircks
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Keegan Grottodden
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Matthew Martinez
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Wei Zhong Tee
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Kyle Sargeant
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Corbin LaFleur
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Mitchell Haley
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA

21
Pavlova M, Terhljan N, Chung AG, Zhao A, Surana S, Aboutalebi H, Gunraj H, Sabri A, Alaref A, Wong A. COVID-Net CXR-2: An Enhanced Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest X-ray Images. Front Med (Lausanne) 2022; 9:861680. [PMID: 35755067] [PMCID: PMC9226387] [DOI: 10.3389/fmed.2022.861680]
Abstract
As the COVID-19 pandemic continues to devastate globally, the use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow, given its routine clinical use for respiratory complaints. As part of the COVID-Net open-source initiative, we introduce COVID-Net CXR-2, an enhanced deep convolutional neural network design for COVID-19 detection from CXR images built using a greater quantity and diversity of patients than the original COVID-Net. We also introduce a new benchmark dataset composed of 19,203 CXR images from a multinational cohort of 16,656 patients from at least 51 countries, making it the largest, most diverse COVID-19 CXR dataset in open-access form. The COVID-Net CXR-2 network achieves a sensitivity of 95.5% and positive predictive value of 97.0%, and was audited in a transparent and responsible manner. Explainability-driven performance validation was used during auditing to gain deeper insight into its decision-making behavior and to ensure clinically relevant factors are leveraged, improving trust in its usage. Radiologist validation was also conducted, in which select cases were reviewed and reported on by two board-certified radiologists with over 10 and 19 years of experience, respectively, showing that the critical factors leveraged by COVID-Net CXR-2 are consistent with radiologist interpretations.
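The two reported metrics follow directly from confusion-matrix counts. The counts below are invented so the arithmetic reproduces the reported 95.5% sensitivity and 97.0% positive predictive value; they are not the paper's actual confusion matrix:

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity (recall) and positive predictive value (precision)
    from confusion-matrix counts, the two metrics reported for
    COVID-Net CXR-2."""
    sensitivity = tp / (tp + fn)   # true positives / all actual positives
    ppv = tp / (tp + fp)           # true positives / all predicted positives
    return sensitivity, ppv

# Illustrative counts chosen to reproduce the reported percentages:
se, ppv = sensitivity_ppv(tp=191, fp=6, fn=9)
print(round(100 * se, 1), round(100 * ppv, 1))  # 95.5 97.0
```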
Affiliation(s)
- Maya Pavlova
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Naomi Terhljan
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Audrey G. Chung
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
- Andy Zhao
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Siddharth Surana
- Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hossein Aboutalebi
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hayden Gunraj
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Ali Sabri
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Niagara Health System, St. Catharines, ON, Canada
- Amer Alaref
- Department of Diagnostic Imaging, Northern Ontario School of Medicine, Thunder Bay, ON, Canada
- Department of Diagnostic Radiology, Thunder Bay Regional Health Sciences Centre, Thunder Bay, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada

22
Hassan H, Ren Z, Zhou C, Khan MA, Pan Y, Zhao J, Huang B. Supervised and weakly supervised deep learning models for COVID-19 CT diagnosis: A systematic review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 218:106731. [PMID: 35286874 PMCID: PMC8897838 DOI: 10.1016/j.cmpb.2022.106731] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/28/2022] [Accepted: 03/03/2022] [Indexed: 05/05/2023]
Abstract
Artificial intelligence (AI) and computer vision (CV) methods have become reliable means of extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore surveys the extensive deep learning-based research on COVID-19 computed tomography (CT) imaging diagnosis, providing a baseline for future research. Compared to previous review articles on the topic, this study organizes the collected literature quite differently, in a multi-level arrangement. For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted extensively for COVID-19 CT diagnosis compared to supervised learning. Weakly supervised (conventional transfer learning) techniques can be used effectively in real-time clinical practice by reusing sophisticated pretrained features rather than over-parameterizing standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and improving model efficacy. Because deep learning (AI) based models are mainly utilized for disease management and control, this review helps readers comprehend the deep learning approaches relevant to in-progress COVID-19 CT diagnosis research.
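The "conventional transfer learning" the review highlights amounts to reusing a frozen feature extractor and fitting only a lightweight classifier head. A minimal sketch of that split, assuming a fixed random projection as a stand-in for a pretrained backbone and toy data in place of CT-derived inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random feature map
# (a real pipeline would reuse, e.g., CNN features pretrained elsewhere).
W_backbone = rng.normal(size=(64, 16)) / np.sqrt(64)

def features(x: np.ndarray) -> np.ndarray:
    """Frozen feature extractor: these weights are never updated."""
    return np.tanh(x @ W_backbone)

# Toy two-class data standing in for CT-derived inputs.
x0 = rng.normal(loc=-1.0, scale=0.5, size=(100, 64))
x1 = rng.normal(loc=+1.0, scale=0.5, size=(100, 64))
X = np.vstack([x0, x1])
y = np.array([0] * 100 + [1] * 100)

# Train only the lightweight head (logistic regression, gradient descent).
F = features(X)
w, b = np.zeros(16), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.1 * (F.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

acc = float(np.mean(((F @ w + b) > 0) == y))
print(f"head training accuracy: {acc:.2f}")
```

Only the 17 head parameters are fit, which is the point the review makes about avoiding over-parameterization when labeled data is scarce.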
Affiliation(s)
- Haseeb Hassan
  - College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
  - College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
- Zhaoyu Ren
  - College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
- Chengmin Zhou
  - College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
- Muazzam A Khan
  - Department of Computer Sciences, Quaid-i-Azam University, Islamabad, Pakistan
- Yi Pan
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Jian Zhao
  - College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
- Bingding Huang
  - College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China

23
Scarpiniti M, Sarv Ahrabi S, Baccarelli E, Piazzo L, Momenzadeh A. A novel unsupervised approach based on the hidden features of Deep Denoising Autoencoders for COVID-19 disease detection. EXPERT SYSTEMS WITH APPLICATIONS 2022; 192:116366. [PMID: 34937995 PMCID: PMC8675154 DOI: 10.1016/j.eswa.2021.116366] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 10/15/2021] [Accepted: 11/30/2021] [Indexed: 05/02/2023]
Abstract
Chest imaging can be a powerful tool for detecting Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult for human inspection to rapidly identify anomalous areas in CT images caused by COVID-19. Hence, suitable automatic algorithms become necessary that can quickly and precisely identify the disease, ideally using few labeled input data, because large amounts of CT scans are not usually available for COVID-19. The method proposed in this paper exploits the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on target CT scans in an unsupervised way, is used to build a robust statistical representation in the form of a target histogram. A suitable statistical distance then measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if the distance exceeds a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, reaching a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
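The histogram-and-distance logic described in the abstract can be sketched compactly. This is not the authors' implementation: the DDCAE encoder is replaced by a trivial stand-in feature map, and the Jensen-Shannon distance, binning, and threshold value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def hidden_features(scan: np.ndarray) -> np.ndarray:
    # Stand-in for the trained DDCAE encoder (an assumption for this
    # sketch): any mapping from a scan to a feature vector fits here.
    return scan.ravel()

def histogram(feats: np.ndarray, bins: np.ndarray) -> np.ndarray:
    counts, _ = np.histogram(feats, bins=bins)
    p = counts.astype(float) + 1e-9      # smooth empty bins
    return p / p.sum()

def js_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon distance, one possible choice of statistical distance."""
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

bins = np.linspace(-4.0, 8.0, 33)

# Target histogram: hidden features of reference scans pooled together.
reference = [hidden_features(rng.normal(0, 1, (32, 32))) for _ in range(20)]
target = histogram(np.concatenate(reference), bins)

def is_anomaly(scan: np.ndarray, threshold: float = 0.25) -> bool:
    """Flag a scan whose feature histogram is far from the target one."""
    return js_distance(target, histogram(hidden_features(scan), bins)) > threshold

d_norm = js_distance(target, histogram(hidden_features(rng.normal(0, 1, (32, 32))), bins))
d_anom = js_distance(target, histogram(hidden_features(rng.normal(3, 1, (32, 32))), bins))
print(f"distance, in-distribution scan:     {d_norm:.3f}")
print(f"distance, shifted-statistics scan:  {d_anom:.3f}")
```

In the paper the threshold would be calibrated on held-out reference scans rather than fixed by hand.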
Affiliation(s)
- Michele Scarpiniti
  - Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Sima Sarv Ahrabi
  - Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Enzo Baccarelli
  - Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Lorenzo Piazzo
  - Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Alireza Momenzadeh
  - Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy

24
Nazir T, Nawaz M, Javed A, Malik KM, Saudagar AKJ, Khan MB, Abul Hasanat MH, AlTameem A, AlKathami M. COVID-DAI: A novel framework for COVID-19 detection and infection growth estimation using computed tomography images. Microsc Res Tech 2022; 85:2313-2330. [PMID: 35194866 PMCID: PMC9088346 DOI: 10.1002/jemt.24088] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 02/01/2022] [Accepted: 02/14/2022] [Indexed: 12/18/2022]
Abstract
The COVID-19 pandemic is spreading at a fast pace around the world and has a high mortality rate. There is no proper treatment for COVID-19, and its multiple variants (e.g., Alpha, Beta, Gamma, and Delta), being more infectious in nature and affecting millions of people, further complicate the detection process, putting victims at risk of death. Timely and accurate diagnosis of this deadly virus can not only save patients' lives but also spare them complex treatment procedures. Accurate segmentation and classification of COVID-19 is a tedious job due to the extensive variation in its shape and its similarity to other diseases such as pneumonia. Furthermore, existing techniques have hardly focused on estimating infection growth over time, which could assist doctors in better analyzing the condition of COVID-19-affected patients. In this work, we address the shortcomings of existing studies by proposing a model capable of segmenting and classifying COVID-19 from computed tomography images and predicting its behavior over a certain period. The framework comprises four main steps: (i) data preparation, (ii) segmentation, (iii) infection growth estimation, and (iv) classification. After the pre-processing step, we introduce a DenseNet-77-based UNET approach. The DenseNet-77 is first used in the encoder module of the UNET model to compute deep keypoints, which are then segmented to delineate the coronavirus region. Next, the infection growth of COVID-19 per patient is estimated using blob analysis. Finally, we employ the DenseNet-77 framework as an end-to-end network to classify the input images into three classes: healthy, COVID-19-affected, and pneumonia. We evaluated the proposed model on the COVID-19-20 and COVIDx CT-2A datasets for the segmentation and classification tasks, respectively. Furthermore, unlike existing techniques, we performed a cross-dataset evaluation to show the generalization ability of our method. The quantitative and qualitative evaluation confirms that our method is robust for both COVID-19 segmentation and classification and can accurately predict infection growth over a given time frame.
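The blob-analysis step for infection growth estimation can be illustrated on toy binary lesion masks. This is a hedged sketch, not the authors' code: connected components are labelled with a simple BFS, and growth is taken as the relative change in total lesion area between two time points:

```python
import numpy as np
from collections import deque

def label_blobs(mask: np.ndarray):
    """4-connected component labelling of a binary mask (simple BFS)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        current += 1
        labels[r, c] = current
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def infection_growth(mask_t0: np.ndarray, mask_t1: np.ndarray) -> float:
    """Relative change in total lesion area between two time points."""
    a0, a1 = int(mask_t0.sum()), int(mask_t1.sum())
    return (a1 - a0) / a0 if a0 else float('inf')

# Toy segmentation masks (True = lesion pixel) at two time points.
t0 = np.zeros((8, 8), dtype=bool)
t0[1:3, 1:3] = True                 # one 4-pixel blob
t1 = t0.copy()
t1[1:4, 1:4] = True                 # blob grew to 9 pixels
t1[6, 6] = True                     # a new, separate blob appeared
_, n_blobs = label_blobs(t1.astype(int))
print(n_blobs, infection_growth(t0, t1))   # -> 2 1.5
```

In the paper's pipeline, the masks would come from the DenseNet-77/UNET segmentation rather than being hand-crafted.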
Affiliation(s)
- Tahira Nazir
  - Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Marriam Nawaz
  - Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Ali Javed
  - Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Khalid Mahmood Malik
  - Department of Computer Science and Engineering, Oakland University, Rochester, Michigan, USA
- Abdul Khader Jilani Saudagar
  - Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Muhammad Badruddin Khan
  - Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mozaherul Hoque Abul Hasanat
  - Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Abdullah AlTameem
  - Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mohammad AlKathami
  - Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia