1. Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. PMID: 38667979; PMCID: PMC11050909; DOI: 10.3390/jimaging10040081.
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review developments of CV in hospital, outpatient, and community settings. We highlight recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of hospital workload, and monitoring of patient events outside the hospital. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline the processes of algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for their expanded use in healthcare.
Affiliations
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
2. Kaleem S, Sohail A, Tariq MU, Babar M, Qureshi B. Ensemble learning for multi-class COVID-19 detection from big data. PLoS One 2023; 18:e0292587. PMID: 37819992; PMCID: PMC10566742; DOI: 10.1371/journal.pone.0292587.
Abstract
Coronavirus disease (COVID-19), which caused a global pandemic, continues to have severe effects on human lives worldwide. Characterized by symptoms similar to those of pneumonia, its rapid spread requires innovative strategies for early detection and management. In response to this crisis, data science and machine learning (ML) offer crucial solutions to complex problems, including those posed by COVID-19. One cost-effective way to detect the disease is chest X-ray imaging, a common initial testing method. Although existing techniques are useful for detecting COVID-19 from X-rays, there is a need for further improvement in efficiency, particularly in training and execution time. This article introduces an advanced architecture that leverages ensemble learning for COVID-19 detection from chest X-ray images. Using a parallel and distributed framework, the proposed model integrates ensemble learning with big data analytics to facilitate parallel processing. This approach aims to improve both execution and training times, ensuring a more effective detection process. The model's efficacy was validated through a comprehensive analysis of predicted and actual values, and its performance was evaluated for accuracy, precision, recall, and F-measure and compared with state-of-the-art models. The work presented here not only contributes to the ongoing fight against COVID-19 but also showcases the wider applicability and potential of ensemble learning techniques in healthcare.
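The combining step of such an ensemble can be sketched in a few lines. The soft-voting averager below is a generic illustration only, not the paper's architecture: the probability matrices are mock values standing in for the outputs of already-trained base classifiers, and the three-class labeling is assumed for illustration.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Combine per-model class-probability matrices (n_samples x n_classes)
    by weighted averaging, then pick the argmax class."""
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                    # normalize model weights
    avg = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return avg.argmax(axis=1), avg

# Mock outputs of three base models on 4 chest X-rays, 3 classes
# (e.g. normal / pneumonia / COVID-19 -- illustrative values only).
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7], [0.5, 0.3, 0.2]])
p3 = np.array([[0.8, 0.1, 0.1], [0.3, 0.5, 0.2], [0.2, 0.2, 0.6], [0.2, 0.6, 0.2]])
labels, avg = soft_vote([p1, p2, p3])
```

In a parallel framework, each base model's prediction pass can run on a separate worker, with only the small probability matrices gathered for the voting step.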
Affiliations
- Sarah Kaleem
- Department of Computing and Technology, Iqra University, Islamabad, Pakistan
- Muhammad Usman Tariq
- Abu Dhabi University, Abu Dhabi, UAE
- Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Malaysia
- Muhammad Babar
- Robotics and Internet of Things Lab, Prince Sultan University, Riyadh, Saudi Arabia
- Basit Qureshi
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
3. Park JH, Moon HS, Jung HI, Hwang J, Choi YH, Kim JE. Deep learning and clustering approaches for dental implant size classification based on periapical radiographs. Sci Rep 2023; 13:16856. PMID: 37803022; PMCID: PMC10558577; DOI: 10.1038/s41598-023-42385-7.
Abstract
This study investigated two artificial intelligence (AI) methods for automatically classifying dental implant diameter and length from periapical radiographs. The first method, deep learning (DL), utilized the pre-trained VGG16 model, adjusting the degree of fine-tuning to analyze image data obtained from periapical radiographs. The second method, clustering analysis, analyzed an implant-specific feature vector derived from the coordinates of three key points of the dental implant using the k-means++ algorithm, adjusting the weights of the feature vector. The DL and clustering models classified dental implant size into nine groups. The performance metrics of the AI models were accuracy, sensitivity, specificity, F1-score, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC-ROC). The final DL model yielded performances above 0.994, 0.950, 0.994, 0.974, 0.952, 0.994, and 0.975, respectively, and the final clustering model yielded performances above 0.983, 0.900, 0.988, 0.923, 0.909, 0.988, and 0.947, respectively. Comparing the AI models before tuning with the final AI models, statistically significant performance improvements based on AUC-ROC were observed in six of nine groups for the DL models and four of nine groups for the clustering models. Both AI models showed reliable classification performance. For clinical application, the AI models require validation on multicenter data.
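The clustering pipeline described here, k-means++ seeding plus feature weighting, can be sketched without any ML library. The `weighted_kmeans` helper below is a minimal illustration under assumed conditions: the 2-D synthetic "feature vectors" and the weight values are invented, not the paper's implant key-point features.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: pick each new center with probability
    proportional to its squared distance from existing centers."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def weighted_kmeans(X, k, feature_weights, iters=50, seed=0):
    """Lloyd's algorithm on feature-weighted vectors, k-means++ init."""
    rng = np.random.default_rng(seed)
    Xw = X * np.asarray(feature_weights)       # scale feature importance
    C = kmeans_pp_init(Xw, k, rng)
    for _ in range(iters):
        labels = ((Xw[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.array([Xw[labels == j].mean(axis=0) for j in range(k)])
    return labels, C

# Two well-separated synthetic clusters of "feature vectors"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
               rng.normal(5.0, 0.1, size=(20, 2))])
labels, centers = weighted_kmeans(X, 2, feature_weights=[1.0, 0.5])
```

Adjusting `feature_weights` changes which coordinates dominate the distance computation, which is the lever the study tunes when weighting the implant feature vector.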
Affiliations
- Ji-Hyun Park
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hong Seok Moon
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hoi-In Jung
- Department of Preventive Dentistry and Public Oral Health, Yonsei University College of Dentistry, Seoul, 03722, Korea
- JaeJoon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Dental Research Institute, Pusan National University, Busan, 50612, Korea
- Yoon-Ho Choi
- School of Computer Science and Engineering, Pusan National University, Busan, 46241, Korea
- Jong-Eun Kim
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
4. Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. PMID: 36863192; DOI: 10.1016/j.compbiomed.2023.106668.
Abstract
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. Despite these outstanding results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This understanding is of utmost importance in the regulated healthcare domain to increase practitioners', patients', and other stakeholders' trust in automated diagnosis systems. The application of deep learning to medical imaging must be interpreted with caution due to health and safety concerns, similar to blame attribution in an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters, and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help us understand model predictions, which in turn develops trust in the system, accelerates disease diagnosis, and meets regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and provide future directions for XAI that would be of interest to clinicians, regulators, and model developers.
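One simple, model-agnostic explanation technique of the kind such surveys categorize is occlusion sensitivity: slide a blank patch over the image and record how much the prediction score drops. The sketch below uses a hypothetical toy predictor (a function that scores only the top-left quadrant), not a real DNN, so the "explanation" is easy to verify by eye.

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Model-agnostic saliency: occlude each patch in turn and record
    how much the model's score drops relative to the clean image."""
    h, w = image.shape
    ref = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - predict(occluded)
    return heat

# Toy "classifier": score = mean intensity of the top-left quadrant,
# so only that quadrant should light up in the sensitivity map.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, toy_predict)
```

With a real trained classifier in place of `toy_predict`, the same loop highlights which image regions the model's prediction actually depends on, which is the kind of evidence clinicians and regulators ask for.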
Affiliations
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK
- Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
5. Kaya Y, Gürsoy E. A MobileNet-based CNN model with a novel fine-tuning mechanism for COVID-19 infection detection. Soft Comput 2023; 27:5521-5535. PMID: 36618761; PMCID: PMC9812349; DOI: 10.1007/s00500-022-07798-y.
Abstract
COVID-19 is caused by a virus that infects the upper respiratory tract and lungs. The number of cases and deaths increased daily during the pandemic. Because timely diagnosis of such a disease is vital, researchers have focused on computer-aided diagnosis systems. Chest X-rays have helped monitor various lung diseases, including COVID-19. In this study, we propose a deep transfer learning approach with novel fine-tuning mechanisms to classify COVID-19 from chest X-ray images. We present one classical and two new fine-tuning mechanisms to increase model performance. Two publicly available databases were combined and used for the study, comprising 3616 COVID-19, 1576 normal (healthy), and 4265 pneumonia X-ray images. The models achieved average accuracy rates of 95.62%, 96.10%, and 97.61%, respectively, for the 3-class case with fivefold cross-validation. Numerical results show that the third model eliminated 81.92% of the total fine-tuning operations and achieved better results. The proposed approach is quite efficient compared with other state-of-the-art methods of detecting COVID-19.
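The basic mechanics underlying any such fine-tuning scheme, keep a pretrained feature extractor frozen and update only the classification head, can be sketched without a deep learning framework. The fixed random projection below merely stands in for frozen MobileNet layers, and the task is synthetic; it is a conceptual sketch, not the authors' mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed random projection standing in
# for a frozen convolutional base (illustrative stand-in, not MobileNet).
W_frozen = rng.normal(size=(2, 8))

def features(X):
    return np.tanh(X @ W_frozen)               # frozen layer: never updated

# Toy 2-D inputs with a linearly separable binary label
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable "head": logistic regression on the extracted features,
# mimicking fine-tuning only the top of the network.
w, b = np.zeros(8), 0.0
for _ in range(300):
    F = features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y                                  # gradient of the log-loss
    w -= 0.5 * F.T @ g / len(X)                # only head parameters move
    b -= 0.5 * g.mean()

p = 1.0 / (1.0 + np.exp(-(features(X) @ w + b)))
acc = ((p > 0.5) == (y > 0.5)).mean()
```

The fine-tuning mechanisms in the paper essentially choose how many operations like the frozen/trainable split above are skipped, which is where the reported 81.92% reduction comes from.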
Affiliations
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, Adana, Turkey
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, Adana, Turkey
6. Daniel, Cenggoro TW, Pardamean B. A systematic literature review of machine learning application in COVID-19 medical image classification. Procedia Comput Sci 2023; 216:749-756. PMID: 36643182; PMCID: PMC9829419; DOI: 10.1016/j.procs.2022.12.192.
Abstract
Detecting COVID-19 as early and as quickly as possible is one way to stop its spread. Machine learning development can help diagnose COVID-19 more quickly and accurately. This review aims to find out how far research has progressed and what lessons can be learned for future research in this sector. By filtering titles, abstracts, and content in the Google Scholar database, this literature review identified 19 related papers to answer two research questions: which medical images are commonly used for COVID-19 classification, and which methods are used for COVID-19 classification. According to the findings, chest X-rays were the most commonly used data for COVID-19 classification, and transfer learning was the most commonly used method. The researchers also concluded that lung segmentation and the use of multimodal data could improve performance.
Affiliations
- Daniel
- Computer Science Department, BINUS Graduate Program – Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia
- Tjeng Wawan Cenggoro
- Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia
- Bioinformatics and Data Science Research Center, Bina Nusantara University, Jakarta 11480, Indonesia
- Bens Pardamean
- Computer Science Department, BINUS Graduate Program – Master of Computer Science Program, Bina Nusantara University, Jakarta 11480, Indonesia
- Bioinformatics and Data Science Research Center, Bina Nusantara University, Jakarta 11480, Indonesia
7. Wang Y, Hargreaves CA. A Review Study of the Deep Learning Techniques used for the Classification of Chest Radiological Images for COVID-19 Diagnosis. Int J Inf Manag Data Insights 2022. PMCID: PMC9294035; DOI: 10.1016/j.jjimei.2022.100100.
Abstract
In the fight against COVID-19, immediate and accurate screening of infected patients is of great significance. Chest X-Ray (CXR) and Computed Tomography (CT) screening play an important role in the diagnosis of COVID-19. Studies have shown that, for some patients, changes appear in chest radiological images before COVID-19 symptoms begin, and the symptoms of COVID-19 and other lung diseases can be similar in their very early stages. Further, it is crucial to effectively distinguish COVID-19 patients from healthy people and from patients with other lung diseases as soon as possible; otherwise, inaccurate diagnosis may expose more people to the coronavirus. Many researchers have developed end-to-end deep learning techniques for classifying COVID-19 patients without manual feature engineering. In this paper, we review the deep learning techniques that have been used to analyze chest X-ray and computed tomography scans to classify patients with COVID-19. In addition, we summarize the common public datasets, challenges, limitations, and possible future work. This review confirms that (1) deep learning models are effective in classifying chest X-ray images provided the training dataset is sufficiently large; (2) data augmentation and generative adversarial networks (GANs) mitigate the small-training-dataset problem; (3) transfer learning methods greatly enhance the extraction and selection of features important for chest image classification; and (4) hyperparameter tuning is valuable for increasing deep learning model accuracies, generally to more than 97%. Our review helps new researchers identify gaps and opportunities for further or new research.
8. Fang L, Wang X. COVID-RDNet: A novel coronavirus pneumonia classification model using the mixed dataset by CT and X-rays images. Biocybern Biomed Eng 2022; 42:977-994. PMID: 35945982; PMCID: PMC9353669; DOI: 10.1016/j.bbe.2022.07.009.
Abstract
Coronavirus disease 2019 (COVID-19) testing relies on traditional screening methods, which require substantial manpower and material resources. Recently, to effectively reduce the damage caused by radiation and enhance effectiveness, deep learning approaches that classify COVID-19-negative and -positive cases using mixed datasets of CT and X-ray images have achieved remarkable research results. However, the details presented in CT and X-ray images have pathologically diverse yet similar features, increasing the difficulty for physicians judging specific cases. On this basis, this paper proposes a novel coronavirus pneumonia classification model using a mixed dataset of CT and X-ray images. To address the feature similarity between other lung diseases and COVID-19, the extracted features are enhanced by an adaptive region enhancement algorithm. In addition, a deep network based on residual blocks and dense blocks is trained and tested. On the one hand, the residual blocks effectively improve the accuracy of the model, and non-linear COVID-19 features are obtained through cross-layer links. On the other hand, the dense blocks effectively improve the robustness of the model by connecting local and abstract information. On the mixed X-ray and CT dataset, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the curve (AUC), and accuracy all reach 0.99. While respecting patient privacy and ethics, the proposed algorithm, using a mixed dataset from real cases, can effectively assist doctors in performing accurate COVID-19 negative/positive classification to determine the infection status of patients.
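The two building blocks named here reduce to simple identities: a residual block adds its input to the transformed output (the cross-layer link), while a dense block concatenates every earlier output into each later layer's input. The linear `conv_like` layer below is a stand-in for a real convolution, and the shapes are illustrative, not the paper's.

```python
import numpy as np

def conv_like(x, w):
    """Stand-in for a convolutional layer: a fixed linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

def residual_block(x, w1, w2):
    # output = F(x) + x : the skip connection lets low-level features
    # and gradients cross layers directly
    return conv_like(conv_like(x, w1), w2) + x

def dense_block(x, w_list):
    # each layer sees the concatenation of all previous outputs,
    # linking local and abstract information
    feats = [x]
    for w in w_list:
        feats.append(conv_like(np.concatenate(feats, axis=1), w))
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
r = residual_block(x, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
d = dense_block(x, [rng.normal(size=(4, 3)), rng.normal(size=(7, 3))])
```

Note that if the transform branch of the residual block outputs zeros, the block is exactly the identity, which is why such blocks are easy to train even in deep stacks.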
Affiliations
- Lingling Fang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
- Xin Wang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
9. Prediction of All-Cause Mortality Based on Stress/Rest Myocardial Perfusion Imaging (MPI) Using Deep Learning: A Comparison between Image and Frequency Spectra as Input. J Pers Med 2022; 12:1105. PMID: 35887602; PMCID: PMC9322556; DOI: 10.3390/jpm12071105.
Abstract
Background: Cardiovascular management and risk stratification of patients is an important issue in clinics. Patients who have experienced an adverse cardiac event are concerned for their future and want to know their survival probability. Methods: We trained eight state-of-the-art CNN models using polar maps of myocardial perfusion imaging (MPI), gender, lung/heart ratio, and patient age for 5-year survival prediction after an adverse cardiac event, based on a cohort of 862 patients who had experienced adverse cardiac events and undergone stress/rest MPI. The CNN models predict whether a patient survives 5 years after a cardiac event, i.e., two classes, yes or no. Results: The best accuracy among the CNN prediction models was 0.70 (median value), obtained by ResNet-50V2 using the image as input in the baseline experiment. All the CNN models performed better when frequency spectra were used as the input, with accuracy increments of about 7-9%. Conclusions: This is the first trial to use pure rest/stress MPI polar maps and limited clinical data to predict patients' 5-year survival with CNN models and deep learning. The study shows the feasibility of using frequency spectra rather than images, which might increase the performance of CNNs.
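Converting an image into the frequency-spectrum representation used as the alternative CNN input amounts to a 2-D FFT followed by a magnitude transform. The log-magnitude normalization below is one common choice and is assumed here for illustration; it is not necessarily the paper's exact preprocessing, and the synthetic sinusoid stands in for an MPI polar map.

```python
import numpy as np

def frequency_spectrum(img):
    """Turn an image into a normalized log-magnitude frequency spectrum,
    the kind of alternative CNN input explored in the paper (sketch)."""
    f = np.fft.fftshift(np.fft.fft2(img))  # center the zero frequency
    mag = np.log1p(np.abs(f))              # log scale compresses dynamic range
    return mag / mag.max()                 # normalize to [0, 1] for the CNN

# A synthetic 32x32 "image": a single horizontal spatial frequency
# (4 cycles across the width), constant down each column.
x = np.arange(32)
img = np.sin(2 * np.pi * 4 * x / 32)[None, :] * np.ones((32, 1))
spec = frequency_spectrum(img)
```

For this pure sinusoid, all spectral energy lands at the two symmetric peaks four bins from the center, so the spectrum is a far sparser input than the raw image, which is one plausible reason a CNN might learn from it more easily.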
10. Mehmood M, Alshammari N, Alanazi SA, Basharat A, Ahmad F, Sajjad M, Junaid K. Improved colorization and classification of intracranial tumor expanse in MRI images via hybrid scheme of Pix2Pix-cGANs and NASNet-large. J King Saud Univ Comput Inf Sci 2022. DOI: 10.1016/j.jksuci.2022.05.015.
11. Fang Z, Ye B, Yuan B, Wang T, Zhong S, Li S, Zheng J. Angle prediction model when the imaging plane is tilted about z-axis. J Supercomput 2022; 78:18598-18615. PMID: 35692867; PMCID: PMC9175174; DOI: 10.1007/s11227-022-04595-0.
Abstract
Computed Tomography (CT) is a complex imaging system that requires highly precise geometric positioning. We found a particular artifact caused by the detection plane being tilted about the z-axis. In short-scan cone-beam reconstruction, this kind of geometric deviation results in half-circle-shaped blurring around highlighted particles in reconstructed slices. This artifact is pronounced near the slice periphery but faint around the slice center. We built mathematical models and an InceptionV3-R deep network to learn the slice artifact features and estimate the detector z-axis tilt angle. The testing results were a mean absolute error of 0.08819 degrees, a root mean square error of 0.15221 degrees, and an R-square of 0.99944. A geometric deviation recovery formula was deduced, which can eliminate this artifact efficiently. This research enlarges the hierarchy of CT artifact knowledge and verifies the capability of machine learning in recovering from CT geometric deviation artifacts.
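The periphery-versus-center behavior of the artifact is consistent with simple rigid-rotation geometry: an in-plane rotation of detector coordinates displaces peripheral pixels far more than central ones. The sketch below models only this simplified in-plane effect, not the paper's full cone-beam geometry, and the pixel coordinates are illustrative.

```python
import numpy as np

def tilt_displacement(u, v, theta_deg):
    """In-plane rotation of detector coordinates (u, v) about the z-axis
    by theta degrees; returns the resulting pixel displacement magnitude."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    p = np.stack([u, v])                 # (2, n) column vectors
    return np.linalg.norm(R @ p - p, axis=0)

# Displacement at the detector center vs. 500 pixels out, for a 0.1 deg tilt
center = tilt_displacement(np.array([0.0]), np.array([0.0]), 0.1)
edge = tilt_displacement(np.array([500.0]), np.array([0.0]), 0.1)
```

The displacement grows essentially linearly with radius (about r·θ in radians), which matches the observation that the blurring artifact is distinct near the slice periphery and nearly absent at the center.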
Affiliations
- Zheng Fang
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
- Bichao Ye
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
- Bingan Yuan
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
- Tingjun Wang
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
- Shuo Zhong
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
- Shunren Li
- ASR Technology (Xiamen) Co., Ltd, Xiamen, China
- Jianyi Zheng
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China
12. Santosh KC, Ghosh S, GhoshRoy D. Deep Learning for Covid-19 Screening Using Chest X-Rays in 2020: A Systematic Review. Int J Pattern Recogn 2022. DOI: 10.1142/s0218001422520103.
Abstract
Artificial Intelligence (AI) has driven countless contributions in the field of healthcare and medical imaging. In this paper, we thoroughly analyze peer-reviewed research articles on AI-guided tools for COVID-19 analysis/screening using chest X-ray images published in 2020. We discuss how far deep learning algorithms help in decision-making. We identify and address data collections, methodical contributions, promising methods, and challenges. However, a fair comparison is not trivial, as dataset sizes varied over the course of 2020. Even though the unprecedented efforts to build AI-guided tools to detect, localize, and segment COVID-19 cases are limited to education and training, we elaborate on their strengths and possible weaknesses in light of the need for cross-population train/test models. In total, using the search keywords (Covid-19 OR Coronavirus) AND chest x-ray AND deep learning AND artificial intelligence AND medical imaging in both the PubMed Central repository and Web of Science, we systematically reviewed 58 research articles and performed a meta-analysis.
Affiliations
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab – Computer Science, University of South Dakota, Vermillion, SD 57069, USA
- Supriti Ghosh
- 2AI: Applied Artificial Intelligence Research Lab – Computer Science, University of South Dakota, Vermillion, SD 57069, USA
13. Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. PMID: 35418051; PMCID: PMC9007400; DOI: 10.1186/s12880-022-00793-7.
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been configured arbitrarily in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
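The four TL approaches compared in the review can be summarized as a choice of which backbone parameters train and what classifier sits on top. The function below encodes one plausible reading of that taxonomy; the layer counts, the number of unfrozen layers under "fine_tuning", and the example hybrid classifier are all illustrative assumptions, not the review's definitions verbatim.

```python
def tl_configuration(strategy, n_backbone_layers=5):
    """Sketch of four transfer-learning approaches as (trainable backbone
    layers, classifier type) pairs -- an interpretive summary, not a spec."""
    if strategy == "feature_extractor":
        # frozen backbone, new trainable dense head
        return {"trainable": [], "classifier": "new dense head"}
    if strategy == "feature_extractor_hybrid":
        # frozen backbone feeding a classical ML classifier
        return {"trainable": [], "classifier": "classical ML (e.g. SVM)"}
    if strategy == "fine_tuning":
        # unfreeze only the top backbone layers (count is illustrative)
        top = list(range(n_backbone_layers - 2, n_backbone_layers))
        return {"trainable": top, "classifier": "new dense head"}
    if strategy == "fine_tuning_from_scratch":
        # every backbone layer updates during training
        return {"trainable": list(range(n_backbone_layers)),
                "classifier": "new dense head"}
    raise ValueError(f"unknown strategy: {strategy}")
```

Framed this way, the review's recommendation amounts to preferring the first two rows (frozen deep backbones as feature extractors) when data and compute are scarce.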
Affiliations
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
14. Das D, Biswas SK, Bandyopadhyay S. Perspective of AI system for COVID-19 detection using chest images: a review. Multimed Tools Appl 2022; 81:21471-21501. PMID: 35310889; PMCID: PMC8923339; DOI: 10.1007/s11042-022-11913-4.
Abstract
Coronavirus Disease 2019 (COVID-19) is an evolving communicable disease caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which has led to a global pandemic since December 2019. The virus originated in bats and is suspected to have been transmitted to humans through zoonotic links. The disease shows dynamic symptoms, nature, and reactions in the human body, thereby challenging the world of medicine. Moreover, it bears a strong resemblance to viral pneumonia and Community-Acquired Pneumonia (CAP). Reverse Transcription Polymerase Chain Reaction (RT-PCR) is performed for detection of COVID-19. Nevertheless, RT-PCR is not completely reliable and is sometimes unavailable. Therefore, scientists and researchers have suggested the analysis and examination of Computed Tomography (CT) scans and Chest X-Ray (CXR) images to identify features of COVID-19 in patients with clinical manifestations of the disease, using expert systems that deploy learning algorithms such as Machine Learning (ML) and Deep Learning (DL). This paper identifies and reviews various chest image features, obtained with the aforementioned imaging modalities, that enable COVID-19 detection that is more reliable and faster than laboratory processes. The paper also reviews and compares different aspects of ML and DL using chest images for the detection of COVID-19.
Affiliation(s)
- Dolly Das
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Cachar, Assam, India
- Saroj Kumar Biswas
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Cachar, Assam, India
- Sivaji Bandyopadhyay
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Cachar, Assam, India
15
COVID-19 Detection in Chest X-ray Images Using a New Channel Boosted CNN. Diagnostics (Basel) 2022; 12:267. [PMID: 35204358 PMCID: PMC8871483 DOI: 10.3390/diagnostics12020267]
Abstract
COVID-19 is a respiratory illness that has affected a large population worldwide and continues to have devastating consequences. It is imperative to detect COVID-19 at the earliest opportunity to limit the span of infection. In this work, we developed a new CNN architecture STM-RENet to interpret the radiographic patterns from X-ray images. The proposed STM-RENet is a block-based CNN that employs the idea of split–transform–merge in a new way. In this regard, we have proposed a new convolutional block STM that implements the region and edge-based operations separately, as well as jointly. The systematic use of region and edge implementations in combination with convolutional operations helps in exploring region homogeneity, intensity inhomogeneity, and boundary-defining features. The learning capacity of STM-RENet is further enhanced by developing a new CB-STM-RENet that exploits channel boosting and learns textural variations to effectively screen the X-ray images of COVID-19 infection. The idea of channel boosting is exploited by generating auxiliary channels from the two additional CNNs using Transfer Learning, which are then concatenated to the original channels of the proposed STM-RENet. A significant performance improvement is shown by the proposed CB-STM-RENet in comparison to the standard CNNs on three datasets, especially on the stringent CoV-NonCoV-15k dataset. The good detection rate (97%), accuracy (96.53%), and reasonable F-score (95%) of the proposed technique suggest that it can be adapted to detect COVID-19 infected patients.
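The channel-boosting step described above amounts to concatenating auxiliary feature channels, produced by transfer-learned CNNs, onto the base network's own channels. A minimal NumPy sketch of that concatenation step; all array shapes are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Feature maps in (batch, height, width, channels) layout; shapes are
# illustrative assumptions, not taken from the STM-RENet paper.
original = np.random.rand(2, 16, 16, 32)  # channels from the base network
aux_a = np.random.rand(2, 16, 16, 8)      # auxiliary channels, transfer-learned CNN 1
aux_b = np.random.rand(2, 16, 16, 8)      # auxiliary channels, transfer-learned CNN 2

# Channel boosting: concatenate the auxiliary channels onto the originals,
# giving later layers a richer channel space to learn from.
boosted = np.concatenate([original, aux_a, aux_b], axis=-1)
print(boosted.shape)  # (2, 16, 16, 48)
```

In a real model the concatenated tensor would feed further convolutional blocks; only the concatenation itself is shown here.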
16
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. Sensors (Basel) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is crucial for stopping the spread of the SARS-CoV-2 virus. The proven success of artificial intelligence (AI) in various fields of science has encouraged researchers to apply it to this problem. AI techniques applied to medical imaging modalities, including X-ray, computed tomography (CT), and ultrasound (US), have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors, the significance of these research efforts, the potential challenges, and future trends related to the implementation of AI systems for disease detection during the COVID-19 pandemic.
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
- Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
- Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
17
Owais M, Baek NR, Park KR. Domain-Adaptive Artificial Intelligence-Based Model for Personalized Diagnosis of Trivial Lesions Related to COVID-19 in Chest Computed Tomography Scans. J Pers Med 2021; 11:1008. [PMID: 34683149 PMCID: PMC8537687 DOI: 10.3390/jpm11101008]
Abstract
BACKGROUND Early and accurate detection of COVID-19-related findings (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation in lung computed tomography (CT) scans) is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. METHOD Recent breakthroughs in deep learning methods have boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time of 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3520 images of 20 different patients) and MosMed (including a total of 2049 images of 50 different patients). RESULTS Our method exhibits an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and the cross-dataset setting, respectively, and outperforms various state-of-the-art methods. CONCLUSIONS These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present COVID-19 crisis.
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
18
Umair M, Khan MS, Ahmed F, Baothman F, Alqahtani F, Alian M, Ahmad J. Detection of COVID-19 Using Transfer Learning and Grad-CAM Visualization on Indigenously Collected X-ray Dataset. Sensors (Basel) 2021; 21:5813. [PMID: 34502702 PMCID: PMC8434081 DOI: 10.3390/s21175813]
Abstract
The COVID-19 outbreak began in December 2019 and has dreadfully affected our lives since then. More than three million lives have been claimed by this newest member of the coronavirus family. With the emergence of continuously mutating variants of this virus, it remains indispensable to diagnose the virus successfully at early stages. Although the primary technique for diagnosis is the PCR test, non-contact methods utilizing chest radiographs and CT scans are often preferred. Artificial intelligence, in this regard, plays an essential role in the early and accurate detection of COVID-19 using pulmonary images. In this research, a transfer learning technique with fine-tuning was utilized for the detection and classification of COVID-19. Four pre-trained models, i.e., VGG16, DenseNet-121, ResNet-50, and MobileNet, were used. These deep neural networks were trained using a dataset (available on Kaggle) of 7232 (COVID-19 and normal) chest X-ray images. An indigenous dataset of 450 chest X-ray images of Pakistani patients was collected and used for testing and prediction purposes. Various important metrics, e.g., recall, specificity, F1-score, precision, loss graphs, and confusion matrices, were calculated to validate the accuracy of the models. The achieved accuracies of VGG16, ResNet-50, DenseNet-121, and MobileNet are 83.27%, 92.48%, 96.49%, and 96.48%, respectively. To display feature maps that depict the decomposition of an input image by successive filters, a visualization of the intermediate activations was performed. Finally, the Grad-CAM technique was applied to create class-specific heatmap images that highlight the features extracted from the X-ray images. Various optimizers were used for error minimization. DenseNet-121 outperformed the other three models in terms of both accuracy and prediction.
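The Grad-CAM heatmaps mentioned above are computed by weighting each feature map of the last convolutional layer with the spatial average of its class-score gradient, summing, and applying a ReLU. A dependency-free NumPy sketch of that core arithmetic; the shapes and toy inputs are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM arithmetic: weight each feature map by the spatial
    average of its gradient, sum over channels, then apply ReLU.

    feature_maps, gradients: (channels, height, width) arrays taken from
    the last convolutional layer (shapes here are illustrative).
    """
    # Global-average-pool the gradients to get one weight per channel
    weights = gradients.mean(axis=(1, 2))              # (channels,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # (height, width)
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:                                  # normalise to [0, 1]
        cam = cam / cam.max()
    return cam

# Toy example: channel 0 activates in the top-left corner and receives a
# positive gradient, so the heatmap should peak there.
fmaps = np.zeros((2, 4, 4))
fmaps[0, 0, 0] = 1.0
grads = np.zeros((2, 4, 4))
grads[0] = 1.0
heatmap = grad_cam(fmaps, grads)
print(heatmap[0, 0])  # 1.0
```

In practice the gradients come from backpropagating the class score through the network; only the heatmap arithmetic is reproduced here.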
Affiliation(s)
- Muhammad Umair
- Department of Electrical Engineering, HITEC University, Taxila 47080, Pakistan
- Muhammad Shahbaz Khan
- Department of Electrical Engineering, HITEC University, Taxila 47080, Pakistan
- Fawad Ahmed
- Department of Biomedical Engineering, HITEC University, Taxila 47080, Pakistan
- Fatmah Baothman
- Faculty of Computing and Information Technology, King Abdul Aziz University, Jeddah 21431, Saudi Arabia
- Fehaid Alqahtani
- Department of Computer Science, King Fahad Naval Academy, Al Jubail 35512, Saudi Arabia
- Muhammad Alian
- Department of Electrical Engineering, HITEC University, Taxila 47080, Pakistan
- Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
19
Systems Radiology and Personalized Medicine. J Pers Med 2021; 11:769. [PMID: 34442413 PMCID: PMC8400747 DOI: 10.3390/jpm11080769]
20
Montalbo FJP. Diagnosing Covid-19 chest x-rays with a lightweight truncated DenseNet with partial layer freezing and feature fusion. Biomed Signal Process Control 2021; 68:102583. [PMID: 33828610 PMCID: PMC8015405 DOI: 10.1016/j.bspc.2021.102583]
Abstract
Due to an unforeseen turn of events, our world has undergone another global pandemic, caused by a highly contagious novel coronavirus named COVID-19. The novel virus inflames the lungs similarly to pneumonia, making it challenging to diagnose. Currently, the standard way to diagnose the presence of the virus in an individual is a molecular real-time Reverse-Transcription Polymerase Chain Reaction (rRT-PCR) test on fluids acquired through nasal swabs. Such a test is difficult to acquire in most underdeveloped countries, which have few experts who can perform it. As a substitute, the widely available Chest X-Ray (CXR) became an alternative means to rule out the virus. However, this method does not come easy, as the virus still possesses unknown characteristics that even experienced radiologists and other medical experts find difficult to diagnose through CXRs. Several studies have recently used computer-aided methods to automate and improve the diagnosis of CXRs through Artificial Intelligence (AI) based on computer vision and Deep Convolutional Neural Networks (DCNNs), some of which require heavy processing costs and other tedious methods to produce. Therefore, this work proposes Fused-DenseNet-Tiny, a lightweight DCNN model based on a truncated and concatenated densely connected neural network (DenseNet). The model was trained to learn CXR features using transfer learning, partial layer freezing, and feature fusion. Upon evaluation, the proposed model achieved a remarkable 97.99% accuracy, with only 1.2 million parameters and a shorter end-to-end structure. It has also shown better performance than some existing studies and other massive state-of-the-art models that diagnosed COVID-19 from CXRs.
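Partial layer freezing, as used above, keeps the early generic feature extractors fixed and fine-tunes only the final blocks. A framework-agnostic sketch of the idea; the `Layer` class is a hypothetical stand-in for a real framework layer (Keras exposes a `layer.trainable` flag, PyTorch uses `param.requires_grad` instead):

```python
from dataclasses import dataclass

# `Layer` is a hypothetical stand-in for a framework layer object, used
# only to illustrate the freezing logic.
@dataclass
class Layer:
    name: str
    trainable: bool = True

def freeze_all_but_last(layers, n_trainable):
    """Freeze every layer except the final `n_trainable` ones, so the
    early generic features stay fixed while the top blocks fine-tune."""
    cutoff = len(layers) - n_trainable
    for i, layer in enumerate(layers):
        layer.trainable = i >= cutoff
    return layers

# A toy six-block stack, keeping only the last two blocks trainable
model = [Layer(f"dense_block_{i}") for i in range(6)]
freeze_all_but_last(model, 2)
print([layer.trainable for layer in model])
# [False, False, False, False, True, True]
```

The choice of how many blocks to leave trainable is a hyperparameter; the paper's specific truncation and fusion scheme is not reproduced here.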
Collapse
Key Words
- AP, Average Pooling
- AUC, Area Under the Curve
- BN, Batch Normalization
- BS, Batch Size
- CAD, Computer-Aided Diagnosis
- CCE, Categorical Cross-Entropy
- CNN, Convolutional Neural Networks
- CT, Computed Tomography
- CV, Computer Vision
- CXR, Chest X-Rays
- Chest x-rays
- Computer-aided diagnosis
- Covid-19
- DCNN, Deep Convolutional Neural Networks
- DL, Deep Learning
- DR, Dropout Rate
- Deep learning
- Densely connected neural networks
- GAP, Global Average Pooling
- GRAD-CAM, Gradient-Weighted Class Activation Maps
- JPG, Joint Photographic Experts Group
- LR, Learning Rate
- MP, Max-Pooling
- P-R, Precision-Recall
- PEPX, Projection-Expansion-Projection-Extension
- ROC, Receiver Operating Characteristic
- ReLU, Rectified Linear Unit
- SGD, Stochastic Gradient Descent
- WHO, World Health Organization
- rRT-PCR, real-time Reverse-Transcription Polymerase Chain Reaction
21
Jin W, Dong S, Dong C, Ye X. Hybrid ensemble model for differential diagnosis between COVID-19 and common viral pneumonia by chest X-ray radiograph. Comput Biol Med 2021; 131:104252. [PMID: 33610001 PMCID: PMC7966819 DOI: 10.1016/j.compbiomed.2021.104252]
Abstract
BACKGROUND Chest X-ray radiography (CXR) has been widely considered an accessible, feasible, and convenient method to evaluate suspected patients' lung involvement during the COVID-19 pandemic. However, with the escalating number of suspected cases, traditional diagnosis via CXR fails to deliver results within a short period of time. Therefore, it is crucial to employ artificial intelligence (AI) to enhance CXR analysis for quick and accurate diagnoses. Previous studies have reported the feasibility of utilizing deep learning methods to screen for COVID-19 using CXR and CT results. However, these models use only a single deep learning network for chest radiograph detection; the accuracy of this approach requires further improvement. METHODS In this study, we propose a three-step hybrid ensemble model comprising a feature extractor, a feature selector, and a classifier. First, a pre-trained AlexNet with an improved structure extracts the original image features. Then, the ReliefF algorithm is adopted to rank the extracted features, and a trial-and-error approach is used to select the n most important features to reduce the feature dimension. Finally, an SVM classifier provides classification results based on the n selected features. RESULTS Compared to five existing models (InceptionV3: 97.916 ± 0.408%; SqueezeNet: 97.189 ± 0.526%; VGG19: 96.520 ± 1.220%; ResNet50: 97.476 ± 0.513%; ResNet101: 98.241 ± 0.209%), the proposed model demonstrated the best performance in terms of overall accuracy (98.642 ± 0.398%). Additionally, the proposed model demonstrates a considerable improvement in classification time efficiency (SqueezeNet: 6.602 ± 0.001 s; InceptionV3: 12.376 ± 0.002 s; ResNet50: 10.952 ± 0.001 s; ResNet101: 18.040 ± 0.002 s; VGG19: 16.632 ± 0.002 s; proposed model: 5.917 ± 0.001 s). CONCLUSION The model proposed in this article is practical and effective and can provide high-precision COVID-19 CXR detection. We demonstrated its suitability for aiding medical professionals in efficiently distinguishing normal, viral pneumonia, and COVID-19 CXRs on small sample sizes.
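The ReliefF-based selection step above ranks features by how strongly they separate nearest neighbors of different classes while staying stable within a class. A simplified, binary-class Relief scorer in NumPy illustrates the idea; this is a sketch only, and neither the paper's exact ReliefF variant nor its SVM classifier is reproduced here:

```python
import numpy as np

def relief_scores(X, y, n_iter=None, rng=None):
    """Simplified binary-class Relief scoring: a feature gains weight when
    it differs on the nearest miss (other class) and agrees with the
    nearest hit (same class)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    idx = rng.permutation(n)[: n_iter or n]
    w = np.zeros(d)
    for i in idx:
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distance to every sample
        dist[i] = np.inf                     # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
X = np.column_stack([y + 0.01 * rng.standard_normal(40),
                     rng.standard_normal(40)])
scores = relief_scores(X, y)
top = np.argsort(scores)[::-1][:1]  # keep the n=1 best-ranked feature
print(top)  # feature 0 ranks first
```

In the paper's pipeline, the top-n features selected this way would then be fed to an SVM.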
Affiliation(s)
- Weiqiu Jin
- School of Medicine, Shanghai Jiao Tong University, 200025, Shanghai, PR China
- Shuqin Dong
- School of Traffic and Transportation Engineering, Central South University, 410075, Hunan, PR China
- Changzi Dong
- Department of Bioengineering, School of Engineering and Science, University of Pennsylvania, 19104, Philadelphia, USA
- Xiaodan Ye
- Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, 200030, Shanghai, PR China (corresponding author)
22
Lee KS, Lee E, Choi B, Pyun SB. Automatic Pharyngeal Phase Recognition in Untrimmed Videofluoroscopic Swallowing Study Using Transfer Learning with Deep Convolutional Neural Networks. Diagnostics (Basel) 2021; 11:300. [PMID: 33668528 PMCID: PMC7918932 DOI: 10.3390/diagnostics11020300]
Abstract
Background: The videofluoroscopic swallowing study (VFSS) is considered the gold standard diagnostic tool for evaluating dysphagia. However, it is time-consuming and labor-intensive for the clinician to manually search the long recorded video frame by frame to identify instantaneous swallowing abnormalities in VFSS images. Therefore, this study presents a deep learning-based approach, using transfer learning with a convolutional neural network (CNN), that automatically annotates pharyngeal phase frames in untrimmed VFSS videos so that frames need not be searched manually. Methods: To determine whether an image frame in a VFSS video belongs to the pharyngeal phase, a single-frame baseline architecture based on the deep CNN framework is used, and a transfer learning technique with fine-tuning is applied. Results: Among all the CNN models tested, the one fine-tuned with two blocks of VGG-16 (VGG16-FT5) achieved the highest performance in recognizing pharyngeal phase frames: an accuracy of 93.20 (±1.25)%, sensitivity of 84.57 (±5.19)%, specificity of 94.36 (±1.21)%, AUC of 0.8947 (±0.0269), and Kappa of 0.7093 (±0.0488). Conclusions: Using appropriate transfer learning and fine-tuning techniques, together with explainable deep learning techniques such as Grad-CAM, this study shows that the proposed single-frame baseline architecture based on a deep CNN framework can yield high performance in the full automation of VFSS video analysis.
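Once a classifier labels each frame as pharyngeal or not, annotating an untrimmed video reduces to collapsing the per-frame labels into time intervals. A small sketch of that post-processing step; the boolean frame-flag input format is an assumption for illustration, not the paper's implementation:

```python
def phase_segments(frame_flags):
    """Collapse per-frame pharyngeal-phase predictions (True/False) into
    (start, end) frame intervals, with `end` exclusive."""
    segments, start = [], None
    for i, flag in enumerate(frame_flags):
        if flag and start is None:
            start = i                    # a pharyngeal segment begins
        elif not flag and start is not None:
            segments.append((start, i))  # the segment just ended
            start = None
    if start is not None:                # video ends mid-segment
        segments.append((start, len(frame_flags)))
    return segments

# Frames 0-4 oral phase, 5-8 pharyngeal, 9-11 esophageal, 12-13 pharyngeal
flags = [False] * 5 + [True] * 4 + [False] * 3 + [True] * 2
print(phase_segments(flags))  # [(5, 9), (12, 14)]
```

In a real pipeline one might additionally smooth the per-frame predictions before segmenting, to suppress single-frame classifier noise.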
Affiliation(s)
- Ki-Sun Lee
- Medical Science Research Center, Ansan Hospital, Korea University College of Medicine, Ansan-si 15355, Korea
- Eunyoung Lee
- Department of Physical Medicine and Rehabilitation, Anam Hospital, Korea University College of Medicine, Seoul 02841, Korea
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul 02841, Korea
- Bareun Choi
- Department of Physical Medicine and Rehabilitation, Anam Hospital, Korea University College of Medicine, Seoul 02841, Korea
- Sung-Bom Pyun
- Department of Physical Medicine and Rehabilitation, Anam Hospital, Korea University College of Medicine, Seoul 02841, Korea
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul 02841, Korea
- Brain Convergence Research Center, Korea University College of Medicine, Seoul 02841, Korea
23
Ahsan MM, Ahad MT, Soma FA, Paul S, Chowdhury A, Luna SA, Yazdan MMS, Rahman A, Siddique Z, Huebner P. Detecting SARS-CoV-2 From Chest X-Ray Using Artificial Intelligence. IEEE Access 2021; 9:35501-35513. [PMID: 34976572 PMCID: PMC8675556 DOI: 10.1109/access.2021.3061621]
Abstract
Chest radiographs (X-rays) combined with deep Convolutional Neural Network (CNN) methods have been demonstrated to detect and diagnose the onset of COVID-19, the disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). However, questions remain regarding the accuracy of those methods, as they are often challenged by limited datasets and performance legitimacy on imbalanced data, and their results are typically reported without proper confidence intervals. To address these issues, in this study we propose and test six modified deep learning models, including VGG16, InceptionResNetV2, ResNet50, MobileNetV2, ResNet101, and VGG19, to detect SARS-CoV-2 infection from chest X-ray images. Results are evaluated in terms of accuracy, precision, recall, and F-score using a small and balanced dataset (Study One) and a larger and imbalanced dataset (Study Two). With a 95% confidence interval, VGG16 and MobileNetV2 show that, on both datasets, the models could identify patients with COVID-19 symptoms with an accuracy of up to 100%. We also present a pilot test of VGG16 models on a multi-class dataset, showing promising results by achieving 91% accuracy in detecting COVID-19, normal, and pneumonia patients. Furthermore, we demonstrate that the poorly performing models in Study One (ResNet50 and ResNet101) had their accuracy rise from 70% to 93% once trained with the comparatively larger dataset of Study Two. Still, models like InceptionResNetV2 and VGG19 demonstrated an accuracy of 97% on both datasets, which supports the effectiveness of our proposed methods, ultimately presenting a reasonable and accessible alternative for identifying patients with COVID-19.
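One simple way to attach the 95% confidence intervals the abstract argues for is the normal approximation to a binomial proportion. The sketch below is illustrative only and is not necessarily the interval method the authors used:

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Normal-approximation 95% confidence interval for an accuracy
    estimate, clamped to the valid [0, 1] range."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - half), min(1.0, p + half)

# 93 correct predictions out of 100 test images
lo, hi = accuracy_ci(93, 100)
print(f"93% accuracy on n=100: [{lo:.3f}, {hi:.3f}]")
```

For small test sets or accuracies near 0% or 100%, a Wilson or Clopper-Pearson interval behaves better than this normal approximation.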
Affiliation(s)
- Md Manjurul Ahsan
- School of Industrial and Systems Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Md Tanvir Ahad
- School of Aerospace and Mechanical Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Farzana Akter Soma
- Holy Family Red Crescent Medical College & Hospital, Dhaka 1000, Bangladesh
- Shuva Paul
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ananna Chowdhury
- Z. H. Sikder Women's Medical College & Hospital, Dhaka 1212, Bangladesh
- Akhlaqur Rahman
- School of Industrial Automation and Electrical Engineering, Engineering Institute of Technology, Melbourne, VIC 3000, Australia
- Zahed Siddique
- School of Aerospace and Mechanical Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Pedro Huebner
- School of Industrial and Systems Engineering, The University of Oklahoma, Norman, OK 73019, USA
24
What Can COVID-19 Teach Us about Using AI in Pandemics? Healthcare (Basel) 2020; 8:527. [PMID: 33271960 PMCID: PMC7711608 DOI: 10.3390/healthcare8040527]
Abstract
The COVID-19 pandemic put significant strain on societies and their resources, with the healthcare system and its workers being particularly affected. Artificial Intelligence (AI) offers the unique possibility of improving the response to a pandemic as it emerges and evolves. Here, we utilize the WHO framework of pandemic evolution to analyze various AI applications. Specifically, we analyzed AI from the perspective of all five domains of the WHO pandemic response. To effectively review the currently scattered literature, we organized a sample of relevant literature from various professional and popular resources. The article concludes with a consideration of AI's weaknesses as key factors affecting AI in future pandemic preparedness and response.