1. Haque SBU, Zafar A. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images. Journal of Imaging Informatics in Medicine 2024; 37:308-338. PMID: 38343214. DOI: 10.1007/s10278-023-00916-8.
Abstract
In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability has been particularly vital during the COVID-19 pandemic, where rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can potentially disrupt the models' decision-making processes. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model's resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. 
We then assess the vulnerability of these models to three targeted adversarial attacks: fast gradient sign method (FGSM), projected gradient descent (PGD), and basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models' resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.
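The gradient-based attacks evaluated here (FGSM, and iteratively, PGD and BIM) all perturb each input pixel by a small step in the direction of the sign of the loss gradient. A minimal numerical sketch of one FGSM step, using a hypothetical logistic-regression "model" in place of the paper's ResNet-50/VGG-16/Inception-V3 classifiers:

```python
import numpy as np

# Hypothetical logistic-regression "model" standing in for the paper's
# deep classifiers: p = sigmoid(w.x + b).
def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx) for binary cross-entropy."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # gradient of BCE loss w.r.t. input x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy weights
x = rng.normal(size=16)   # toy "image"
x_adv = fgsm_perturb(x, w, b=0.1, y=1.0, eps=0.03)

# Every pixel moves by exactly eps: an L-infinity-bounded perturbation.
print(np.max(np.abs(x_adv - x)))
```

PGD and BIM iterate this step with clipping back into the epsilon-ball; a JPEG-compression defense like the one proposed above works because such small-amplitude, high-frequency perturbations are largely discarded by lossy compression.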
Affiliation(s)
- Sheikh Burhan Ul Haque: Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
- Aasim Zafar: Department of Computer Science, Aligarh Muslim University, Aligarh 202002, Uttar Pradesh, India
2. Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. PMID: 37883822. DOI: 10.1016/j.media.2023.103000.
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths, and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss future research directions. In addition, we have provided the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
Affiliation(s)
- Reza Azad: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof: Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
3. Ghnemat R, Alodibat S, Abu Al-Haija Q. Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification. J Imaging 2023; 9:177. PMID: 37754941. PMCID: PMC10532018. DOI: 10.3390/jimaging9090177.
Abstract
Recently, deep learning has gained significant attention as a noteworthy branch of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its limited interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, the Chest X-ray (COVID-19 and Pneumonia) dataset, the COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and the COVID-19 Radiography Database. We achieved a testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.
Affiliation(s)
- Rawan Ghnemat: Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan
- Sawsan Alodibat: Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan
- Qasem Abu Al-Haija: Department of Cybersecurity, Princess Sumaya University for Technology, Amman 11941, Jordan
4. Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Med Image Anal 2023; 88:102802. PMID: 37315483. DOI: 10.1016/j.media.2023.102802.
Abstract
Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges, provide insights into solving them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development of this field, we intend to regularly update the latest relevant papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
Affiliation(s)
- Fahad Shamshad: MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Salman Khan: MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; CECS, Australian National University, Canberra ACT 0200, Australia
- Syed Waqas Zamir: Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Munawar Hayat: Faculty of IT, Monash University, Clayton VIC 3800, Australia
- Fahad Shahbaz Khan: MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Computer Vision Laboratory, Linköping University, Sweden
- Huazhu Fu: Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
5. Mozaffari J, Amirkhani A, Shokouhi SB. A survey on deep learning models for detection of COVID-19. Neural Comput Appl 2023; 35:1-29. PMID: 37362568. PMCID: PMC10224665. DOI: 10.1007/s00521-023-08683-x.
Abstract
The spread of COVID-19 started back in 2019, and so far more than 4 million people around the world have lost their lives to this deadly virus and its variants. In view of the high transmissibility of the coronavirus, which has turned this disease into a global pandemic, artificial intelligence can be employed as an effective tool for earlier detection and treatment of the illness. In this review paper, we evaluate the performance of deep learning models in processing the X-ray and CT-scan images of COVID-19 patients' lungs and describe the changes made to these models to enhance their detection accuracy. To this end, we introduce well-known deep learning models such as VGGNet, GoogLeNet, and ResNet, and after reviewing the research works in which these models have been used for the detection of COVID-19, we compare the performances of newer models such as DenseNet, CapsNet, MobileNet, and EfficientNet. We then present the deep learning techniques of GANs, transfer learning, and data augmentation and examine the statistics of their use. We also describe the datasets introduced since the onset of COVID-19, which contain the lung images of COVID-19 patients, healthy individuals, and patients with non-COVID pulmonary diseases. Lastly, we elaborate on the existing challenges in the use of artificial intelligence for COVID-19 detection and the prospective trends of using this method in similar situations. Supplementary Information: The online version contains supplementary material available at 10.1007/s00521-023-08683-x.
Affiliation(s)
- Javad Mozaffari: School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
- Abdollah Amirkhani: School of Automotive Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
- Shahriar B. Shokouhi: School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
6. Vijayanandh T, Shenbagavalli A. A Hybrid Deep Neural Approach for Segmenting the COVID Affection Area from the Lungs X-Ray Images. New Generation Computing 2023; 41:1-20. PMID: 37362548. PMCID: PMC10184644. DOI: 10.1007/s00354-023-00222-5.
Abstract
Nowadays, COVID-19 severity prediction has attracted wide attention in medical research because of the seriousness of the disease, and image processing is applied to analyze severity from lung X-ray images. Several intelligent schemes have been employed to detect the COVID-affected regions of lung X-ray images, but traditional neural approaches have reported lower severity-classification accuracy due to image complexity. The present study therefore introduces a novel chimp-based Adaboost Severity Analysis (CbASA) model, implemented in the MATLAB environment, and lung X-ray images are used to test the performance of the designed model. Because public imaging data sources contain many noisy features, the noise is removed in the initial hidden layer of the novel CbASA, and the noise-free data is then passed to the classification phase, where feature extraction, segmentation, and severity specification are performed. Finally, the classification score is measured and compared with other models, and the presented novel CbASA attains the finest classification outcome.
Affiliation(s)
- T. Vijayanandh: Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, Tamil Nadu 600062, India
- A. Shenbagavalli: Department of Electronics and Communication Engineering, National Engineering College, Kovilpatti, Tamil Nadu 628503, India
7. Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:3446. PMID: 37240552. DOI: 10.3390/jcm12103446.
Abstract
SARS-CoV-2 is a novel virus that has affected the global population by spreading rapidly and causing severe complications that require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could be an important and useful aid, and radiologists and clinicians could rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper provides a comprehensive analysis of state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide accurate and quick automatic tools for diagnosing COVID-19 based on CT scans or X-ray images. In this systematic review, we focus on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies published during the spread of the virus, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Affiliation(s)
- Min-Ho Lee: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman: School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan
8. Liu Y, Xing W, Zhao M, Lin M. A new classification method for diagnosing COVID-19 pneumonia based on joint CNN features of chest X-ray images and parallel pyramid MLP-mixer module. Neural Comput Appl 2023; 35:1-13. PMID: 37362575. PMCID: PMC10147369. DOI: 10.1007/s00521-023-08604-y.
Abstract
During the past three years, the coronavirus disease 2019 (COVID-19) has swept the world. The rapid and accurate recognition of COVID-19 pneumonia is, therefore, of great importance. To handle this problem, we propose a new deep learning pipeline for diagnosing COVID-19 pneumonia from chest X-ray images of normal, COVID-19, and other pneumonia patients. In detail, a self-trained YOLO-v4 network was first used to locate and segment the thoracic region, and the output images were scaled to the same size. Subsequently, a pre-trained convolutional neural network was adopted to extract features of the X-ray images from 13 convolutional layers, which were fused with the original image to form a 14-dimensional image matrix. This matrix was then fed into three parallel pyramid multi-layer perceptron (MLP)-Mixer modules for comprehensive feature extraction through spatial fusion and channel fusion at different scales, so as to capture more extensive feature correlations. Finally, by combining all image features from the 14-channel output, classification was performed using two fully connected layers and a Softmax classifier. Extensive experiments on a total of 4099 chest X-ray images were conducted to verify the effectiveness of the proposed method. Experimental results indicate that the proposed method achieves the best performance in almost all cases, making it well suited for auxiliary diagnosis of COVID-19, with great clinical application potential.
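The spatial-fusion/channel-fusion idea behind an MLP-Mixer module alternates two mixing steps over a (patches x channels) feature matrix. A simplified single-scale sketch with plain linear maps (illustrative shapes only; the paper's pyramid modules additionally use multiple scales and MLPs with normalization):

```python
import numpy as np

# One simplified Mixer-style block. X has shape (patches, channels).
def mixer_block(X, W_token, W_channel):
    X = X + W_token @ X    # token mixing: combines information across spatial patches
    X = X + X @ W_channel  # channel mixing: combines information across feature channels
    return X

P, C = 4, 8  # toy sizes
rng = np.random.default_rng(1)
X = rng.normal(size=(P, C))
out = mixer_block(X, 0.1 * rng.normal(size=(P, P)), 0.1 * rng.normal(size=(C, C)))
print(out.shape)
```

The residual additions keep the shape fixed, so blocks can be stacked, and running the same block at several patch resolutions gives the pyramid behavior described above.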
Affiliation(s)
- Yiwen Liu: College of Information Science and Technology, Donghua University, Shanghai, People’s Republic of China
- Wenyu Xing: School of Information Science and Technology, Fudan University, Shanghai, People’s Republic of China
- Mingbo Zhao: College of Information Science and Technology, Donghua University, Shanghai, People’s Republic of China; Department of Electrical Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong, People’s Republic of China
- Mingquan Lin: Department of Electrical Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong, People’s Republic of China
9. Li G, Togo R, Ogawa T, Haseyama M. COVID-19 detection based on self-supervised transfer learning using chest X-ray images. Int J Comput Assist Radiol Surg 2023; 18:715-722. PMID: 36538184. PMCID: PMC9765379. DOI: 10.1007/s11548-022-02813-x.
Abstract
PURPOSE: Considering the large number of patients screened during the COVID-19 pandemic, computer-aided detection has strong potential to assist clinical workflow efficiency and reduce the incidence of infections among radiologists and healthcare providers. Since many confirmed COVID-19 cases present radiological findings of pneumonia, radiologic examinations can be useful for fast detection. Chest radiography can therefore be used to rapidly screen for COVID-19 during patient triage, determining the priority of patient care and helping saturated medical facilities in a pandemic situation.
METHODS: In this paper, we propose a new learning scheme called self-supervised transfer learning for detecting COVID-19 from chest X-ray (CXR) images. We compared six self-supervised learning (SSL) methods (Cross, BYOL, SimSiam, SimCLR, PIRL-jigsaw, and PIRL-rotation) with the proposed method. Additionally, we compared six pretrained DCNNs (ResNet18, ResNet50, ResNet101, CheXNet, DenseNet201, and InceptionV3) with the proposed method. We provide a quantitative evaluation on the largest open COVID-19 CXR dataset and qualitative results for visual inspection.
RESULTS: Our method achieved a harmonic mean (HM) score of 0.985, an AUC of 0.999, and a four-class accuracy of 0.953. We also used the visualization technique Grad-CAM++ to generate visual explanations for the different classes of CXR images, increasing interpretability.
CONCLUSIONS: Our method shows that the knowledge learned from natural images using transfer learning is beneficial for SSL of CXR images and boosts the performance of representation learning for COVID-19 detection. Our method promises to reduce the incidence of infections among radiologists and healthcare providers.
Affiliation(s)
- Guang Li: Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Ren Togo: Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Takahiro Ogawa: Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Miki Haseyama: Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan
10. Attallah O. RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics. Chemometrics and Intelligent Laboratory Systems 2023; 233:104750. PMID: 36619376. PMCID: PMC9807270. DOI: 10.1016/j.chemolab.2022.104750.
Abstract
Deep learning (DL) algorithms have demonstrated a high ability to perform speedy and accurate COVID-19 diagnosis utilizing computed tomography (CT) and X-ray scans. The spatial information in these images was used to train DL models in the majority of relevant studies. However, training these models with images generated by radiomics approaches could enhance diagnostic accuracy, and combining information from several radiomics approaches with time-frequency representations of the COVID-19 patterns can increase performance even further. This study introduces RADIC, an automated tool that uses three DL models trained on radiomics-generated images to detect COVID-19. First, four radiomics approaches are used to analyze the original CT and X-ray images. Next, each of the three DL models is trained on a different set of radiomics, X-ray, and CT images. Then, for each DL model, deep features are obtained and their dimensions are reduced using the fast Walsh-Hadamard transform, yielding a time-frequency representation of the COVID-19 patterns. The tool then uses the discrete cosine transform to combine these deep features, and four classification models perform the final classification. To validate the performance of RADIC, two benchmark COVID-19 datasets (CT and X-ray) are employed. The final accuracy attained using RADIC is 99.4% and 99% for the first and second datasets, respectively. To demonstrate its competitive ability, RADIC's performance is compared with related studies in the literature; the results show that RADIC achieves superior performance. These results prove that a DL model can be trained more effectively with images generated by radiomics techniques than with the original X-ray and CT images, and that incorporating deep features extracted from DL models trained with multiple radiomics approaches improves diagnostic accuracy.
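The fast Walsh-Hadamard transform used above for dimensionality reduction is an O(n log n) butterfly over a vector whose length is a power of two; keeping only low-order coefficients compresses the feature vector. A generic self-contained implementation (not the author's code):

```python
import numpy as np

# Unnormalized fast Walsh-Hadamard transform via in-place butterflies.
# len(a) must be a power of two.
def fwht(a):
    a = np.asarray(a, dtype=float).copy()
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                # One butterfly: sum and difference of paired elements.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

v = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
t = fwht(v)
# The unnormalized transform applied twice returns n times the input,
# so dividing by n recovers v exactly.
print(fwht(t) / len(v))
```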
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering & Technology, Arab Academy for Science, Technology & Maritime Transport, Alexandria, Egypt
11. Hariri M, Avşar E. COVID-19 and pneumonia diagnosis from chest X-ray images using convolutional neural networks. Network Modeling and Analysis in Health Informatics and Bioinformatics 2023; 12:17. PMID: 36938379. PMCID: PMC10010229. DOI: 10.1007/s13721-023-00413-6.
Abstract
X-ray is a useful imaging modality widely utilized for diagnosing COVID-19, which has infected a large number of people all around the world. Manual examination of these X-ray images can be problematic, especially when there is a lack of medical staff. Deep learning models are known to be helpful for automated diagnosis of COVID-19 from X-ray images; however, the widely used convolutional neural network architectures typically have many layers, making them computationally expensive. To address these problems, this study designs a lightweight differential-diagnosis model based on convolutional neural networks. The proposed model classifies X-ray images into one of four classes: healthy, COVID-19, viral pneumonia, and bacterial pneumonia. To evaluate model performance, accuracy, precision, recall, and F1-score were calculated, and the proposed model was compared with widely used convolutional neural network models fine-tuned via transfer learning. The results show that the proposed model, with its low number of computational layers, outperforms the pre-trained benchmark models, achieving an accuracy of 89.89%, while the best pre-trained model (EfficientNet-B2) achieved an accuracy of 85.7%. In conclusion, the proposed lightweight model achieved the best overall result in classifying lung diseases, allowing it to be used on devices with limited computational power. On the other hand, all models showed poor precision on the viral pneumonia class and confusion in distinguishing it from bacterial pneumonia, which lowered the overall accuracy.
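The per-class precision, recall, and F1-score reported in studies like this follow directly from confusion-matrix counts. A short sketch with made-up labels (class indices are purely illustrative, e.g. 0=healthy, 1=COVID-19, 2=viral pneumonia, 3=bacterial pneumonia):

```python
import numpy as np

def prf1(y_true, y_pred, cls):
    """Precision, recall, and F1 for one class, from TP/FP/FN counts."""
    tp = np.sum((y_pred == cls) & (y_true == cls))  # true positives
    fp = np.sum((y_pred == cls) & (y_true != cls))  # false positives
    fn = np.sum((y_pred != cls) & (y_true == cls))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = np.array([0, 1, 2, 2, 1, 0, 3, 3])  # invented ground truth
y_pred = np.array([0, 1, 2, 1, 1, 0, 3, 2])  # invented predictions
p, r, f = prf1(y_true, y_pred, cls=1)
print(p, r, f)
```

Low precision on one class with high recall, as reported above for viral pneumonia, shows up here as a large FP count from a confusable neighboring class.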
Affiliation(s)
- Muhab Hariri: Electrical and Electronics Engineering Department, Çukurova University, 01330 Adana, Turkey
- Ercan Avşar: National Institute of Aquatic Resources, Technical University of Denmark, 9850 Hirtshals, Denmark; Computer Engineering Department, Dokuz Eylül University, 35390 İzmir, Turkey
12. Moujahid H, Cherradi B, El Gannour O, Nagmeldin W, Abdelmaboud A, Al-Sarem M, Bahatti L, Saeed F, Hadwan M. A Novel Explainable CNN Model for Screening COVID-19 on X-ray Images. Computer Systems Science and Engineering 2023; 46:1789-1809. DOI: 10.32604/csse.2023.034022.
13. Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN Computer Science 2022; 4:65. PMID: 36467853. PMCID: PMC9702883. DOI: 10.1007/s42979-022-01464-8.
Abstract
The lung, one of the most important organs in the human body, is often affected by SARS-type diseases, among which COVID-19 has been found to be the most fatal in recent times. SARS-CoV-2 caused a pandemic that spread rapidly through communities, causing respiratory problems. In such a situation, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been performed for rapid, non-invasive screening of the disease. Due to the scarcity of physicians and chest specialists, technology-enabled disease-screening techniques have been developed by several researchers with the help of artificial intelligence and machine learning (AI/ML). Researchers have introduced several AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review has been conducted to summarize work on the application of AI/ML/DL to diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected out of 1715 published up to the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, various datasets, and their results using X-ray and CT imaging. A detailed discussion is provided on the novelty of the published works, along with their advantages and limitations.
Affiliation(s)
- Asifuzzaman Lasker: Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Sk Md Obaidullah: Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Chandan Chakraborty: Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
- Kaushik Roy: Department of Computer Science, West Bengal State University, Barasat, India
14
|
Lanjewar MG, Shaikh AY, Parab J. Cloud-based COVID-19 disease prediction system from X-Ray images using convolutional neural network on smartphone. Multimedia Tools and Applications 2022; 82:1-30. [PMID: 36467434] [PMCID: PMC9684956] [DOI: 10.1007/s11042-022-14232-w]
Abstract
COVID-19 has engulfed over 200 nations through direct or indirect human-to-human transmission. Reverse transcription-polymerase chain reaction (RT-PCR) has been endorsed as the standard COVID-19 diagnostic procedure but has caveats such as low sensitivity, the need for a skilled workforce, and long turnaround times. Coronavirus infection manifests significantly in chest X-ray (CX-Ray) images, which can therefore be a viable option for an alternative COVID-19 diagnostic strategy. An automatic COVID-19 detection system can be developed to detect the disease, reducing strain on the healthcare system. This paper discusses a real-time convolutional neural network (CNN)-based system for COVID-19 prediction from CX-Ray images on the cloud. The implemented CNN model displays exemplary results, with training accuracy of 99.94% and validation accuracy of 98.81%. The confusion matrix was used to assess the model's outcome, achieving 99% precision, 98% recall, 99% F1 score, 100% training area under the curve (AUC), and 98.3% validation AUC. The same CX-Ray dataset was also used to predict COVID-19 with deep convolutional neural networks (DCNNs) such as ResNet50, VGG19, InceptionV3, and Xception; the present CNN proved more capable than the DCNN models. The efficient CNN model was deployed to a Platform as a Service (PaaS) cloud.
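The entry above reports precision, recall, and F1 derived from a confusion matrix. As a quick illustration of how those metrics fall out of per-class counts (the class names and counts below are made up for illustration, not the paper's data), a minimal sketch:

```python
# Per-class precision, recall, and F1 from a multi-class confusion matrix.
# Rows = true class, columns = predicted class. Counts are illustrative.
def per_class_metrics(cm, labels):
    metrics = {}
    n = len(labels)
    for i, label in enumerate(labels):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp  # predicted i, actually other
        fn = sum(cm[i][c] for c in range(n)) - tp  # actually i, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[label] = (precision, recall, f1)
    return metrics

cm = [[90, 5, 5],   # true COVID-19
      [4, 92, 4],   # true normal
      [6, 3, 91]]   # true pneumonia
scores = per_class_metrics(cm, ["covid", "normal", "pneumonia"])
```

For this illustrative matrix the COVID class works out to 0.9 precision and 0.9 recall, since 90 of 100 predicted-COVID images and 90 of 100 true-COVID images are correct.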
Affiliation(s)
- Madhusudan G. Lanjewar
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Arman Yusuf Shaikh
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Jivan Parab
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
15
Deep Learning-Based Networks for Detecting Anomalies in Chest X-Rays. BioMed Research International 2022; 2022:7833516. [PMID: 35915789] [PMCID: PMC9338857] [DOI: 10.1155/2022/7833516]
Abstract
X-ray images aid medical professionals in the diagnosis and detection of pathologies. They are critical, for example, in the diagnosis of pneumonia, the detection of masses, and, more recently, the detection of COVID-19-related conditions. The chest X-ray is one of the first imaging tests performed when pathology is suspected because it is one of the most accessible radiological examinations. Deep learning-based neural networks, particularly convolutional neural networks, have exploded in popularity in recent years and have become indispensable tools for image classification. Transfer learning approaches, in particular, have enabled the reuse of previously trained networks' knowledge, eliminating the need for large data sets and lowering the high computational costs associated with this type of network. This research focuses on using deep learning-based neural networks to detect anomalies in chest X-rays. Different convolutional network-based approaches are investigated using the ChestX-ray14 database, which contains over 100,000 X-ray images labeled with 14 different pathologies, and different classification objectives are evaluated. Starting from the pretrained networks VGG19, ResNet50, and InceptionV3, transfer learning-based networks are implemented with different schemes for the classification stage and data augmentation. An ad hoc architecture is also proposed and evaluated without transfer learning for the classification objective with more examples. The results show that transfer learning produces acceptable results in most of the tested cases, indicating that it is a viable first step for using deep networks when there are not enough labeled images, a common problem when working with medical images. The ad hoc network, on the other hand, demonstrated good generalization with data augmentation and acceptable accuracy. The findings suggest that convolutional neural networks, with or without transfer learning, are a viable approach to designing classifiers for detecting pathologies in chest X-rays.
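Transfer learning as used in the study above keeps a pretrained feature extractor frozen and trains only a new classification head. A framework-free sketch of that split, with a toy stand-in for the backbone (the extractor, data, and labels below are illustrative assumptions, not the paper's networks or images):

```python
import math

# Stand-in for a frozen, pretrained backbone (in the paper's setting this
# would be e.g. VGG19 or ResNet50 convolutional features). Only the head
# below is trained; the backbone is never updated.
def frozen_features(x):
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_head(data, labels, lr=0.5, epochs=200):
    """Train a logistic-regression head on frozen backbone features."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = frozen_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    z = sum(wi * fi for wi, fi in zip(w, frozen_features(x))) + b
    return 1 if z > 0 else 0

# Toy two-class data standing in for "pathology vs. normal" feature patterns.
data = [(1.0, 1.0), (2.0, 0.5), (-1.0, -1.0), (-2.0, -0.5)]
labels = [1, 1, 0, 0]
w, b = train_head(data, labels)
```

Because only the small head is optimized, far fewer labeled examples are needed than when training the whole network, which is the point the review makes about medical imaging.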
16
Mishra L, Verma S. Graph Attention Autoencoder Inspired CNN based Brain Tumor Classification using MRI. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.107]
17
Chetoui M, Akhloufi MA. Explainable Vision Transformers and Radiomics for COVID-19 Detection in Chest X-rays. J Clin Med 2022; 11:jcm11113013. [PMID: 35683400] [PMCID: PMC9181325] [DOI: 10.3390/jcm11113013]
Abstract
The rapid spread of COVID-19 across the globe since its emergence has pushed many countries' healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing burden on the healthcare system, it is critical to identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turnaround time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-rays (CXR) for COVID-19 detection. However, existing deep convolutional neural network (CNN) methods fail to capture the global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in CXR images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, pneumonia, and normal cases). A dataset consisting of 7598 COVID-19, 8552 healthy, and 5674 pneumonia CXR images was used. The obtained results achieved high performance, with an area under the curve (AUC) of 0.99 for multi-class classification (COVID-19 vs. other pneumonia vs. normal) and a sensitivity of 0.99 for the COVID-19 class. We demonstrated that these results outperformed comparable state-of-the-art CNN architectures for detecting COVID-19 on CXR images. The attention map of the proposed model showed that it efficiently identifies the signs of COVID-19.
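The first step of any vision transformer is to cut the image into fixed-size patches and flatten each one into a token vector, which is what lets the model attend globally rather than through a CNN's local inductive bias. A minimal sketch of that patching step (image and patch sizes are illustrative, not the paper's):

```python
def image_to_patches(img, patch):
    """Split a 2D image (list of rows) into flattened patch vectors,
    row-major over the patch grid; image dims must be divisible by patch."""
    h, w = len(img), len(img[0])
    tokens = []
    for pr in range(0, h, patch):
        for pc in range(0, w, patch):
            tokens.append([img[r][c]
                           for r in range(pr, pr + patch)
                           for c in range(pc, pc + patch)])
    return tokens

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
tokens = image_to_patches(img, 2)  # four 2x2 patches, each flattened
```

In a real ViT each token would then be linearly projected and combined with a position embedding before entering the transformer encoder.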
18
Meng J, Tan Z, Yu Y, Wang P, Liu S. TL-Med: A two-stage transfer learning recognition model for medical images of COVID-19. Biocybern Biomed Eng 2022; 42:842-855. [PMID: 35506115] [PMCID: PMC9051950] [DOI: 10.1016/j.bbe.2022.04.005]
Abstract
The recognition of medical images with deep learning techniques can assist physicians in clinical diagnosis, but the effectiveness of recognition models relies on massive amounts of labeled data. With the rampant spread of the novel coronavirus (COVID-19) worldwide, rapid COVID-19 diagnosis has become an effective measure to combat the outbreak. However, labeled COVID-19 data are scarce. Therefore, we propose a two-stage transfer learning recognition model for medical images of COVID-19 (TL-Med) based on the concept of “generic domain → target-related domain → target domain”. First, we use a Vision Transformer (ViT) pretraining model to obtain generic features from massive heterogeneous data and then learn medical features from large-scale homogeneous data. Two-stage transfer learning uses the learned primary features and the underlying information for COVID-19 image recognition, addressing the problem that insufficient data prevents the model from learning the underlying information of the target dataset. Experiments on a COVID-19 dataset show that the TL-Med model achieves a recognition accuracy of 93.24%, indicating that the proposed method is more effective at detecting COVID-19 images than other approaches and may greatly alleviate the problem of data scarcity in this field.
19
Shanbehzadeh M, Nopour R, Kazemi-Arpanahi H. Developing an artificial neural network for detecting COVID-19 disease. Journal of Education and Health Promotion 2022; 11:2. [PMID: 35281397] [PMCID: PMC8893090] [DOI: 10.4103/jehp.jehp_387_21]
Abstract
BACKGROUND From December 2019, atypical pneumonia termed COVID-19 has been increasing exponentially across the world, posing a great threat and challenge to world health and the economy. Medical specialists face uncertainty in making judgment-based decisions for COVID-19. Thus, this study aimed to establish an intelligent model based on artificial neural networks (ANNs) for diagnosing COVID-19. MATERIALS AND METHODS Using a single-center registry, we studied the records of 250 confirmed COVID-19 and 150 negative cases from February 9, 2020, to October 20, 2020. The correlation coefficient technique was used to determine the most significant variables for the ANN model; variables with P < 0.05 were used for model construction. We applied the back-propagation technique for training a neural network on the dataset. After comparing different neural network configurations, the best ANN configuration was acquired and its strength evaluated. RESULTS After the feature selection process, a total of 18 variables were determined as the most relevant predictors for developing the ANN models. The results indicated that a 9-10-15-2 architecture with two hidden layers (10 and 15 neurons in layers 1 and 2, respectively), with an area under the curve of 0.982, sensitivity of 96.4%, specificity of 90.6%, and accuracy of 94%, was the best configuration for COVID-19 diagnosis. CONCLUSION The proposed ANN-based clinical decision support system could be a suitable computational technique for frontline practitioners in early detection, effective intervention, and possibly reduction of mortality in patients with COVID-19.
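The feature selection step described above ranks candidate variables by their correlation with the outcome before the ANN is built. A sketch of Pearson-correlation-based selection, simplified to threshold |r| directly rather than computing a P value (the variable names and data below are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(columns, target, min_abs_r=0.5):
    """Keep variables whose |correlation| with the target clears a threshold."""
    return [name for name, col in columns.items()
            if abs(pearson(col, target)) >= min_abs_r]

# Toy registry: binary outcome plus two candidate predictors.
target = [0, 0, 1, 1, 1, 0]
columns = {
    "fever": [0, 0, 1, 1, 1, 0],        # perfectly correlated with outcome
    "age":   [30, 41, 35, 29, 44, 38],  # essentially unrelated here
}
kept = select_features(columns, target)
```

A real significance test would turn r and the sample size into a P value; the |r| cutoff here just makes the filtering logic visible.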
Affiliation(s)
- Mostafa Shanbehzadeh
- Department of Health Information Technology, School of Paramedical, Ilam University of Medical Sciences, Ilam, Iran
- Raoof Nopour
- Department of Health Information Management, Student Research Committee, School of Health Management and Information Sciences Branch, Iran University of Medical Sciences, Tehran, Iran
- Hadi Kazemi-Arpanahi
- Department of Health Information Technology, Abadan University of Medical Sciences, Abadan, Iran
- Department of Student Research Committee, Abadan University of Medical Sciences, Abadan, Iran
20
Loey M, El-Sappagh S, Mirjalili S. Bayesian-based optimized deep learning model to detect COVID-19 patients using chest X-ray image data. Comput Biol Med 2022; 142:105213. [PMID: 35026573] [PMCID: PMC8730711] [DOI: 10.1016/j.compbiomed.2022.105213]
Abstract
Coronavirus Disease 2019 (COVID-19) is extremely infectious and rapidly spreading around the globe, so rapid and precise identification of COVID-19 patients is critical. Deep learning has shown promising performance in a variety of domains and has emerged as a key technology in artificial intelligence. Recent advances in visual recognition are based on image classification and artefact detection within these images. The purpose of this study is to classify chest X-ray images of COVID-19 artefacts under changed real-world conditions. A novel Bayesian optimization-based convolutional neural network (CNN) model is proposed for the recognition of chest X-ray images. The proposed model has two main components. The first utilizes a CNN to extract and learn deep features. The second is a Bayesian-based optimizer that tunes the CNN hyperparameters according to an objective function. The large-scale, balanced dataset used comprises 10,848 images (3616 COVID-19, 3616 normal, and 3616 pneumonia cases). In the first ablation investigation, Bayesian optimization was compared to three distinct ablation scenarios using convergence charts and accuracy; the Bayesian search-derived optimal architecture achieved 96% accuracy. A comparison of research methods and theme-analysis methods was also provided to help qualitative researchers address their research questions in a methodologically sound manner. The suggested model is shown to be more trustworthy and accurate in real-world settings.
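Bayesian optimization treats hyperparameter tuning as sequential optimization of a validation objective. The sketch below is only a loose stand-in: it shows the propose-evaluate-keep-best loop with random proposals rather than a Gaussian-process surrogate, and the objective is a synthetic function, not a trained CNN (all names, ranges, and the optimum are assumptions for illustration):

```python
import random

# Stand-in for "train the CNN with these hyperparameters and return the
# validation error": a synthetic bowl with its optimum at lr=0.01,
# dropout=0.3. Purely illustrative.
def validation_error(lr, dropout):
    return (lr - 0.01) ** 2 * 1e4 + (dropout - 0.3) ** 2

def tune(n_trials=50, seed=0):
    """Sequentially propose configurations and keep the best one found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)   # log-uniform learning rate
        dropout = rng.uniform(0.0, 0.6)
        err = validation_error(lr, dropout)
        if best is None or err < best[0]:
            best = (err, lr, dropout)
    return best

best_err, best_lr, best_dropout = tune()
```

A true Bayesian optimizer would fit a probabilistic surrogate to the (configuration, error) pairs seen so far and pick the next trial by maximizing an acquisition function such as expected improvement, rather than sampling blindly.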
Affiliation(s)
- Mohamed Loey
- Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha, 13518, Egypt; Information Technology Program, New Cairo Technological University, New Cairo, Egypt; Computer Engineering Department, Cybersecurity Department, Engineering and Information Technology College, Buraydah Colleges, Buraydah, Al-Qassim, Saudi Arabia
- Shaker El-Sappagh
- Department of Information Systems, Faculty of Computers and Artificial Intelligence, Benha University, Benha, 13518, Egypt; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Seyedali Mirjalili
- Center for Artificial Intelligence Research and Optimization, Torrens University Australia, Fortitude Valley, Brisbane, QLD, 4006, Australia; Yonsei Frontier Lab, Yonsei University, Seoul, South Korea
21
Cengil E, Çınar A. The effect of deep feature concatenation in the classification problem: An approach on COVID-19 disease detection. International Journal of Imaging Systems and Technology 2022; 32:26-40. [PMID: 34898851] [PMCID: PMC8653237] [DOI: 10.1002/ima.22659]
Abstract
In image classification applications, the most important step is obtaining useful features. Convolutional neural networks automatically learn features during training, and classification is carried out with the obtained features; obtaining successful features is therefore critical to achieving high classification accuracy. This article focuses on providing effective features to enhance classification performance, building on the concatenation of features from multiple networks. First, features are extracted via transfer learning from the AlexNet, Xception, NASNetLarge, and EfficientNet-B0 architectures, which are known to perform well on classification problems. Concatenating these features creates a new feature set, which is then passed to various classification algorithms. The proposed pipeline is applied to three datasets for COVID-19 disease detection: "COVID-19 Image Dataset," "COVID-19 Pneumonia Normal Chest X-ray (PA) Dataset," and "COVID-19 Radiography Database." All datasets contain three classes (normal, COVID, and pneumonia). The best classification accuracies for the three datasets are 98.8%, 95.9%, and 99.6%, respectively; sensitivity, precision, specificity, and F1-score values are reported as well. The paper's contribution is as follows: COVID-19 disease resembles other lung infections, which makes diagnosis difficult, and the virus's rapid spread necessitates detecting cases as soon as possible. This has increased interest in computer-aided deep learning models that meet these requirements; the proposed method is beneficial because it provides high accuracy.
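The core idea above is to concatenate feature vectors from several backbones and hand the joined vector to a classical classifier. A toy sketch with invented "backbones" and a nearest-centroid classifier standing in for the classification algorithms the paper evaluates (everything below is illustrative, not the paper's networks or data):

```python
from collections import defaultdict

# Two stand-in "backbones" producing complementary feature views of an input.
def backbone_a(x):
    return [sum(x), max(x)]

def backbone_b(x):
    return [min(x), x[0] - x[-1]]

def concat_features(x):
    # The concatenation step: one joined feature set from both extractors.
    return backbone_a(x) + backbone_b(x)

def nearest_centroid(train, labels, query):
    """Mean concatenated-feature vector per class; closest centroid wins."""
    groups = defaultdict(list)
    for x, y in zip(train, labels):
        groups[y].append(concat_features(x))
    centroids = {y: [sum(col) / len(fs) for col in zip(*fs)]
                 for y, fs in groups.items()}
    q = concat_features(query)
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(q, centroids[y])))

train = [[1, 2, 3], [2, 2, 4], [9, 8, 7], [8, 9, 9]]
labels = ["normal", "normal", "covid", "covid"]
```

The benefit the paper measures comes from the joined vector carrying views no single extractor provides on its own; any downstream classifier (SVM, k-NN, etc.) can consume it.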
Affiliation(s)
- Emine Cengil
- Department of Computer Engineering, Faculty of Engineering, Firat University, Elazig, Turkey
- Ahmet Çınar
- Department of Computer Engineering, Faculty of Engineering, Firat University, Elazig, Turkey
22
Shome D, Kar T, Mohanty SN, Tiwari P, Muhammad K, AlTameem A, Zhang Y, Saudagar AKJ. COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare. International Journal of Environmental Research and Public Health 2021; 18:11086. [PMID: 34769600] [PMCID: PMC8583247] [DOI: 10.3390/ijerph182111086]
Abstract
In the recent pandemic, accurate and rapid testing of patients remained a critical task in the diagnosis and control of COVID-19 spread in the healthcare industry. Because of the sudden increase in cases, most countries have faced scarcity and a low rate of testing. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we proposed a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Due to the lack of large data sets, we collected data from three open-source data sets of chest X-ray images and aggregated them to form a 30K-image data set, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task, and distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned some widely used models from the literature, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121, as baselines; our proposed transformer model outperformed them on all metrics. In addition, a Grad-CAM-based visualization makes our approach interpretable by radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare.
Affiliation(s)
- Debaditya Shome
- School of Electronics Engineering, KIIT Deemed to be University, Odisha 751024, India
- T. Kar
- School of Electronics Engineering, KIIT Deemed to be University, Odisha 751024, India
- Sachi Nandan Mohanty
- Department of Computer Science & Engineering, Vardhaman College of Engineering (Autonomous), Hyderabad 501218, India
- Prayag Tiwari
- Department of Computer Science, Aalto University, 02150 Espoo, Finland
- Khan Muhammad
- Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Korea
- Abdullah AlTameem
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Yazhou Zhang
- Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou 450001, China
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
23
López-Cabrera JD, Orozco-Morales R, Portal-Díaz JA, Lovelle-Enríquez O, Pérez-Díaz M. Current limitations to identify COVID-19 using artificial intelligence with chest X-ray imaging (part II). The shortcut learning problem. Health and Technology 2021; 11:1331-1345. [PMID: 34660166] [PMCID: PMC8502237] [DOI: 10.1007/s12553-021-00609-8]
Abstract
Since the outbreak of the COVID-19 pandemic, computer vision researchers have been working on automatic identification of this disease using radiological images. The results achieved by automatic classification methods far exceed those of human specialists, with sensitivity as high as 100% being reported. However, prestigious radiology societies have stated that the use of this type of imaging alone is not recommended as a diagnostic method, because according to some experts the patterns presented in these images are unspecific and subtle, overlapping with other viral pneumonias. This report evaluates the robustness and generalizability of different approaches using artificial intelligence, deep learning, and computer vision to identify COVID-19 from chest X-ray images. We also seek to alert researchers and reviewers to the issue of "shortcut learning", and present recommendations for identifying whether COVID-19 automatic classification models are affected by it. First, papers using explainable artificial intelligence methods are reviewed. The results of applying external validation sets are then evaluated to determine the generalizability of these methods. Finally, studies that apply traditional computer vision methods to the same task are considered. It is evident that, whether using the whole chest X-ray image or a bounding box of the lungs, the image regions contributing most to classification often lie outside the lung region, which is clinically implausible. In addition, in investigations that evaluated models on data sets external to the training set, effectiveness decreased significantly, even though such evaluation provides a more realistic picture of how a model will perform in the clinic. The results indicate that, so far, existing models often rely on shortcut learning, which makes their use less appropriate in the clinical setting.
Affiliation(s)
- José Daniel López-Cabrera
- Centro de Investigaciones de La Informática, Facultad de Matemática, Física y Computación, Universidad Central “Marta Abreu” de Las Villas, Villa Clara, Santa Clara, Cuba
- Rubén Orozco-Morales
- Departamento de Control Automático, Facultad de Ingeniería Eléctrica, Universidad Central “Marta Abreu” de Las Villas, Villa Clara, Santa Clara, Cuba
- Jorge Armando Portal-Díaz
- Departamento de Control Automático, Facultad de Ingeniería Eléctrica, Universidad Central “Marta Abreu” de Las Villas, Villa Clara, Santa Clara, Cuba
- Orlando Lovelle-Enríquez
- Departamento de Imagenología, Hospital Comandante Manuel Fajardo Rivero, Villa Clara, Santa Clara, Cuba
- Marlén Pérez-Díaz
- Departamento de Control Automático, Facultad de Ingeniería Eléctrica, Universidad Central “Marta Abreu” de Las Villas, Villa Clara, Santa Clara, Cuba
24
Murugan R, Goel T, Mirjalili S, Chakrabartty DK. WOANet: Whale optimized deep neural network for the classification of COVID-19 from radiography images. Biocybern Biomed Eng 2021; 41:1702-1718. [PMID: 34720309] [PMCID: PMC8536521] [DOI: 10.1016/j.bbe.2021.10.004]
Abstract
Coronavirus Disease (COVID-19) is a new disease that was declared a global pandemic in 2020. It is characterized by a constellation of symptoms such as fever, dry cough, dyspnea, fatigue, and chest pain. Clinical findings have shown that chest computed tomography (CT) images can diagnose lung infection in most COVID-19 patients. Visual changes in CT scans due to COVID-19 are subjective and are evaluated by radiologists for diagnostic purposes. Deep learning (DL) can provide an automatic diagnostic tool to relieve radiologists' burden in the quantitative analysis of CT scan images. However, DL techniques face training problems such as mode collapse and instability. Choosing training hyperparameters to adjust the weights and biases of DL networks for a given CT image dataset is crucial for achieving the best accuracy. This paper combines the backpropagation algorithm and the Whale Optimization Algorithm (WOA) to optimize such DL networks. Experimental results for the diagnosis of COVID-19 patients on a comprehensive COVID-CT scan dataset show the best performance compared with other recent methods. The proposed network architecture's results were validated against existing pretrained networks to demonstrate the network's efficiency.
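WOA's central "encircling prey" rule moves each candidate toward the current best solution via X(t+1) = X* - A·|C·X* - X|, with the coefficient A shrinking as iterations progress. A minimal one-dimensional sketch minimizing a toy function (the full algorithm also includes a spiral update and random exploration, both omitted here; population size, iteration count, and the objective are illustrative assumptions):

```python
import random

def woa_minimize(f, lo, hi, n_whales=20, iters=60, seed=1):
    """Simplified WOA: only the encircling-prey update toward the best whale."""
    rng = random.Random(seed)
    whales = [rng.uniform(lo, hi) for _ in range(n_whales)]
    best = min(whales, key=f)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)              # shrinks linearly from 2 to 0
        for i, x in enumerate(whales):
            A = 2 * a * rng.random() - a       # A in [-a, a]
            C = 2 * rng.random()               # C in [0, 2]
            d = abs(C * best - x)              # distance to the "prey"
            x_new = best - A * d               # encircling-prey update
            whales[i] = min(max(x_new, lo), hi)  # clamp to the search range
        cand = min(whales, key=f)
        if f(cand) < f(best):
            best = cand                        # best is monotonically improved
    return best

# Toy objective standing in for "validation loss of the network".
best = woa_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the paper's hybrid scheme, a step like this would propose hyperparameters while backpropagation handles the network weights themselves.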
Affiliation(s)
- R Murugan
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Assam 788010, India
- Tripti Goel
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Assam 788010, India
- Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Fortitude Valley, Brisbane, QLD 4006, Australia
- Yonsei Frontier Lab, Yonsei University, Seoul, South Korea
25
Taresh MM, Zhu N, Ali TAA, Alghaili M, Hameed AS, Mutar ML. KL-MOB: automated COVID-19 recognition using a novel approach based on image enhancement and a modified MobileNet CNN. PeerJ Comput Sci 2021; 7:e694. [PMID: 34616885] [PMCID: PMC8459788] [DOI: 10.7717/peerj-cs.694]
Abstract
The emergence of the novel coronavirus pneumonia (COVID-19) pandemic at the end of 2019 led to worldwide chaos. The world breathed a sigh of relief when a few countries announced the development of vaccines and gradually began to distribute them, but the emergence of another wave of the pandemic returned us to the starting point. At present, early detection of infected people is the paramount concern of both specialists and health researchers. This paper proposes a method to detect infected patients from chest X-ray images using the large dataset available online for COVID-19 (COVIDx), which consists of 2,128 X-ray images of COVID-19 cases, 8,066 normal cases, and 5,575 cases of pneumonia. A hybrid algorithm is applied to improve image quality before neural network training; it combines two different noise-reduction filters, followed by a contrast enhancement algorithm. To detect COVID-19, we propose a novel convolutional neural network (CNN) architecture called KL-MOB (a COVID-19 detection network based on the MobileNet structure). The performance of KL-MOB is boosted by adding the Kullback-Leibler (KL) divergence loss function when training from scratch. The KL divergence loss function is adopted for content-based image retrieval and fine-grained classification to improve the quality of image representation. The results are impressive: overall benchmark accuracy, sensitivity, specificity, and precision are 98.7%, 98.32%, 98.82%, and 98.37%, respectively. These promising results should help other researchers develop innovative methods to aid specialists, and the proposed method's tremendous potential can also be used to detect COVID-19 quickly and safely in patients throughout the world.
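The KL divergence term added to KL-MOB's loss measures how far the predicted class distribution is from the target distribution. A minimal sketch for discrete distributions (the example vectors below are illustrative, not the paper's outputs):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as probability lists.
    eps guards against log(0) for zero-probability entries."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Example: one-hot target label distribution vs. two model predictions.
target = [1.0, 0.0, 0.0]
pred_good = [0.9, 0.05, 0.05]   # confident and correct: small divergence
pred_bad = [0.2, 0.4, 0.4]      # spread over wrong classes: large divergence
```

KL divergence is zero when the two distributions match and grows as the prediction drifts from the target, so adding it to the loss pushes the network toward well-calibrated class probabilities.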
Affiliation(s)
- Ningbo Zhu
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Talal Ahmed Ali Ali
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Mohammed Alghaili
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Asaad Shakir Hameed
- Department of Mathematics, General Directorate of Thi-Qar Education, Ministry of Education, Thi-Qar, Iraq
- Modhi Lafta Mutar
- Department of Mathematics, General Directorate of Thi-Qar Education, Ministry of Education, Thi-Qar, Iraq
26
Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65:498-517. [PMID: 34231311] [DOI: 10.1111/1754-9485.13273]
Abstract
Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using CXRs. This is important given the widespread use of CXRs across the world in diagnosing significant pathologies, and the lack of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis including basic deep neural network (DNN) structure, the use of transfer learning and the application of data augmentation. It then reviews the current literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis and pneumothorax) over the last few years. This includes DL approaches employed for the classification of multiple different diseases (multi-class classification). Performance of different techniques and models and their comparison with human observers are presented. Some of the challenges facing DNN models, including their future implementation and relationships to radiologists, are also discussed.
Affiliation(s)
- Daniel A Moses
- Graduate School of Biomedical Engineering, Faculty of Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Department of Medical Imaging, Prince of Wales Hospital, Sydney, New South Wales, Australia
27
Liver disease classification from ultrasound using multi-scale CNN. Int J Comput Assist Radiol Surg 2021; 16:1537-1548. [PMID: 34097226] [DOI: 10.1007/s11548-021-02414-0]
Abstract
PURPOSE Ultrasound (US) is the preferred modality for fatty liver disease diagnosis due to its noninvasive, real-time, and cost-effective imaging capabilities. However, traditional B-mode US is qualitative, and the assessment is therefore highly subjective. Computer-aided diagnostic tools can improve the specificity and sensitivity of US and help clinicians perform uniform diagnoses. METHODS In this work, we propose a novel deep learning model for nonalcoholic fatty liver disease classification from US data. We design a multi-feature guided multi-scale residual convolutional neural network (CNN) architecture to capture features of different receptive fields. B-mode US images are combined with their corresponding local phase filtered images and radial symmetry transformed images as multi-feature inputs for the network. Various fusion strategies are studied to improve prediction accuracy. We evaluate the designed network architectures on B-mode in vivo liver US images collected from 55 subjects. We also provide quantitative results by comparing our proposed multi-feature CNN architecture against traditional CNN designs and machine learning methods. RESULTS Quantitative results show an average classification accuracy above 90% over tenfold cross-validation. Our proposed method achieves a 97.8% area under the ROC curve (AUC) for the patient-specific leave-one-out cross-validation (LOOCV) evaluation. Comprehensive validation results further demonstrate that our proposed approaches achieve significant improvements compared to training mono-feature CNN architectures ([Formula: see text]). CONCLUSIONS Feature combination is valuable for traditional classification methods, and the use of multi-scale CNN can improve liver classification accuracy. Based on the promising performance, the proposed method has potential in practical applications to help radiologists diagnose nonalcoholic fatty liver disease.
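The abstract reports AUC under patient-specific leave-one-out cross-validation, where each held-out subject contributes one score. As a minimal sketch, AUC can be computed directly from such per-subject scores via its Mann-Whitney rank interpretation; the scores below are synthetic illustrations, not the paper's data or code.

```python
# Minimal sketch: ROC AUC from per-subject scores via the Mann-Whitney
# equivalence. AUC = P(score of a random positive > score of a random
# negative), with ties counted as 1/2. Synthetic scores for illustration.

def roc_auc(labels, scores):
    """Pairwise-comparison AUC over binary labels (1 = positive)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Leave-one-out style evaluation: one score per held-out subject.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # hypothetical classifier outputs
print(roc_auc(labels, scores))  # 8 of 9 positive/negative pairs ranked correctly
```

The quadratic pairwise loop is fine at LOOCV scale (tens of subjects); larger evaluations would sort once and use ranks instead.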
28
Chaddad A, Hassan L, Desrosiers C. Deep CNN models for predicting COVID-19 in CT and x-ray images. J Med Imaging (Bellingham) 2021; 8:014502. [PMID: 33912622 PMCID: PMC8071782 DOI: 10.1117/1.jmi.8.s1.014502] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 03/26/2021] [Indexed: 01/12/2023] Open
Abstract
Purpose: Coronavirus disease 2019 (COVID-19) is a new infection that has spread worldwide, with no automatic model available to reliably detect its presence from images. We aim to investigate the potential of deep transfer learning to predict COVID-19 infection using chest computed tomography (CT) and x-ray images. Approach: Regions of interest (ROI) corresponding to ground-glass opacities (GGO), consolidations, and pleural effusions were labeled in 100 axial lung CT images from 60 COVID-19-infected subjects. These segmented regions were then employed as an additional input to six deep convolutional neural network (CNN) architectures (AlexNet, DenseNet, GoogleNet, NASNet-Mobile, ResNet18, and DarkNet), pretrained on natural images, to differentiate between COVID-19 and normal CT images. We also explored the models' ability to classify x-ray images as COVID-19, non-COVID-19 pneumonia, or normal. Performance on test images was measured with global accuracy and area under the receiver operating characteristic curve (AUC). Results: When using raw CT images as input to the tested models, the highest accuracy of 82% and AUC of 88.16% were achieved. Incorporating the three ROIs as additional model inputs further boosts performance to an accuracy of 82.30% and an AUC of 90.10% (DarkNet). For x-ray images, we obtained an outstanding AUC of 97% for classifying COVID-19 versus normal versus other. Combining chest CT and x-ray images, the DarkNet architecture achieves the highest accuracy of 99.09% and AUC of 99.89% in classifying COVID-19 from non-COVID-19. Our results confirm the ability of deep CNNs with transfer learning to predict COVID-19 in both chest CT and x-ray images. Conclusions: The proposed method could help radiologists increase the accuracy of their diagnosis and increase efficiency in COVID-19 management.
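The best result above comes from combining chest CT and x-ray images. The abstract does not specify the fusion rule, so the sketch below shows one common late-fusion assumption: average the per-class probabilities from a CT-trained and an x-ray-trained model, then take the argmax. The class names, weights, and probability vectors are illustrative only.

```python
# Hypothetical late-fusion sketch: weighted average of per-class probability
# vectors from two modality-specific CNNs, then argmax. The paper does not
# state its fusion rule; this is an illustrative assumption, not its method.

CLASSES = ["COVID-19", "non-COVID-19 pneumonia", "normal"]

def fuse(ct_probs, xray_probs, w_ct=0.5):
    """Weighted average of two per-class probability vectors."""
    return [w_ct * c + (1 - w_ct) * x for c, x in zip(ct_probs, xray_probs)]

def predict(ct_probs, xray_probs):
    """Fused class label: argmax over the averaged probabilities."""
    fused = fuse(ct_probs, xray_probs)
    return CLASSES[max(range(len(fused)), key=fused.__getitem__)]

# One hypothetical case: CT leans COVID-19, x-ray is less certain.
ct = [0.70, 0.20, 0.10]
xr = [0.40, 0.35, 0.25]
print(predict(ct, xr))  # fused probabilities: [0.55, 0.275, 0.175]
```

With equal weights the fused vector remains a valid probability distribution, since it is a convex combination of two distributions.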
Affiliation(s)
- Ahmad Chaddad
- Guilin University of Electronic Technology, School of Artificial Intelligence, Guilin, China
- Lama Hassan
- Guilin University of Electronic Technology, School of Artificial Intelligence, Guilin, China