1.
Moitra M, Alafeef M, Narasimhan A, Kakaria V, Moitra P, Pan D. Diagnosis of COVID-19 with simultaneous accurate prediction of cardiac abnormalities from chest computed tomographic images. PLoS One 2023;18:e0290494. PMID: 38096254; PMCID: PMC10721010; DOI: 10.1371/journal.pone.0290494.
Abstract
COVID-19 has potential consequences for the pulmonary and cardiovascular health of millions of infected people worldwide. Chest computed tomographic (CT) imaging has remained the first line of diagnosis for individuals infected with SARS-CoV-2. However, differentiating COVID-19 from other types of pneumonia and predicting associated cardiovascular complications from the same chest-CT images have remained challenging. In this study, we first used a transfer learning method to distinguish COVID-19 from other pneumonia and healthy cases with 99.2% accuracy. Next, we developed another CNN-based deep learning approach to automatically predict the risk of cardiovascular disease (CVD) in COVID-19 patients compared with normal subjects with 97.97% accuracy. Our model was further validated against cardiac CT-based markers, including the cardiothoracic ratio (CTR), the pulmonary artery to aorta ratio (PA/A), and the presence of calcified plaque. Thus, we demonstrate that CT-based deep learning algorithms can be employed as a dual screening diagnostic tool to diagnose COVID-19, differentiate it from other pneumonia, and predict the CVD risk associated with COVID-19 infection.
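The cardiac CT markers used for validation can be made concrete. The sketch below is purely illustrative (not the authors' pipeline) and assumes binary heart and thorax segmentation masks are already available; the cardiothoracic ratio is taken as the widest horizontal extent of the heart divided by the widest inner thoracic diameter.

```python
import numpy as np

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """Cardiothoracic ratio from two binary masks of the same frontal image:
    widest horizontal run of the heart over widest horizontal run of the thorax."""
    def max_width(mask):
        widths = []
        for r in np.where(np.any(mask, axis=1))[0]:
            cols = np.where(mask[r])[0]
            widths.append(cols[-1] - cols[0] + 1)
        return max(widths)
    return max_width(heart_mask) / max_width(thorax_mask)

# toy 8x8 masks: thorax 6 px wide, heart 3 px wide
thorax = np.zeros((8, 8), dtype=bool); thorax[2:7, 1:7] = True
heart = np.zeros((8, 8), dtype=bool); heart[3:6, 2:5] = True
ctr = cardiothoracic_ratio(heart, thorax)
```

A CTR above roughly 0.5 on a frontal image is the classical screening cue for cardiomegaly, which is why it is a natural marker to validate a CVD-risk model against.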
Affiliation(s)
- Moumita Moitra
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
- Maha Alafeef
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
- Biomedical Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
- Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Arjun Narasimhan
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Vikram Kakaria
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Parikshit Moitra
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Dipanjan Pan
- Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
- Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Department of Materials Science & Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Huck Institutes of the Life Sciences, State College, Pennsylvania, United States of America
2.
Ahmad HK, Milne MR, Buchlak QD, Ektas N, Sanderson G, Chamtie H, Karunasena S, Chiang J, Holt X, Tang CHM, Seah JCY, Bottrell G, Esmaili N, Brotchie P, Jones C. Machine Learning Augmented Interpretation of Chest X-rays: A Systematic Review. Diagnostics (Basel) 2023;13:743. PMID: 36832231; PMCID: PMC9955112; DOI: 10.3390/diagnostics13040743.
Abstract
Limitations of the chest X-ray (CXR) have resulted in attempts to create machine learning systems to assist clinicians and improve interpretation accuracy. An understanding of the capabilities and limitations of modern machine learning systems is necessary for clinicians as these tools begin to permeate practice. This systematic review aimed to provide an overview of machine learning applications designed to facilitate CXR interpretation. A systematic search strategy was executed to identify research into machine learning algorithms capable of detecting >2 radiographic findings on CXRs published between January 2020 and September 2022. Model details and study characteristics, including risk of bias and quality, were summarized. Initially, 2248 articles were retrieved, with 46 included in the final review. Published models demonstrated strong standalone performance and were typically as accurate, or more accurate, than radiologists or non-radiologist clinicians. Multiple studies demonstrated an improvement in the clinical finding classification performance of clinicians when models acted as a diagnostic assistance device. Device performance was compared with that of clinicians in 30% of studies, while effects on clinical perception and diagnosis were evaluated in 19%. Only one study was prospectively run. On average, 128,662 images were used to train and validate models. Most classified less than eight clinical findings, while the three most comprehensive models classified 54, 72, and 124 findings. This review suggests that machine learning devices designed to facilitate CXR interpretation perform strongly, improve the detection performance of clinicians, and improve the efficiency of radiology workflow. Several limitations were identified, and clinician involvement and expertise will be key to driving the safe implementation of quality CXR machine learning systems.
Affiliation(s)
- Hassan K. Ahmad
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Emergency Medicine, Royal North Shore Hospital, Sydney, NSW 2065, Australia
- Quinlan D. Buchlak
- Annalise.ai, Sydney, NSW 2000, Australia
- School of Medicine, University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Department of Neurosurgery, Monash Health, Melbourne, VIC 3168, Australia
- Jason Chiang
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of General Practice, University of Melbourne, Melbourne, VIC 3010, Australia
- Westmead Applied Research Centre, University of Sydney, Sydney, NSW 2006, Australia
- Jarrel C. Y. Seah
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, Alfred Health, Melbourne, VIC 3004, Australia
- Nazanin Esmaili
- School of Medicine, University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Peter Brotchie
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, St Vincent’s Health Australia, Melbourne, VIC 3065, Australia
- Catherine Jones
- Annalise.ai, Sydney, NSW 2000, Australia
- I-MED Radiology Network, Brisbane, QLD 4006, Australia
- School of Public and Preventive Health, Monash University, Clayton, VIC 3800, Australia
- Department of Clinical Imaging Science, University of Sydney, Sydney, NSW 2006, Australia
3.
Esmi N, Golshan Y, Asadi S, Shahbahrami A, Gaydadjiev G. A fuzzy fine-tuned model for COVID-19 diagnosis. Comput Biol Med 2023;153:106483. PMID: 36621192; PMCID: PMC9811914; DOI: 10.1016/j.compbiomed.2022.106483.
Abstract
The COVID-19 pandemic spread rapidly worldwide and caused extensive human death and financial losses. Therefore, finding accurate, accessible, and inexpensive methods for diagnosing the disease has challenged researchers. To automate the process of diagnosing COVID-19 through images, several strategies based on deep learning, such as transfer learning and ensemble learning, have been presented. However, these techniques cannot deal with noise and its propagation through different layers. In addition, many of the datasets already in use are imbalanced, and most techniques have performed only binary classification, separating COVID-19 from normal cases. To address these issues, we use the blind/referenceless image spatial quality evaluator to filter out inappropriate data in the dataset. To increase the volume and diversity of the data, we merge two datasets. This combination allows multi-class classification between three states: normal, COVID-19, and pneumonia, including both bacterial and viral types. A weighted multi-class cross-entropy is used to reduce the effect of data imbalance. In addition, a fuzzy fine-tuned Xception model is applied to reduce noise propagation through different layers. Quantitative analysis shows that our proposed model achieves 96.60% accuracy on the merged test set, which is more accurate than previously mentioned state-of-the-art methods.
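The weighted multi-class cross-entropy mentioned above can be sketched as follows; the inverse-frequency weighting shown is one common choice and an assumption here, not necessarily the paper's exact scheme.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean weighted multi-class cross-entropy.
    probs: (N, C) predicted probabilities; labels: (N,) integer class ids;
    class_weights: (C,) larger weight => minority-class errors cost more."""
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]   # probability of the true class
    w = class_weights[labels]                        # per-sample weight
    return float(np.mean(-w * np.log(picked + eps)))

# inverse-frequency weights for an imbalanced 3-class set (illustrative counts)
counts = np.array([900, 80, 20])                     # normal, COVID-19, pneumonia
weights = counts.sum() / (len(counts) * counts)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.1, 0.8]])
labels = np.array([0, 1, 2])
loss = weighted_cross_entropy(probs, labels, weights)
```

With this weighting, a mistake on the 20-example pneumonia class costs roughly 45 times as much as one on the 900-example normal class, which is exactly the rebalancing effect the loss is meant to provide.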
Affiliation(s)
- Nima Esmi
- Faculty of Science and Engineering, University of Groningen, Netherlands
- Yasaman Golshan
- Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
- Sara Asadi
- Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
- Asadollah Shahbahrami
- Faculty of Science and Engineering, University of Groningen, Netherlands; Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
- Georgi Gaydadjiev
- Faculty of Science and Engineering, University of Groningen, Netherlands
4.
Akhter Y, Singh R, Vatsa M. AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023;6:1120989. PMID: 37091458; PMCID: PMC10116151; DOI: 10.3389/fdata.2023.1120989.
Abstract
Chest radiography, or the chest X-ray (CXR), is a common, fast, non-invasive, and relatively cheap radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as pneumonia, tuberculosis, pneumoconiosis, COVID-19, and lung cancer. Every year, 2 billion CXRs are performed worldwide, apart from other radiological examinations. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening and diagnostic aids can play a crucial part in social welfare. However, the field faces multiple challenges, such as small sample spaces, data privacy, poor-quality samples, adversarial attacks, and, most importantly, model interpretability for reliance on machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based diagnostic systems. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.
5.
Marques JG, Guedes LA, da Costa Abreu MC. Evaluating Time Influence over Performance of Machine-Learning-Based Diagnosis: A Case Study of COVID-19 Pandemic in Brazil. Int J Environ Res Public Health 2022;20:136. PMID: 36612458; PMCID: PMC9819042; DOI: 10.3390/ijerph20010136.
Abstract
Efficiently recognising severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) symptoms enables a quick and accurate diagnosis to be made and helps in mitigating the spread of coronavirus disease 2019. However, the emergence of new variants has caused constant changes in the symptoms associated with COVID-19. These constant changes directly impact the performance of machine-learning-based diagnosis. In this context, considering the impact of these changes in symptoms over time is necessary for accurate diagnoses. Thus, in this study, we propose a machine-learning-based approach for diagnosing COVID-19 that considers the importance of time in model predictions. Our approach analyses the performance of XGBoost using two different time-based strategies for model training: month-to-month and accumulated strategies. The model was evaluated using known metrics: accuracy, precision, and recall. Furthermore, to explain the impact of feature changes on model predictions, feature importance was measured using SHAP, an explainable-AI (XAI) technique. The results indicate that considering time when creating a COVID-19 diagnostic prediction model is advantageous.
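The two time-based training strategies can be sketched as a split generator; the function and record layout below are illustrative assumptions, not the paper's code.

```python
from collections import defaultdict

def time_based_splits(records, strategy="accumulated"):
    """Yield (test_month, train, test) over (month, features, label) records.
    'month_to_month': train on month m, test on month m+1.
    'accumulated':    train on all months <= m, test on month m+1."""
    by_month = defaultdict(list)
    for month, x, y in records:
        by_month[month].append((x, y))
    months = sorted(by_month)
    for i in range(len(months) - 1):
        test = by_month[months[i + 1]]
        if strategy == "month_to_month":
            train = by_month[months[i]]
        else:
            train = [r for m in months[: i + 1] for r in by_month[m]]
        yield months[i + 1], train, test

data = [("2021-01", [1], 0), ("2021-02", [2], 1), ("2021-03", [3], 0)]
acc = list(time_based_splits(data, "accumulated"))
m2m = list(time_based_splits(data, "month_to_month"))
```

The accumulated strategy lets the model keep older symptom patterns, while month-to-month forces it to track only the most recent variant's symptoms; comparing the two is precisely how the study measures the influence of time.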
Affiliation(s)
- Julliana Gonçalves Marques
- Department of Informatics and Applied Mathematics, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil
- Luiz Affonso Guedes
- Department of Computer Engineering and Automation, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil
6.
El-Baz A, Giridharan GA, Shalaby A, Mahmoud AH, Ghazal M. Special Issue "Computer Aided Diagnosis Sensors". Sensors (Basel) 2022;22:8052. PMID: 36298403; PMCID: PMC9610085; DOI: 10.3390/s22208052.
Abstract
Sensors used to diagnose, monitor or treat diseases in the medical domain are known as medical sensors [...].
Affiliation(s)
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Shalaby
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali H. Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
7.
Fraiwan M, Faouri E, Khasawneh N. Classification of Corn Diseases from Leaf Images Using Deep Transfer Learning. Plants (Basel) 2022;11:2668. PMID: 36297692; PMCID: PMC9609100; DOI: 10.3390/plants11202668.
Abstract
Corn is a mass-produced agricultural product that plays a major role in the food chain and many agricultural products, in addition to biofuels. Furthermore, households in poor countries may depend on small-scale corn cultivation for their basic needs. However, corn crops are vulnerable to diseases, which greatly affect farming yields. Moreover, extreme weather conditions and unseasonable temperatures can accelerate the spread of diseases. The pervasiveness and ubiquity of technology have allowed for the deployment of technological innovations in many areas. In particular, applications powered by artificial intelligence algorithms have established themselves in many disciplines relating to image, signal, and sound recognition. In this work, we target the application of deep transfer learning to the classification of three corn diseases (i.e., Cercospora leaf spot, common rust, and northern leaf blight) in addition to healthy plants. Using corn leaf images as input to convolutional neural network models, no preprocessing or explicit feature extraction was required. Transfer learning using well-established and well-designed deep learning models was performed and extensively evaluated using multiple data-splitting scenarios. In addition, the experiments were repeated 10 times to account for the variability of the random choices. The four classes were discerned with a mean accuracy of 98.6%. This and the other performance metrics exhibit values that make it feasible to build and deploy applications that can aid farmers and plant pathologists to promptly and accurately perform disease identification and apply the correct remedies.
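The repeated-random-split evaluation protocol described above can be sketched as follows; the toy rule-based classifier stands in for the CNN, and all names here are illustrative, not the study's code.

```python
import random

def repeated_split_eval(samples, classify, n_repeats=10, train_frac=0.8, seed=0):
    """Repeat a random train/test split n_repeats times and average the test
    accuracy, to account for variability in the random choice of split.
    samples: list of (features, label); classify: features -> predicted label."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_repeats):
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        test = shuffled[cut:]
        correct = sum(classify(x) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# toy data: the label is simply the sign of the single feature, so the fixed
# rule below is always right and the mean accuracy should be 1.0
data = [(x, int(x > 0)) for x in range(-50, 50) if x != 0]
mean_acc = repeated_split_eval(data, classify=lambda x: int(x > 0))
```

Reporting the mean (and, ideally, the spread) over the repeats is what makes an accuracy such as 98.6% meaningful rather than an artifact of one lucky split.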
Affiliation(s)
- Mohammad Fraiwan
- Department of Computer Engineering, Jordan University of Science and Technology, Irbid 22110, Jordan
- Esraa Faouri
- Department of Computer Engineering, Jordan University of Science and Technology, Irbid 22110, Jordan
- Natheer Khasawneh
- Department of Software Engineering, Jordan University of Science and Technology, Irbid 22110, Jordan
8.
Fraiwan M, Al-Kofahi N, Ibnian A, Hanatleh O. Detection of developmental dysplasia of the hip in X-ray images using deep transfer learning. BMC Med Inform Decis Mak 2022;22:216. PMID: 35964072; PMCID: PMC9375244; DOI: 10.1186/s12911-022-01957-9.
Abstract
Background: Developmental dysplasia of the hip (DDH) is a relatively common disorder in newborns, with a reported prevalence of 1–5 per 1000 births. It can lead to developmental abnormalities in terms of mechanical difficulties and a displacement of the joint (i.e., subluxation or dysplasia). An early diagnosis in the first few months from birth can drastically improve healing, render surgical intervention unnecessary and reduce bracing time. A pelvic X-ray inspection represents the gold standard for DDH diagnosis. Recent advances in deep learning artificial intelligence have enabled the use of many image-based medical decision-making applications. The present study employs deep transfer learning in detecting DDH in pelvic X-ray images without the need for explicit measurements.
Methods: Pelvic anteroposterior X-ray images from 354 subjects (120 DDH and 234 normal) were collected locally at two hospitals in northern Jordan. A system that accepts these images as input and classifies them as DDH or normal was developed using thirteen deep transfer learning models. Various performance metrics were evaluated in addition to the overfitting/underfitting behavior and the training times.
Results: The highest mean DDH detection accuracy was 96.3%, achieved using the DarkNet53 model, although other models achieved comparable results. A common theme across all the models was the extremely high sensitivity (i.e., recall) value at the expense of specificity. The F1 score, precision, recall and specificity for DarkNet53 were 95%, 90.6%, 100% and 94.3%, respectively.
Conclusions: Our automated method appears to be a highly accurate DDH screening and diagnosis method. Moreover, the performance evaluation shows that it is possible to further improve the system by expanding the dataset to include more X-ray images.
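For reference, the reported metrics relate to confusion-matrix counts as below. The counts used here are illustrative values consistent with the dataset sizes (120 DDH, 234 normal) and the perfect recall reported, not the paper's actual confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

# a screening-oriented operating point: no missed DDH cases (fn = 0),
# at the cost of some false positives among the normal subjects
p, r, s, f1 = binary_metrics(tp=120, fp=13, tn=221, fn=0)
```

Trading specificity for perfect sensitivity, as the abstract describes, is the usual preference for a screening tool: a false positive triggers a follow-up exam, while a false negative misses the window for early treatment.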
Affiliation(s)
- Mohammad Fraiwan
- Department of Computer Engineering, Jordan University of Science and Technology, Irbid, Jordan
- Noran Al-Kofahi
- Department of Internal Medicine, Jordan University of Science and Technology, Irbid, Jordan
- Ali Ibnian
- Department of Internal Medicine, Jordan University of Science and Technology, Irbid, Jordan
- Omar Hanatleh
- Department of Internal Medicine, Jordan University of Science and Technology, Irbid, Jordan
9.
Fraiwan M, Faouri E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors (Basel) 2022;22:4963. PMID: 35808463; PMCID: PMC9269808; DOI: 10.3390/s22134963.
Abstract
Skin cancer (melanoma and non-melanoma) is one of the most common cancer types and leads to hundreds of thousands of deaths worldwide every year. It manifests itself through abnormal growth of skin cells. Early diagnosis drastically increases the chances of recovery. Moreover, it may render surgical, radiographic, or chemical therapies unnecessary or lessen their overall usage. Thus, healthcare costs can be reduced. The process of diagnosing skin cancer starts with dermoscopy, which inspects the general shape, size, and color characteristics of skin lesions; suspected lesions then undergo further sampling and lab tests for confirmation. Image-based diagnosis has undergone great advances recently due to the rise of deep learning artificial intelligence. The work in this paper examines the applicability of raw deep transfer learning in classifying images of skin lesions into seven possible categories. Using the HAM10000 dataset of dermoscopy images, a system that accepts these images as input without explicit feature extraction or preprocessing was developed using 13 deep transfer learning models. Extensive evaluation revealed the advantages and shortcomings of such a method. Although some cancer types were correctly classified with high accuracy, the imbalance of the dataset, the small number of images in some categories, and the large number of classes reduced the best overall accuracy to 82.9%.
10.
Manoj Kumar MV, Atalla S, Almuraqab N, Moonesar IA. Detection of COVID-19 Using Deep Learning Techniques and Cost Effectiveness Evaluation: A Survey. Front Artif Intell 2022;5:912022. PMID: 35692941; PMCID: PMC9184735; DOI: 10.3389/frai.2022.912022.
Abstract
Imaging-based diagnostic techniques play an essential role during pandemics, and radiographic screening has proven among the more effective approaches for recognizing and diagnosing COVID-19 cases. The deep learning paradigm has been applied extensively to investigate radiographic images such as chest X-rays (CXR) and CT scan images. These radiographic images are rich in information, such as patterns and cluster-like structures, that is evident in the detection of COVID-19 and similar diseases. This paper aims to comprehensively survey and analyze deep-learning-based detection methodology for COVID-19 diagnosis. Deep learning is a good, practical, and affordable modality that can be deemed a reliable technique for adequately diagnosing the COVID-19 virus. Furthermore, the research examines the potential of artificial intelligence to enhance image quality and identifies the least expensive and most trustworthy imaging methods for anticipating such viruses. This paper further discusses the cost-effectiveness of the surveyed methods for detecting COVID-19, in contrast with other methods, and several finance-related aspects of their detection effectiveness. Overall, this study presents an overview of COVID-19 detection using deep learning methods and their cost-effectiveness and financial implications from the perspective of insurance claim settlement.
Affiliation(s)
- Manoj Kumar M. V.
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bangalore, India
- Correspondence: Manoj Kumar M. V.
- Shadi Atalla
- College of Engineering & Information Technology, University of Dubai, Dubai, United Arab Emirates
- Nasser Almuraqab
- Dubai Business School, University of Dubai, Dubai, United Arab Emirates
- Immanuel Azaad Moonesar
- Health Administration & Policy – Academic Affairs, Mohammed Bin Rashid School of Government (MBRSG), Dubai, United Arab Emirates
11.
Investigating the Performance of FixMatch for COVID-19 Detection in Chest X-rays. Appl Sci (Basel) 2022;12:4694. DOI: 10.3390/app12094694.
Abstract
The advent of the COVID-19 pandemic has resulted in medical resources being stretched to their limits. Chest X-rays are one method of diagnosing COVID-19; they are used due to their high efficacy. However, detecting COVID-19 manually from these images is time-consuming and expensive. While neural networks can be trained to detect COVID-19, doing so requires large amounts of labeled data, which are expensive to collect and code. One approach is to use semi-supervised neural networks to detect COVID-19 based on a very small number of labeled images. This paper explores how well such an approach could work. The FixMatch algorithm, a state-of-the-art semi-supervised classification algorithm, was trained on chest X-rays to detect COVID-19, viral pneumonia, bacterial pneumonia and lung opacity. The model was trained with decreasing levels of labeled data and compared with the best supervised CNN models using transfer learning. FixMatch was able to achieve a COVID F1-score of 0.94 with only 80 labeled samples per class and an overall macro-average F1-score of 0.68 with only 20 labeled samples per class. Furthermore, an exploratory analysis was conducted to determine the performance of FixMatch when trained with imbalanced data. The results show a predictable drop in performance compared to training with uniform data; however, a statistical analysis suggests that FixMatch may be somewhat robust to data imbalance, as in many cases the same types of mistakes are made even when the amount of labeled data is decreased.
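The core FixMatch selection step, confidence-thresholded pseudo-labeling, can be sketched as follows. This is a deliberate simplification of the full algorithm (which also pairs weakly and strongly augmented views of each image), and the function name is ours.

```python
import numpy as np

def pseudo_label_mask(unlabeled_probs, threshold=0.95):
    """FixMatch-style selection (sketch): keep an unlabeled sample only when
    the model's maximum class probability on its weakly augmented view exceeds
    the confidence threshold; the argmax becomes the pseudo-label that then
    supervises the strongly augmented view of the same sample."""
    conf = unlabeled_probs.max(axis=1)
    pseudo = unlabeled_probs.argmax(axis=1)
    keep = conf >= threshold
    return pseudo[keep], keep

probs = np.array([[0.97, 0.02, 0.01],   # confident  -> kept, pseudo-label 0
                  [0.50, 0.30, 0.20],   # uncertain  -> discarded
                  [0.01, 0.96, 0.03]])  # confident  -> kept, pseudo-label 1
labels, kept = pseudo_label_mask(probs)
```

The high threshold is what keeps the pseudo-labels clean when only 20 to 80 labeled samples per class exist; lowering it admits more unlabeled data but also more label noise.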
12.
Alkhodari M, Khandoker AH. Detection of COVID-19 in smartphone-based breathing recordings: A pre-screening deep learning tool. PLoS One 2022;17:e0262448. PMID: 35025945; PMCID: PMC8758005; DOI: 10.1371/journal.pone.0262448.
Abstract
This study sought to investigate the feasibility of using smartphone-based breathing sounds within a deep learning framework to discriminate between COVID-19 (including asymptomatic) and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework was proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of a convolutional neural network and bi-directional long short-term memory units (CNN-BiLSTM). The statistical analysis of patient profiles showed a significant difference (p-value: 0.041) for ischemic heart disease between COVID-19 and healthy subjects. Analysis of the normal distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution skewed more towards the right side of the zero mean (shallow: 0.59±1.74, deep: 0.65±4.35, p-value: <0.001). In addition, the proposed deep learning approach had an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, specificity of 94.96%, and area under the receiver operating characteristic (AUROC) curve of 0.90. Among the 120 COVID-19 participants, asymptomatic subjects (18 subjects) were successfully detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-based breathing sounds for COVID-19 detection.
The observations found in this study are promising and suggest deep learning with smartphone-based breathing sounds as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay. It can be considered an early, rapid, easily distributed, time-efficient, and almost no-cost diagnostic technique that complies with social-distancing restrictions during the COVID-19 pandemic.
Affiliation(s)
- Mohanad Alkhodari
- Healthcare Engineering Innovation Center (HEIC), Department of Biomedical Engineering, Khalifa University, Abu Dhabi, UAE
- Ahsan H. Khandoker
- Healthcare Engineering Innovation Center (HEIC), Department of Biomedical Engineering, Khalifa University, Abu Dhabi, UAE
13.
Ensemble Deep Learning for the Detection of COVID-19 in Unbalanced Chest X-ray Dataset. Appl Sci (Basel) 2021;11:10528. DOI: 10.3390/app112210528.
Abstract
The ongoing COVID-19 pandemic has caused devastating effects on humanity worldwide. With practical advantages and wide accessibility, chest X-rays (CXRs) play vital roles in the diagnosis of COVID-19 and the evaluation of the extent of lung damage incurred by the virus. This study aimed to leverage deep-learning-based methods toward the automated classification of COVID-19 from normal and viral pneumonia on CXRs, and the identification of indicative regions of COVID-19 biomarkers. Initially, we preprocessed and segmented the lung regions using the DeepLabV3+ method, and subsequently cropped the lung regions. The cropped lung regions were used as inputs to several deep convolutional neural networks (CNNs) for the prediction of COVID-19. The dataset was highly unbalanced; the vast majority were normal images, with a small number of COVID-19 and pneumonia images. To remedy the unbalanced distribution and to avoid biased classification results, we applied five different approaches: (i) balancing the classes using a weighted loss; (ii) image augmentation to add more images to minority cases; (iii) the undersampling of majority classes; (iv) the oversampling of minority classes; and (v) a hybrid resampling approach of oversampling and undersampling. The best-performing methods from each approach were combined as the ensemble classifier using two voting strategies. Finally, we used the saliency map of CNNs to identify the indicative regions of COVID-19 biomarkers, which are deemed useful for interpretability. The algorithms were evaluated using the largest publicly available COVID-19 dataset. An ensemble of the top five CNNs with image augmentation achieved the highest accuracy of 99.23% and area under curve (AUC) of 99.97%, surpassing the results of previous studies.
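Approaches (iii) through (v) above can be sketched with a simple hybrid resampler; the function below is an illustration of the general technique, not the study's implementation, and the class counts are made up.

```python
import random
from collections import Counter

def resample(samples, target, seed=0):
    """Hybrid resampling sketch: oversample minority classes (draw with
    replacement) and undersample majority classes (draw without replacement)
    so that every class ends up with exactly `target` examples."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append((x, y))
    out = []
    for y, items in by_class.items():
        if len(items) >= target:
            out.extend(rng.sample(items, target))      # undersample majority
        else:
            out.extend(rng.choices(items, k=target))   # oversample minority
    return out

data = ([(i, "normal") for i in range(100)]
        + [(i, "covid") for i in range(10)]
        + [(i, "pneumonia") for i in range(30)])
balanced = resample(data, target=30)
```

Pure oversampling risks memorizing the duplicated minority images and pure undersampling discards majority data; the hybrid picks a middle target so both effects stay moderate, which is why the study evaluates it alongside the two pure strategies.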
14.
Neonatal Jaundice Diagnosis Using a Smartphone Camera Based on Eye, Skin, and Fused Features with Transfer Learning. Sensors (Basel) 2021;21:7038. PMID: 34770345; PMCID: PMC8588081; DOI: 10.3390/s21217038.
Abstract
Neonatal jaundice is a common condition worldwide. Failure of timely diagnosis and treatment can lead to death or brain injury. Current diagnostic approaches include a painful and time-consuming invasive blood test and non-invasive tests using costly transcutaneous bilirubinometers. Since periodic monitoring is crucial, multiple efforts have been made to develop non-invasive diagnostic tools using a smartphone camera. However, existing works rely either on skin or eye images using statistical or traditional machine learning methods. In this paper, we adopt a deep transfer learning approach based on eye, skin, and fused images. We also trained well-known traditional machine learning models, including multi-layer perceptron (MLP), support vector machine (SVM), decision tree (DT), and random forest (RF), and compared their performance with that of the transfer learning model. We collected our dataset using a smartphone camera. Moreover, unlike most of the existing contributions, we report accuracy, precision, recall, f-score, and area under the curve (AUC) for all the experiments and analyzed their significance statistically. Our results indicate that the transfer learning model performed the best with skin images, while traditional models achieved the best performance with eyes and fused features. Further, we found that the transfer learning model with skin features performed comparably to the MLP model with eye features.
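One plausible way to fuse eye and skin features at the feature level is scale-normalized concatenation. This is a hedged sketch of the general idea only; the paper does not specify its fusion method, and the function name is ours.

```python
import numpy as np

def fuse_features(eye_feats, skin_feats):
    """Feature-level fusion sketch: L2-normalise each modality's vector and
    concatenate, so neither modality dominates purely by numeric scale."""
    def l2(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n else v
    return np.concatenate([l2(eye_feats), l2(skin_feats)])

# toy vectors: each modality is scaled to unit length before concatenation
fused = fuse_features([3.0, 4.0], [0.0, 5.0, 0.0])
```

The fused vector can then be fed to any of the classifiers compared in the study (MLP, SVM, decision tree, random forest) exactly like a single-modality feature vector.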