1. Zhao L, Fong TC, Bell MAL. Detection of COVID-19 features in lung ultrasound images using deep neural networks. Commun Med 2024; 4:41. [PMID: 38467808] [PMCID: PMC10928066] [DOI: 10.1038/s43856-024-00463-5]
Abstract
BACKGROUND Deep neural networks (DNNs) to detect COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo images suffer from limited access to required manual labeling of thousands of training image examples, and simulated images can suffer from poor generalizability to in vivo images due to domain differences. We address these limitations and identify the best training strategy. METHODS We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and a combination of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients. RESULTS Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (i.e., 0.482 ± 0.211), compared with using only simulated B-mode image training data (i.e., 0.464 ± 0.230) or only external in vivo B-mode training data (i.e., 0.407 ± 0.177). Additional maximization is achieved when a separate subset of the internal in vivo B-mode images are included in the training dataset, with the greatest maximization of DSC (and minimization of required training time, or epochs) obtained after mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (i.e., 0.735 ± 0.187). CONCLUSIONS DNNs trained with simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
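The Dice similarity coefficient (DSC) reported above can be computed directly from two binary segmentation masks; a minimal sketch (the function name and NumPy usage are ours, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total
```

A DSC of 1.0 means the predicted and ground-truth masks coincide exactly; 0.0 means no overlap at all.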
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Tiffany Clair Fong
- Department of Emergency Medicine, Johns Hopkins Medicine, Baltimore, MD, USA
- Muyinatu A Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
2. Zeng EZ, Ebadi A, Florea A, Wong A. COVID-Net L2C-ULTRA: An Explainable Linear-Convex Ultrasound Augmentation Learning Framework to Improve COVID-19 Assessment and Monitoring. Sensors (Basel) 2024; 24:1664. [PMID: 38475199] [DOI: 10.3390/s24051664]
Abstract
While no longer a public health emergency of international concern, COVID-19 remains an established and ongoing global health threat. As the global population continues to face significant negative impacts of the pandemic, point-of-care ultrasound (POCUS) imaging has seen increased usage as a low-cost, portable, and effective modality of choice in the COVID-19 clinical workflow. A major barrier to the widespread adoption of POCUS in the COVID-19 clinical workflow is the scarcity of expert clinicians who can interpret POCUS examinations, leading to considerable interest in artificial intelligence-driven clinical decision support systems to tackle this challenge. A major challenge to building deep neural networks for COVID-19 screening using POCUS is the heterogeneity in the types of probes used to capture ultrasound images (e.g., convex vs. linear probes), which can lead to very different visual appearances. In this study, we propose an analytic framework for COVID-19 assessment able to consume ultrasound images captured by linear and convex probes. We analyze the impact of leveraging extended linear-convex ultrasound augmentation learning on producing enhanced deep neural networks for COVID-19 assessment, where we conduct data augmentation on convex probe data alongside linear probe data that have been transformed to better resemble convex probe data. The proposed explainable framework, called COVID-Net L2C-ULTRA, employs an efficient deep columnar anti-aliased convolutional neural network designed via a machine-driven design exploration strategy. Our experimental results confirm that the proposed extended linear-convex ultrasound augmentation learning significantly increases performance, with gains of 3.9% in test accuracy, 3.2% in AUC, 10.9% in recall, and 4.4% in precision. The proposed method also demonstrates a much more effective utilization of linear probe images through a 5.1% performance improvement in recall when such images are added to the training dataset, while all other methods show a decrease in recall when trained on the combined linear-convex dataset. We further verify the validity of the model by assessing what the network considers to be the critical regions of an image with our contributing clinician.
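The geometric gap the augmentation addresses can be pictured as resampling a rectangular (linear-probe) image into a fan-shaped sector resembling convex-probe output. The following is our own toy geometric sketch with nearest-neighbour sampling, not the COVID-Net L2C-ULTRA transformation:

```python
import numpy as np

def linear_to_convex(img: np.ndarray, angle_deg: float = 60.0) -> np.ndarray:
    """Warp a rectangular (linear-probe) image into a fan-shaped sector,
    roughly mimicking convex-probe geometry. Illustrative only."""
    h, w = img.shape
    out = np.zeros_like(img)
    half = np.deg2rad(angle_deg) / 2.0
    cx = (w - 1) / 2.0
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y          # fan apex at the top-centre of the output
            r = np.hypot(dx, dy)        # radial depth from the apex
            theta = np.arctan2(dx, dy)  # 0 points straight down
            if abs(theta) <= half and r < h:
                # angle maps to the source column, radius to the source row
                src_x = int(round((theta / half + 1) / 2 * (w - 1)))
                src_y = int(round(r))
                if src_y < h:
                    out[y, x] = img[src_y, src_x]
    return out
```

Pixels outside the sector stay zero, which is why naively mixing untransformed linear and convex images presents such different visual appearances to a network.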
Affiliation(s)
- E Zhixuan Zeng
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ashkan Ebadi
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Digital Technologies Research Centre, National Research Council Canada, Toronto, ON M5T 3J1, Canada
- Adrian Florea
- Department of Emergency Medicine, McGill University, Montreal, QC H4A 3J1, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Waterloo Artificial Intelligence Institute, Waterloo, ON N2L 3G1, Canada
3. Chiumello D, Coppola S, Catozzi G, Danzo F, Santus P, Radovanovic D. Lung Imaging and Artificial Intelligence in ARDS. J Clin Med 2024; 13:305. [PMID: 38256439] [PMCID: PMC10816549] [DOI: 10.3390/jcm13020305]
Abstract
Artificial intelligence (AI) can make intelligent decisions in a manner akin to that of the human mind. AI has the potential to improve clinical workflow, diagnosis, and prognosis, especially in radiology. Acute respiratory distress syndrome (ARDS) is a very diverse illness that is characterized by interstitial opacities, mostly in the dependent areas, decreased lung aeration with alveolar collapse, and inflammatory lung edema resulting in elevated lung weight. As a result, lung imaging is a crucial tool for evaluating the mechanical and morphological traits of ARDS patients. Compared to traditional chest radiography, sensitivity and specificity of lung computed tomography (CT) and ultrasound are higher. The state of the art in the application of AI is summarized in this narrative review which focuses on CT and ultrasound techniques in patients with ARDS. A total of eighteen items were retrieved. The primary goals of using AI for lung imaging were to evaluate the risk of developing ARDS, the measurement of alveolar recruitment, potential alternative diagnoses, and outcome. While the physician must still be present to guarantee a high standard of examination, AI could help the clinical team provide the best care possible.
Affiliation(s)
- Davide Chiumello
- Department of Health Sciences, University of Milan, 20122 Milan, Italy
- Department of Anesthesia and Intensive Care, ASST Santi Paolo e Carlo, San Paolo University Hospital Milan, 20142 Milan, Italy
- Coordinated Research Center on Respiratory Failure, University of Milan, 20122 Milan, Italy
- Silvia Coppola
- Department of Anesthesia and Intensive Care, ASST Santi Paolo e Carlo, San Paolo University Hospital Milan, 20142 Milan, Italy
- Giulia Catozzi
- Department of Health Sciences, University of Milan, 20122 Milan, Italy
- Fiammetta Danzo
- Division of Respiratory Diseases, Luigi Sacco University Hospital, ASST Fatebenefratelli-Sacco, 20157 Milan, Italy
- Department of Biomedical and Clinical Sciences, Università degli Studi di Milano, 20157 Milan, Italy
- Pierachille Santus
- Division of Respiratory Diseases, Luigi Sacco University Hospital, ASST Fatebenefratelli-Sacco, 20157 Milan, Italy
- Department of Biomedical and Clinical Sciences, Università degli Studi di Milano, 20157 Milan, Italy
- Dejan Radovanovic
- Division of Respiratory Diseases, Luigi Sacco University Hospital, ASST Fatebenefratelli-Sacco, 20157 Milan, Italy
- Department of Biomedical and Clinical Sciences, Università degli Studi di Milano, 20157 Milan, Italy
4. Hasan MM, Hossain MM, Rahman MM, Azad A, Alyami SA, Moni MA. FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI. Comput Biol Med 2023; 165:107407. [PMID: 37678140] [DOI: 10.1016/j.compbiomed.2023.107407]
Abstract
The COVID-19 pandemic wreaked havoc on healthcare systems across the world. In pandemic scenarios like COVID-19, the applicability of diagnostic modalities is crucial in medical diagnosis, where non-invasive ultrasound imaging has the potential to be a useful biomarker. This research develops a computer-assisted intelligent methodology for ultrasound lung image classification by utilizing a fuzzy pooling-based convolutional neural network (FP-CNN) with underlying evidence for particular decisions. The fuzzy pooling method finds better representative features for ultrasound image classification. The FP-CNN model categorizes ultrasound images into one of three classes: COVID-19, disease-free (normal), and pneumonia. Explanations of diagnostic decisions are crucial to ensure the fairness of an intelligent system. This research has used Shapley Additive Explanation (SHAP) to explain the predictions of the FP-CNN models, illustrating the prediction of the black-box model using SHAP explanations of its intermediate layers. To determine the most effective model, we have tested different state-of-the-art convolutional neural network architectures with various training strategies, including fine-tuned models, single-layer fuzzy pooling models, and fuzzy pooling at all pooling layers. Among the different architectures, the Xception model with fuzzy pooling at all pooling layers achieves the best classification result of 97.2% accuracy. We hope our proposed method will be helpful for the clinical diagnosis of COVID-19 from lung ultrasound (LUS) images.
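Fuzzy pooling replaces the hard max or plain average of a pooling window with a membership-weighted aggregation. The following is a heavily simplified illustration using a linear membership function; the paper's FP-CNN formulation (based on fuzzy membership sets) differs in detail:

```python
import numpy as np

def fuzzy_pool_window(window: np.ndarray) -> float:
    """Membership-weighted average over one pooling window (illustrative only)."""
    w = window.ravel().astype(float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        return float(hi)  # uniform window: nothing to weight
    membership = (w - lo) / (hi - lo)  # linear membership in [0, 1], peaking at the max
    return float((membership * w).sum() / membership.sum())
```

On a given window the result lies between average pooling and max pooling: strong activations dominate, but weaker ones still contribute in proportion to their membership.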
Affiliation(s)
- Md Mahmodul Hasan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh
- Muhammad Minoar Hossain
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh
- Department of Computer Science and Engineering, Bangladesh University, Mohammadpur, Dhaka, 1207, Bangladesh
- Mohammad Motiur Rahman
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh
- Akm Azad
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 13318, Saudi Arabia
- Salem A Alyami
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 13318, Saudi Arabia
- Mohammad Ali Moni
- Artificial Intelligence & Data Science, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia
5. Malík M, Dzian A, Števík M, Vetešková Š, Al Hakim A, Hliboký M, Magyar J, Kolárik M, Bundzel M, Babič F. Lung Ultrasound Reduces Chest X-rays in Postoperative Care after Thoracic Surgery: Is There a Role for Artificial Intelligence?-Systematic Review. Diagnostics (Basel) 2023; 13:2995. [PMID: 37761362] [PMCID: PMC10527627] [DOI: 10.3390/diagnostics13182995]
Abstract
BACKGROUND Chest X-ray (CXR) remains the standard imaging modality in postoperative care after non-cardiac thoracic surgery. Lung ultrasound (LUS) showed promising results in CXR reduction. The aim of this review was to identify areas where the evaluation of LUS videos by artificial intelligence could improve the implementation of LUS in thoracic surgery. METHODS A literature review of the replacement of the CXR by LUS after thoracic surgery and the evaluation of LUS videos by artificial intelligence after thoracic surgery was conducted in Medline. RESULTS Here, eight out of 10 reviewed studies evaluating LUS in CXR reduction showed that LUS can reduce CXR without a negative impact on patient outcome after thoracic surgery. No studies on the evaluation of LUS signs by artificial intelligence after thoracic surgery were found. CONCLUSION LUS can reduce CXR after thoracic surgery. We presume that artificial intelligence could help increase the LUS accuracy, objectify the LUS findings, shorten the learning curve, and decrease the number of inconclusive results. To confirm this assumption, clinical trials are necessary. This research is funded by the Slovak Research and Development Agency, grant number APVV 20-0232.
Affiliation(s)
- Marek Malík
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Anton Dzian
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Martin Števík
- Radiology Department, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Štefánia Vetešková
- Radiology Department, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Abdulla Al Hakim
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Maroš Hliboký
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Ján Magyar
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Michal Kolárik
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Marek Bundzel
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- František Babič
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
6. Lucassen RT, Jafari MH, Duggan NM, Jowkar N, Mehrtash A, Fischetti C, Bernier D, Prentice K, Duhaime EP, Jin M, Abolmaesumi P, Heslinga FG, Veta M, Duran-Mendicuti MA, Frisken S, Shyn PB, Golby AJ, Boyer E, Wells WM, Goldsmith AJ, Kapur T. Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound. IEEE J Biomed Health Inform 2023; 27:4352-4361. [PMID: 37276107] [PMCID: PMC10540221] [DOI: 10.1109/jbhi.2023.3282596]
Abstract
Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach for B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound, with the potential to expand its clinical utility.
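For reference, the F1-score used to summarize localization performance is the harmonic mean of precision and recall over matched detections; a standard computation from match counts (the helper below is our own, not from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from true-positive, false-positive, and false-negative counts:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

When precision and recall are equal (as in a balanced matching), the F1 equals that common value, which is how a single number like 0.65 can be compared directly against inter-observer agreement.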
7. Lawley A, Hampson R, Worrall K, Dobie G. Prescriptive Method for Optimizing Cost of Data Collection and Annotation in Machine Learning of Clinical Ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082737] [DOI: 10.1109/embc40787.2023.10340858]
Abstract
Machine learning in medical ultrasound faces a major challenge: the prohibitive costs of producing and annotating clinical data. Optimizing data collection and annotation improves model training efficiency, reducing project cost and time. This paper prescribes a two-phase method for cost optimization based on iterative accuracy/sample-size prediction and on active learning for annotation optimization. METHODS Using public breast, fetal, and lung ultrasound datasets, we optimize data collection by statistically predicting accuracy for a desired dataset size, and optimize labeling efficiency using active learning, in which the predictions with the lowest certainty are labelled manually with feedback. A practical case study on the BUSI dataset was used to demonstrate the prescribed method. RESULTS With small data subsets (~10%), the relation between dataset size and final accuracy can be predicted, with diminishing returns beyond 50% usage. Manual annotation was reduced by ~10% by using active learning to focus the annotation effort. CONCLUSION This led to cost reductions of 50%-66% on the BUSI dataset, depending on requirements and the initial cost model, with a negligible accuracy drop of 3.75% from the theoretical maximum. Clinical Relevance: This work provides a methodology to optimize dataset size and manual data labelling, allowing generation of cost-effective datasets, of particular interest for financially limited trials and feasibility studies, and reducing the time burden on annotating clinicians.
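The active-learning step described above (manually labelling the predictions with the lowest certainty) is commonly implemented as least-confidence sampling; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, k: int) -> np.ndarray:
    """Least-confidence active learning: return the indices of the k samples
    whose top-class probability is lowest; these go to the human annotator."""
    confidence = probs.max(axis=1)    # model's confidence per sample
    return np.argsort(confidence)[:k]  # least confident first
```

Samples the model is already sure about are skipped, which is what reduces the manual annotation volume.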
8. Bruno A, Ignesti G, Salvetti O, Moroni D, Martinelli M. Efficient Lung Ultrasound Classification. Bioengineering (Basel) 2023; 10:555. [PMID: 37237625] [DOI: 10.3390/bioengineering10050555]
Abstract
A machine learning method for classifying lung ultrasound is proposed here to provide a point-of-care tool for supporting a safe, fast, and accurate diagnosis that can also be useful during a pandemic such as SARS-CoV-2. Given the advantages (e.g., safety, speed, portability, cost-effectiveness) provided by ultrasound technology over other examinations (e.g., X-ray, computed tomography, magnetic resonance imaging), our method was validated on the largest public lung ultrasound dataset. Focusing on both accuracy and efficiency, our solution is based on an efficient adaptive ensembling of two EfficientNet-b0 models reaching 100% accuracy, which, to our knowledge, outperforms the previous state-of-the-art models by at least 5%. The complexity is restrained by specific design choices: ensembling with an adaptive combination layer, ensembling performed on the deep features, and a minimal ensemble of only two weak models. In this way, the number of parameters has the same order of magnitude as a single EfficientNet-b0, and the computational cost (FLOPs) is reduced by at least 20%, doubled by parallelization. Moreover, a visual analysis of the saliency maps on sample images of all the classes of the dataset reveals where an inaccurate weak model focuses its attention versus an accurate one.
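The "adaptive combination layer" over deep features can be pictured as a learned gate blending the two weak models' feature vectors before the classifier head. This scalar sigmoid gate is our own simplification, not the paper's exact layer:

```python
import numpy as np

def adaptive_combine(feat_a: np.ndarray, feat_b: np.ndarray, gate_logit: float) -> np.ndarray:
    """Blend deep features from two weak models with a learned gate.
    gate_logit would be trained end-to-end alongside the classifier head."""
    alpha = 1.0 / (1.0 + np.exp(-gate_logit))  # sigmoid keeps the blend convex
    return alpha * feat_a + (1.0 - alpha) * feat_b
```

Because the combination acts on deep features rather than on full model outputs, the ensemble adds only a handful of parameters on top of the two backbones.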
Affiliation(s)
- Antonio Bruno
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Giacomo Ignesti
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Ovidio Salvetti
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Davide Moroni
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Massimo Martinelli
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
9. Chen Y, Zhang C, Ding CHQ, Liu L. Generating and Weighting Semantically Consistent Sample Pairs for Ultrasound Contrastive Learning. IEEE Trans Med Imaging 2023; 42:1388-1400. [PMID: 37015698] [DOI: 10.1109/tmi.2022.3228254]
Abstract
Well-annotated medical datasets enable deep neural networks (DNNs) to gain strong power in extracting lesion-related features. Building such large and well-designed medical datasets is costly due to the need for high-level expertise. Model pre-training based on ImageNet is a common practice to gain better generalization when the data amount is limited. However, it suffers from the domain gap between natural and medical images. In this work, we pre-train DNNs on ultrasound (US) domains instead of ImageNet to reduce the domain gap in medical US applications. To learn US image representations based on unlabeled US videos, we propose a novel meta-learning-based contrastive learning method, namely Meta Ultrasound Contrastive Learning (Meta-USCL). To tackle the key challenge of obtaining semantically consistent sample pairs for contrastive learning, we present a positive pair generation module along with an automatic sample weighting module based on meta-learning. Experimental results on multiple computer-aided diagnosis (CAD) problems, including pneumonia detection, breast cancer classification, and breast tumor segmentation, show that the proposed self-supervised method reaches state-of-the-art (SOTA) performance. The codes are available at https://github.com/Schuture/Meta-USCL.
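Contrastive pre-training of this kind pulls a positive pair together while pushing negatives apart in embedding space. A minimal InfoNCE-style loss for one anchor, as an illustrative sketch rather than the Meta-USCL objective:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding: low when the anchor is most
    similar to its positive, high when a negative is closer instead."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = logits / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as the target
```

The hard part Meta-USCL addresses is upstream of this loss: generating and weighting pairs so that the "positive" really is semantically consistent with the anchor.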
10. Jung J, Lee H, Jung H, Kim H. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon 2023; 9:e16110. [PMID: 37234618] [PMCID: PMC10205582] [DOI: 10.1016/j.heliyon.2023.e16110]
Abstract
Background Significant advancements in the field of information technology have influenced the creation of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance of XAI, XAI techniques have not yet been integrated into real-time patient care. Objective The aim of this systematic review is to understand the trends and gaps in research on XAI through an assessment of the essential properties of XAI and an evaluation of explanation effectiveness in the healthcare field. Methods A search of PubMed and Embase databases for relevant peer-reviewed articles on development of an XAI model using clinical data and evaluating explanation effectiveness published between January 1, 2011, and April 30, 2022, was conducted. All retrieved papers were screened independently by the two authors. Relevant papers were also reviewed for identification of the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability). Results Six out of 882 articles met the criteria for eligibility. Artificial Intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation, justification, improvement, and learning from AI. Evaluation of the quality of personalized explanations was based on fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied. Conclusion XAI research should address the lack of a comprehensive and agreed-upon framework for explaining XAI and standardized approaches for evaluating the effectiveness of the explanation that XAI provides to diverse AI stakeholders.
Affiliation(s)
- Jinsun Jung
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea
- Hyungbok Lee
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Emergency Nursing Department, Seoul National University Hospital, Seoul, Republic of Korea
- Hyunggu Jung
- Department of Computer Science and Engineering, University of Seoul, Seoul, Republic of Korea
- Department of Artificial Intelligence, University of Seoul, Seoul, Republic of Korea
- Hyeoneui Kim
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Research Institute of Nursing Science, College of Nursing, Seoul National University, Seoul, Republic of Korea
11. Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. Multimedia Systems 2023; 29:1603-1627. [PMID: 37261262] [PMCID: PMC10039775] [DOI: 10.1007/s00530-023-01083-0]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction test (RT-PCR) or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
12. Bhosale YH, Patnaik KS. Bio-medical imaging (X-ray, CT, ultrasound, ECG), genome sequences applications of deep neural network and machine learning in diagnosis, detection, classification, and segmentation of COVID-19: a Meta-analysis & systematic review. Multimedia Tools and Applications 2023:1-54. [PMID: 37362676] [PMCID: PMC10015538] [DOI: 10.1007/s11042-023-15029-1]
Abstract
This review investigates how deep machine learning (DML) has dealt with the COVID-19 epidemic and provides recommendations for future COVID-19 research. Although vaccines for this epidemic have been developed, DL methods have proven to be a valuable asset in radiologists' arsenals for the automated assessment of COVID-19. This detailed review discusses the techniques and applications developed for COVID-19 findings using DL systems. It also provides insights into notable datasets used to train neural networks, data partitioning, and various performance measurement metrics. The PRISMA taxonomy has been formed based on pretrained (45 systems) and hybrid/custom (17 systems) models with radiography modalities. A total of 62 systems across X-ray (32), CT (19), ultrasound (7), ECG (2), and genome-sequence (2) modalities are selected from the studied articles. We begin by assessing the present state of DL and conclude with its significant limitations, which include lack of interpretability, limited generalizability, learning from incompletely labeled data, and data privacy. Moreover, DML can be utilized to detect and classify COVID-19 against other COPD illnesses. The proposed literature review has found many DL-based systems to fight against COVID-19. We expect this article will help speed up the adoption of DL for COVID-19 researchers, including clinicians, radiology technicians, and data engineers.
Affiliation(s)
- Yogesh H. Bhosale
- Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India
- K. Sridhar Patnaik
- Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India

13
Kolarik M, Sarnovsky M, Paralic J, Babic F. Explainability of deep learning models in medical video analysis: a survey. PeerJ Comput Sci 2023; 9:e1253. [PMID: 37346619 PMCID: PMC10280416 DOI: 10.7717/peerj-cs.1253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 01/20/2023] [Indexed: 06/23/2023]
Abstract
Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have performed significantly better than other traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may therefore affect the possibility of adopting such models in clinical use. While various studies review approaches to explainability across multiple domains, this article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area.
Affiliation(s)
- Michal Kolarik
- Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Martin Sarnovsky
- Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Jan Paralic
- Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Frantisek Babic
- Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia

14
Mahalakshmi V, Balobaid A, Kanisha B, Sasirekha R, Ramkumar Raja M. Artificial Intelligence: A Next-Level Approach in Confronting the COVID-19 Pandemic. Healthcare (Basel) 2023; 11:healthcare11060854. [PMID: 36981511 PMCID: PMC10048108 DOI: 10.3390/healthcare11060854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 02/27/2023] [Accepted: 03/02/2023] [Indexed: 03/15/2023] Open
Abstract
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which caused coronavirus disease (COVID-19) in late 2019 in China, created devastating economic losses and loss of human lives. To date, 11 variants have been identified, with minimum to maximum severity of infection and surges in cases. Bacterial co-infection/secondary infection is identified during viral respiratory infection and is a vital reason for morbidity and mortality. The occurrence of secondary infections is an additional burden to the healthcare system; therefore, the quick diagnosis of both COVID-19 and secondary infections will reduce work pressure on healthcare workers. Well-established support from Artificial Intelligence (AI) could therefore reduce the stress in healthcare and even help in creating novel products to defend against the coronavirus. AI is one of the rapidly growing fields, with numerous applications for the healthcare sector. The present review aims to assess the recent literature on the role of AI and how its subfamilies, machine learning (ML) and deep learning (DL), are used to curb the pandemic's effects. We discuss the role of AI in COVID-19 infections, the detection of secondary infections, technology-assisted protection from COVID-19, global laws and regulations on AI, and the impact of the pandemic on public life.
Affiliation(s)
- V. Mahalakshmi
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan 45142, Saudi Arabia
- Correspondence:
- Awatef Balobaid
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan 45142, Saudi Arabia
- B. Kanisha
- Department of Computer Science and Engineering, School of Computing, College of Engineering and Technology, SRM Institute of Science and Technology, Chengalpattu 603203, India
- R. Sasirekha
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur Campus, Chengalpattu 603203, India
- M. Ramkumar Raja
- Department of Electrical Engineering, College of Engineering, King Khalid University, Abha 62529, Saudi Arabia

15
Vinod DN, Prabaharan SRS. COVID-19-The Role of Artificial Intelligence, Machine Learning, and Deep Learning: A Newfangled. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023; 30:2667-2682. [PMID: 36685135 PMCID: PMC9843670 DOI: 10.1007/s11831-023-09882-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 01/05/2023] [Indexed: 05/29/2023]
Abstract
The first infections with the novel coronavirus (COVID-19) were found in Wuhan, China, in December 2019. The COVID-19 epidemic has spread to more than 220 nations and territories globally and has influenced every part of our day-to-day lives. As of 9th March 2022, a total of 447,882,185 infected (6,007,317 dead) COVID-19 cases were reported all over the world. The numbers of infected cases and deaths still increase significantly and do not indicate a controlled situation. The scope of this paper is to address this issue by presenting a comprehensive and comparative analysis of the existing Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) based approaches used in responding to the COVID-19 epidemic and diagnosing its severe impacts. The paper provides, firstly, an overview of COVID-19 infection and the highlights of this article; secondly, an overview of various executive innovations that utilize different resources to stop the spread of COVID-19; thirdly, a comparison of existing COVID-19 prediction methods in the literature, with a focus on ML, DL and AI-driven techniques and their performance metrics; and finally, a discussion of the results of the work as well as future scope.
Affiliation(s)
- Dasari Naga Vinod
- Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamil Nadu 600062 India
- S. R. S. Prabaharan
- Sathyabama Centre for Advanced Studies, Sathyabama Institute of Science and Technology, Rajiv Gandhi Salai, Chennai, Tamil Nadu 600119 India

16
Arteaga-Marrero N, Villa E, Llanos González AB, Gómez Gil ME, Fernández OA, Ruiz-Alzola J, González-Fernández J. Low-Cost Pseudo-Anthropomorphic PVA-C and Cellulose Lung Phantom for Ultrasound-Guided Interventions. Gels 2023; 9:gels9020074. [PMID: 36826245 PMCID: PMC9957311 DOI: 10.3390/gels9020074] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 01/11/2023] [Accepted: 01/13/2023] [Indexed: 01/19/2023] Open
Abstract
A low-cost, custom-made, pseudo-anthropomorphic lung phantom, offering a model for ultrasound-guided interventions, is presented. The phantom is a rectangular solid structure fabricated with polyvinyl alcohol cryogel (PVA-C) and cellulose to mimic the healthy parenchyma. The pathologies of interest were embedded as inclusions containing gaseous, liquid, or solid materials. The ribs were 3D-printed using polyethylene terephthalate, and the pleura was made of a bidimensional reticle based on PVA-C. The healthy and pathological tissues were mimicked to display acoustic and echoic properties similar to those of soft tissues. The flexible fabrication process facilitated the modification of the physical and acoustic properties of the phantom, and its manufacture offers flexibility regarding the number, shape, location, and composition of the inclusions and the insertion of ribs and pleura. In-plane and out-of-plane needle insertions, fine needle aspiration, and core needle biopsy were performed under ultrasound image guidance. The mimicked tissues displayed the resistance and recoil effect typically encountered in a real scenario for a pneumothorax, abscesses, and neoplasms. The presented phantom accurately replicated thoracic tissues (lung, ribs, and pleura) and associated pathologies, providing a useful tool for training ultrasound-guided procedures.
Affiliation(s)
- Natalia Arteaga-Marrero
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Enrique Villa
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Correspondence:
- Ana Belén Llanos González
- Departamento de Neumología, Complejo Universitario de Canarias (HUC), 38320 San Cristóbal de La Laguna, Spain
- Marta Elena Gómez Gil
- Departamento de Radiología, Complejo Universitario de Canarias (HUC), 38320 San Cristóbal de La Laguna, Spain
- Orlando Acosta Fernández
- Departamento de Neumología, Complejo Universitario de Canarias (HUC), 38320 San Cristóbal de La Laguna, Spain
- Juan Ruiz-Alzola
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Instituto Universitario de Investigaciones Biomédicas y Sanitarias (IUIBS), Universidad de Las Palmas de Gran Canaria, 35016 Las Palmas de Gran Canaria, Spain
- Departamento de Señales y Comunicaciones, Universidad de Las Palmas de Gran Canaria, 35016 Las Palmas de Gran Canaria, Spain
- Javier González-Fernández
- Departamento de Ingeniería Biomédica, Instituto Tecnológico de Canarias (ITC), 38009 Santa Cruz de Tenerife, Spain

17
Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23020634. [PMID: 36679430 PMCID: PMC9862413 DOI: 10.3390/s23020634] [Citation(s) in RCA: 27] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 12/14/2022] [Accepted: 12/29/2022] [Indexed: 05/27/2023]
Abstract
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient's symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- The Laboratory for Imagery, Vision and Artificial Intelligence, Ecole de Technologie Superieure, 1100 Rue Notre Dame O, Montreal, QC H3C 1K3, Canada
- Jihao Peng
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Jian Xu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Ahmed Bouridane
- Centre for Data Analytics and Cybersecurity, University of Sharjah, Sharjah 27272, United Arab Emirates

18
Tuncer I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, Yeong CH, Acharya UR. Swin-textural: A novel textural features-based image classification model for COVID-19 detection on chest computed tomography. INFORMATICS IN MEDICINE UNLOCKED 2023; 36:101158. [PMID: 36618887 PMCID: PMC9804964 DOI: 10.1016/j.imu.2022.101158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 12/30/2022] [Accepted: 12/30/2022] [Indexed: 01/01/2023] Open
Abstract
Background Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We proposed a new image classification model, swin-textural, that combined swin-based patch division with textural feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering. Material and method We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings, respectively. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from individual patches as well as the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors, and also to select an 11th vector from among the top selected feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors, from which iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm. Results Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset. The model has linear time complexity. Conclusions Our handcrafted, computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates and can be implemented in hospitals for efficient automated screening. Moreover, our findings demonstrate that swin-textural is a self-organized, highly accurate, and lightweight image classification model that outperforms the compared deep learning models on this dataset.
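The multi-scale patch division with per-patch textural features described in this abstract can be sketched as follows. This is an illustration, not the authors' implementation: the patch sizes, the plain 3×3 local binary pattern (the paper additionally uses local phase quantization and ten layers), and the function names are assumptions.

```python
import numpy as np

def local_binary_pattern(img):
    """3x3 LBP: code each interior pixel by thresholding its 8 neighbors."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin histogram of LBP codes for one image or patch."""
    return np.bincount(local_binary_pattern(img).ravel(), minlength=256)

def swin_style_features(image, patch_sizes=(210, 140, 105)):
    """Divide the image into uniform square patches at several scales and
    concatenate per-patch histograms plus the undivided-image histogram."""
    feats = [lbp_histogram(image)]
    for p in patch_sizes:
        for y in range(0, image.shape[0] - p + 1, p):
            for x in range(0, image.shape[1] - p + 1, p):
                feats.append(lbp_histogram(image[y:y + p, x:x + p]))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
ct = rng.integers(0, 256, (420, 420), dtype=np.uint8)  # stand-in CT slice
v = swin_style_features(ct)
print(v.shape)  # (7680,): (1 + 4 + 9 + 16) patch histograms x 256 bins
```

In the paper the resulting vectors then go through iterative neighborhood component analysis and a kNN classifier; here only the feature-engineering stage is shown.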
Affiliation(s)
- Ilknur Tuncer
- Elazig Governorship, Interior Ministry, Elazig, Turkey
- Prabal Datta Barua
- School of Business (Information System), University of Southern Queensland, Toowoomba, QLD, 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin
- Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan
- Department of Cardiology, National Heart Centre Singapore, Singapore
- Duke-NUS Medical School, Singapore
- Chai Hong Yeong
- School of Medicine, Faculty of Health and Medical Sciences, Taylor's University, 47500, Subang Jaya, Malaysia
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan

19
Monkam P, Lu W, Jin S, Shan W, Wu J, Zhou X, Tang B, Zhao H, Zhang H, Ding X, Chen H, Su L. US-Net: A lightweight network for simultaneous speckle suppression and texture enhancement in ultrasound images. Comput Biol Med 2023; 152:106385. [PMID: 36493732 DOI: 10.1016/j.compbiomed.2022.106385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Revised: 11/21/2022] [Accepted: 11/28/2022] [Indexed: 12/02/2022]
Abstract
BACKGROUND Numerous traditional filtering approaches and deep learning-based methods have been proposed to improve the quality of ultrasound (US) image data. However, their results tend to suffer from over-smoothing and loss of texture and fine details. Moreover, they perform poorly on images with different degradation levels and mainly focus on speckle reduction, even though texture and fine detail enhancement are of crucial importance in clinical diagnosis. METHODS We propose an end-to-end framework termed US-Net for simultaneous speckle suppression and texture enhancement in US images. The architecture of US-Net is inspired by U-Net, whereby a feature refinement attention block (FRAB) is introduced to enable an effective learning of multi-level and multi-contextual representative features. Specifically, FRAB aims to emphasize high-frequency image information, which helps boost the restoration and preservation of fine-grained and textural details. Furthermore, our proposed US-Net is trained essentially with real US image data, whereby real US images embedded with simulated multi-level speckle noise are used as an auxiliary training set. RESULTS Extensive quantitative and qualitative experiments indicate that although trained with only one US image data type, our proposed US-Net is capable of restoring images acquired from different body parts and scanning settings with different degradation levels, while exhibiting favorable performance against state-of-the-art image enhancement approaches. Furthermore, utilizing our proposed US-Net as a pre-processing stage for COVID-19 diagnosis results in a gain of 3.6% in diagnostic accuracy. CONCLUSIONS The proposed framework can help improve the accuracy of ultrasound diagnosis.
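The auxiliary training set described above (real images embedded with simulated multi-level speckle) can be sketched with a simple multiplicative noise model. This is a minimal illustration under assumptions: the noise distribution, the sigma levels, and the function names are not from the paper.

```python
import numpy as np

def add_speckle(image, sigma, rng):
    """Multiplicative speckle model: scale each pixel by (1 + n) with
    n ~ N(0, sigma^2), then clip back to the [0, 1] intensity range."""
    noise = rng.normal(0.0, sigma, image.shape)
    return np.clip(image * (1.0 + noise), 0.0, 1.0)

def make_auxiliary_set(images, sigmas=(0.05, 0.15, 0.3), rng=None):
    """Embed each real image at several speckle levels and return
    (degraded, clean-target) pairs for training a restoration network."""
    rng = rng or np.random.default_rng(0)
    return [(add_speckle(im, s, rng), im) for im in images for s in sigmas]

rng = np.random.default_rng(1)
clean = [rng.random((64, 64)) for _ in range(4)]  # stand-in US frames
pairs = make_auxiliary_set(clean)
print(len(pairs))  # 4 images x 3 noise levels = 12 training pairs
```

Training on several degradation levels at once is what lets a single network handle inputs of varying quality, which is the behavior the abstract reports.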
Affiliation(s)
- Patrice Monkam
- Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology (BNRist), China.
- Wenkai Lu
- Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology (BNRist), China.
- Songbai Jin
- Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology (BNRist), China.
- Wenjun Shan
- Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology (BNRist), China.
- Jing Wu
- Department of Automation, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology (BNRist), China.
- Xiang Zhou
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Bo Tang
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Hua Zhao
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Hongmin Zhang
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Xin Ding
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Huan Chen
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.
- Longxiang Su
- Department of Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China.

20
Contrasting EfficientNet, ViT, and gMLP for COVID-19 Detection in Ultrasound Imagery. J Pers Med 2022; 12:jpm12101707. [PMID: 36294846 PMCID: PMC9605641 DOI: 10.3390/jpm12101707] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 09/19/2022] [Accepted: 10/10/2022] [Indexed: 11/06/2022] Open
Abstract
A timely diagnosis of coronavirus is critical in order to control the spread of the virus. To aid in this, we propose in this paper a deep learning-based approach for detecting coronavirus patients using ultrasound imagery. We propose to exploit transfer learning with an EfficientNet model pre-trained on the ImageNet dataset for the classification of ultrasound images of suspected patients. In particular, we contrast the results of EfficientNet-B2 with the results of ViT and gMLP. Then, we show the results of the three models learning from scratch, i.e., without transfer learning. We view the detection problem from a multiclass classification perspective by classifying images as COVID-19, pneumonia, or normal. In the experiments, we evaluated the models on a publicly available ultrasound dataset. This dataset consists of 261 recordings (202 videos + 59 images) belonging to 216 distinct patients. The best results were obtained using EfficientNet-B2 with transfer learning. In particular, we obtained precision, recall, and F1 scores of 95.84%, 99.88%, and 97.41%, respectively, for detecting the COVID-19 class. EfficientNet-B2 with transfer learning presented an overall accuracy of 96.79%, outperforming gMLP and ViT, which achieved accuracies of 93.03% and 92.82%, respectively.
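The per-class precision, recall, and F1 scores and the overall accuracy reported above follow the standard definitions, which can be computed from a multiclass confusion matrix as sketched below. The matrix values here are hypothetical, invented for illustration; they are not the paper's results.

```python
import numpy as np

def per_class_metrics(conf, cls):
    """Precision, recall, F1 for one class from a confusion matrix
    where conf[i, j] counts true-class-i images predicted as class j."""
    tp = conf[cls, cls]
    fp = conf[:, cls].sum() - tp   # other classes predicted as `cls`
    fn = conf[cls, :].sum() - tp   # `cls` images predicted as something else
    p, r = tp / (tp + fp), tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Hypothetical 3-class confusion matrix (rows/cols: COVID-19, pneumonia, normal)
conf = np.array([[98,  1,  1],
                 [ 3, 90,  7],
                 [ 2,  5, 93]])
precision, recall, f1 = per_class_metrics(conf, 0)   # COVID-19 class
accuracy = np.trace(conf) / conf.sum()               # overall accuracy
print(f"P={precision:.4f} R={recall:.4f} F1={f1:.4f} acc={accuracy:.4f}")
```

Computing the metrics per class, as the paper reports them for COVID-19, avoids the high accuracy a model could get on the majority class alone.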
21
Snider EJ, Hernandez-Torres SI, Avital G, Boice EN. Evaluation of an Object Detection Algorithm for Shrapnel and Development of a Triage Tool to Determine Injury Severity. J Imaging 2022; 8:jimaging8090252. [PMID: 36135417 PMCID: PMC9501864 DOI: 10.3390/jimaging8090252] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 09/07/2022] [Accepted: 09/12/2022] [Indexed: 01/25/2023] Open
Abstract
Emergency medicine in austere environments relies on ultrasound imaging as an essential diagnostic tool. Without extensive training, identifying abnormalities such as shrapnel embedded in tissue is challenging, and medical professionals with appropriate expertise are limited in resource-constrained environments. Incorporating artificial intelligence models to aid interpretation can reduce the skill gap, enabling identification of shrapnel and its proximity to important anatomical features for improved medical treatment. Here, we apply a deep learning object detection framework, YOLOv3, for shrapnel detection in various sizes and locations with respect to a neurovascular bundle. Ultrasound images were collected in a tissue phantom containing shrapnel, vein, artery, and nerve features. The YOLOv3 framework classifies the object types and identifies their locations. In the testing dataset, the model was successful at identifying each object class, with a mean Intersection over Union and average precision of 0.73 and 0.94, respectively. Furthermore, a triage tool was developed to quantify shrapnel distance from neurovascular features; it could notify the end user when a proximity threshold is surpassed and, thus, may warrant evacuation or surgical intervention. Overall, object detection models such as this will be vital to compensate for lack of expertise in ultrasound interpretation, increasing its availability for emergency and military medicine.
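The two quantities at the core of this abstract, the Intersection-over-Union detection metric and the proximity-threshold triage rule, can be sketched as below. This is not the authors' tool: the box coordinates, the 5 mm threshold, and the pixel spacing are invented for illustration.

```python
import math

def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) pixel boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def triage_flag(shrapnel, vessel, mm_per_px, threshold_mm=5.0):
    """Flag for intervention when the detected shrapnel center lies
    within threshold_mm of a neurovascular feature center."""
    cx = lambda r: ((r[0] + r[2]) / 2, (r[1] + r[3]) / 2)
    (ax, ay), (bx, by) = cx(shrapnel), cx(vessel)
    dist_mm = math.hypot(bx - ax, by - ay) * mm_per_px
    return dist_mm < threshold_mm, dist_mm

shrapnel_box = (100, 100, 120, 120)   # hypothetical detections, in pixels
artery_box = (130, 100, 160, 130)
flag, dist = triage_flag(shrapnel_box, artery_box, mm_per_px=0.1)
print(flag, round(dist, 2))   # True 3.54 -> within 5 mm, warn the user
```

A mean IoU of 0.73, as reported, means the predicted boxes overlap their ground-truth boxes well enough for a center-to-center distance rule like this to be meaningful.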
Affiliation(s)
- Eric J. Snider
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Guy Avital
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Trauma & Combat Medicine Branch, Surgeon General’s Headquarters, Israel Defense Forces, Ramat-Gan 52620, Israel
- Division of Anesthesia, Intensive Care & Pain Management, Tel-Aviv Sourasky Medical Center, Affiliated with the Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 64239, Israel
- Emily N. Boice
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Correspondence: Tel.: +1-210-539-8721

22
Bhosale YH, Patnaik KS. Application of Deep Learning Techniques in Diagnosis of Covid-19 (Coronavirus): A Systematic Review. Neural Process Lett 2022; 55:1-53. [PMID: 36158520 PMCID: PMC9483290 DOI: 10.1007/s11063-022-11023-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/29/2022] [Indexed: 01/09/2023]
Abstract
Covid-19 is now one of the most intense and severe illnesses of the twenty-first century. Covid-19 has already endangered the lives of millions of people worldwide due to its acute pulmonary effects. Image-based diagnostic techniques like X-ray, CT, and ultrasound are commonly employed to obtain a quick and reliable assessment of the clinical condition. Covid-19 identification from such clinical scans is exceedingly time-consuming, labor-intensive, and susceptible to human error. As a result, radiography imaging approaches using Deep Learning (DL) are consistently employed to achieve great results. Various artificial intelligence-based systems have been developed for the early prediction of coronavirus using radiography images. Specific DL methods such as CNNs and RNNs noticeably extract extremely critical characteristics, primarily in diagnostic imaging, and recent coronavirus studies have used these techniques on radiography image scans significantly. The disease, as well as the present pandemic, was studied using public and private data. A total of 64 pre-trained and custom DL models, organized as a taxonomy by imaging modality, are selected from the studied articles. The constraints relevant to DL-based techniques are sample selection, network architecture, training with minimally annotated databases, and security issues. This includes evaluating causal agents, pathophysiology, immunological reactions, and epidemiological illness. DL-based Covid-19 detection systems are the key focus of this review article, which is intended to help accelerate Covid-19 research.
Affiliation(s)
- Yogesh H. Bhosale
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi 835215 India
- K. Sridhar Patnaik
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi 835215 India

23
Boice EN, Hernandez Torres SI, Knowlton ZJ, Berard D, Gonzalez JM, Avital G, Snider EJ. Training Ultrasound Image Classification Deep-Learning Algorithms for Pneumothorax Detection Using a Synthetic Tissue Phantom Apparatus. J Imaging 2022; 8:jimaging8090249. [PMID: 36135414 PMCID: PMC9502699 DOI: 10.3390/jimaging8090249] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 08/20/2022] [Accepted: 09/07/2022] [Indexed: 11/17/2022] Open
Abstract
Ultrasound (US) imaging is a critical tool in emergency and military medicine because of its portability and immediate nature. However, proper image interpretation requires skill, limiting its utility in remote applications for conditions such as pneumothorax (PTX), which requires rapid intervention. Artificial intelligence has the potential to automate ultrasound image analysis for various pathophysiological conditions, but training such models requires large data sets, a means of real-time troubleshooting for ultrasound integration deployment, and large animal models or clinical testing. Here, we detail the development of a dynamic synthetic tissue phantom model for PTX and its use in training image classification algorithms. The model comprises a synthetic gelatin phantom cast in a custom 3D-printed rib mold and a lung-mimicking phantom. When compared to PTX images acquired in swine, images from the phantom were similar in both PTX-negative and PTX-positive mimicking scenarios. We then used a deep learning image classification algorithm, which we previously developed for shrapnel detection, to accurately predict the presence of PTX in swine images by training only on phantom image sets, highlighting the utility of a tissue phantom for AI applications.
Affiliation(s)
- Emily N. Boice
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Zechariah J. Knowlton
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- David Berard
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Jose M. Gonzalez
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Guy Avital
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Trauma & Combat Medicine Branch, Surgeon General’s Headquarters, Israel Defense Forces, Ramat-Gan 52620, Israel
- Division of Anesthesia, Intensive Care & Pain Management, Tel-Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 64239, Israel
- Eric J. Snider
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Correspondence: Tel.: +210-539-8721

24
Sultan SR. Association Between Lung Ultrasound Patterns and Pneumonia. Ultrasound Q 2022; 38:246-249. [PMID: 35235542 DOI: 10.1097/ruq.0000000000000598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
ABSTRACT Pneumonia is a common respiratory infection that affects the lungs. Lung ultrasound (LUS) is a portable, cost-effective imaging method, which is free of ionizing radiation and has been shown to be useful for evaluating pneumonia. The aim of this retrospective analytical study was to determine the association between lung ultrasound patterns and pneumonia. LUS patterns including consolidations, pleural line irregularities, A lines and B lines from 90 subjects (44 patients with confirmed pneumonia and 46 controls) were retrieved from a published open-access dataset, which was reviewed and approved by medical experts. A χ2 test was used for the comparison of categorical variables to determine the association between each LUS pattern and the presence of pneumonia. There is a significant association between LUS consolidation and the presence of pneumonia (P < 0.0001). Lung ultrasound A lines are significantly associated with the absence of pneumonia (P < 0.0001), whereas there are no associations between B lines or pleural line irregularities and pneumonia. Lung ultrasound consolidation is found to be associated with the presence of pneumonia. A lines are associated with healthy lungs, and there is no association of B lines and pleural irregularities with the presence of pneumonia. Further studies investigating LUS patterns together with clinical information and symptoms of patients with pneumonia are required.
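As a rough illustration of the χ2 test of association the study applies, the sketch below computes the statistic by hand for a hypothetical 2x2 table (LUS pattern present/absent vs. pneumonia/control); the counts are invented for illustration and are not the study's data.

```python
# Chi-squared test of association for a hypothetical 2x2 contingency table.
# Rows: consolidation seen / not seen; columns: pneumonia / control.
table = [[40, 5],
         [4, 41]]

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / n under independence.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)
dof = (2 - 1) * (2 - 1)  # degrees of freedom for a 2x2 table
```

A chi2 this large at 1 degree of freedom corresponds to P < 0.0001, matching the kind of result the abstract reports for consolidation.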
Affiliation(s)
- Salahaden R Sultan
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
25
Maximino J, Coimbra M, Pedrosa J. Detection of COVID-19 in Point of Care Lung Ultrasound. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1527-1530. [PMID: 36086665 DOI: 10.1109/embc48229.2022.9871235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The coronavirus disease 2019 (COVID-19) evolved into a global pandemic, responsible for a significant number of infections and deaths. In this scenario, point-of-care ultrasound (POCUS) has emerged as a viable and safe imaging modality. Computer vision (CV) solutions have been proposed to aid clinicians in POCUS image interpretation, namely detection/segmentation of structures and image/patient classification, but relevant challenges still remain. As such, the aim of this study is to develop CV algorithms, using deep learning techniques, to create tools that can aid doctors in the diagnosis of viral and bacterial pneumonia (VP and BP) through POCUS exams. To do so, convolutional neural networks were designed to perform classification tasks. The architectures chosen to build these models were VGG16, ResNet50, DenseNet169 and MobileNetV2. Patient images were divided into three classes: healthy (HE), BP and VP (which includes COVID-19). Through a comparative study based on several performance metrics, the model built on the DenseNet169 architecture was the best performing model, achieving an average accuracy of 78% across the five iterations of 5-fold cross-validation. Given that the currently available POCUS datasets for COVID-19 are still limited, training of the models was negatively affected, and the models were not tested on an independent dataset. Furthermore, it was also not possible to perform lesion detection tasks. Nonetheless, to provide explainability and understanding of the models, Gradient-weighted Class Activation Mapping (GradCAM) was used as a tool to highlight the regions most relevant to classification. Clinical relevance - Reveals the potential of POCUS to support COVID-19 screening. The results are very promising although the dataset is limited.
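For context, the 5-fold cross-validation protocol used in the comparative study can be sketched as a simple index splitter: each fold serves once as the validation set while the rest form the training set, and the five accuracies are averaged. The helper below is a generic illustration, not the authors' code.

```python
# Minimal sketch of k-fold cross-validation index splitting.
def kfold_indices(n_items, k=5):
    """Yield (train_indices, val_indices) for each of the k folds."""
    fold_sizes = [n_items // k + (1 if i < n_items % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))      # this fold validates
        val_set = set(val)
        train = [i for i in range(n_items) if i not in val_set]  # rest trains
        yield train, val
        start += size

folds = list(kfold_indices(10, k=5))  # 5 folds, 2 validation items each
```

Reporting the mean accuracy over the five folds, as the study does, reduces the variance that a single train/validation split would introduce on a small dataset.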
26
Boice EN, Hernandez-Torres SI, Snider EJ. Comparison of Ultrasound Image Classifier Deep Learning Algorithms for Shrapnel Detection. J Imaging 2022; 8:jimaging8050140. [PMID: 35621904 PMCID: PMC9144026 DOI: 10.3390/jimaging8050140] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 05/17/2022] [Accepted: 05/18/2022] [Indexed: 02/06/2023] Open
Abstract
Ultrasound imaging is essential in emergency medicine and combat casualty care, oftentimes used as a critical triage tool. However, identifying injuries, such as shrapnel embedded in tissue or a pneumothorax, can be challenging without extensive ultrasonography training, which may not be available in prolonged field care or emergency medicine scenarios. Artificial intelligence can simplify this by automating image interpretation, but only if it can be deployed for use in real time. We previously developed a deep learning neural network model specifically designed to identify shrapnel in ultrasound images, termed ShrapML. Here, we expand on that work to further optimize the model and compare its performance to that of conventional models trained on the ImageNet database, such as ResNet50. Through Bayesian optimization, the model's parameters were further refined, resulting in an F1 score of 0.98. We compared the proposed model to four conventional models: DarkNet-19, GoogleNet, MobileNetv2, and SqueezeNet, which were down-selected based on speed and testing accuracy. Although MobileNetv2 achieved a higher accuracy than ShrapML, there was a tradeoff between accuracy and speed, with ShrapML being 10× faster than MobileNetv2. In conclusion, real-time deployment of algorithms such as ShrapML can reduce the cognitive load for medical providers in high-stress emergency or military medicine scenarios.
27
An image classification deep-learning algorithm for shrapnel detection from ultrasound images. Sci Rep 2022; 12:8427. [PMID: 35589931 PMCID: PMC9117994 DOI: 10.1038/s41598-022-12367-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 05/06/2022] [Indexed: 01/01/2023] Open
Abstract
Ultrasound imaging is essential for non-invasively diagnosing injuries where advanced diagnostics may not be possible. However, image interpretation remains a challenge as proper expertise may not be available. In response, artificial intelligence algorithms are being investigated to automate image analysis and diagnosis. Here, we highlight an image classification convolutional neural network for detecting shrapnel in ultrasound images. As an initial application, different shrapnel types and sizes were embedded first in a tissue-mimicking phantom and then in swine thigh tissue. The algorithm architecture was optimized stepwise by minimizing validation loss and maximizing F1 score. The final algorithm design, trained on tissue phantom image sets, had an F1 score of 0.95 and an area under the ROC curve of 0.95. It maintained higher than 90% accuracy for each of the 8 shrapnel types. When trained only on swine image sets, the optimized algorithm achieved even higher metrics: F1 and area under the ROC curve of 0.99. Overall, the algorithm developed resulted in strong classification accuracy for both the tissue phantom and animal tissue. This framework can be applied to other trauma-relevant imaging applications, such as internal bleeding, to further simplify trauma medicine when resources and image interpretation are scarce.
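The F1 scores quoted in abstracts like this one combine precision and recall from the classifier's confusion counts; a minimal sketch, using invented counts rather than the study's, is:

```python
# F1 = harmonic mean of precision and recall, from confusion-matrix counts.
def f1_score(tp, fp, fn):
    """tp: true positives, fp: false positives, fn: false negatives."""
    precision = tp / (tp + fp)  # fraction of positive calls that are correct
    recall = tp / (tp + fn)     # fraction of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 95 hits, 5 false alarms, 5 misses.
f1 = f1_score(tp=95, fp=5, fn=5)  # precision = recall = 0.95, so F1 = 0.95
```

Because F1 ignores true negatives, it is a common choice for detection tasks like this one, where the negative class can dominate raw accuracy.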
28
De Rosa L, L'Abbate S, Kusmic C, Faita F. Applications of artificial intelligence in lung ultrasound: Review of deep learning methods for COVID-19 fighting. Artif Intell Med Imaging 2022; 3:42-54. [DOI: 10.35711/aimi.v3.i2.42] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 02/22/2022] [Accepted: 04/26/2022] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND The pandemic outbreak of the novel coronavirus disease (COVID-19) has highlighted the need to combine rapid, non-invasive and widely accessible techniques with the least risk of patient cross-infection to achieve successful early detection and surveillance of the disease. In this regard, the lung ultrasound (LUS) technique has proved invaluable in both the differential diagnosis and the follow-up of COVID-19 patients, and its potential is still evolving. Indeed, LUS has recently been empowered through the development of automated image processing techniques.
AIM To provide a systematic review of the application of artificial intelligence (AI) technology in medical LUS analysis of COVID-19 patients, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
METHODS A literature search was performed for relevant studies published from March 2020, the outbreak of the pandemic, to 30 September 2021. Seventeen articles were included in the result synthesis of this paper.
RESULTS As part of the review, we presented the main characteristics related to AI techniques, in particular deep learning (DL), adopted in the selected articles. A survey was carried out on the type of architectures used, availability of the source code, network weights and open access datasets, use of data augmentation, use of the transfer learning strategy, type of input data and training/test datasets, and explainability.
CONCLUSION Finally, this review highlighted the existing challenges, including the lack of large datasets of reliable COVID-19-based LUS images to test the effectiveness of DL methods and the ethical/regulatory issues associated with the adoption of automated systems in real clinical scenarios.
Affiliation(s)
- Laura De Rosa
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Serena L'Abbate
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Institute of Life Sciences, Scuola Superiore Sant’Anna, Pisa 56124, Italy
- Claudia Kusmic
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Francesco Faita
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
29
Erratum: Born et al. Accelerating Detection of Lung Pathologies with Explainable Ultrasound Image Analysis. Appl. Sci. 2021, 11, 672. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
The authors wish to make the following corrections to this paper [...]
30
Review of Machine Learning in Lung Ultrasound in COVID-19 Pandemic. J Imaging 2022; 8:jimaging8030065. [PMID: 35324620 PMCID: PMC8952297 DOI: 10.3390/jimaging8030065] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 03/01/2022] [Accepted: 03/02/2022] [Indexed: 12/25/2022] Open
Abstract
Ultrasound imaging of the lung has played an important role in managing patients with COVID-19–associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS) or point-of-care ultrasound (POCUS) has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearances of pleural line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation, and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool that assists clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review from academic databases (PubMed and Google Scholar) and preprints on arXiv or TechRxiv of the state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and showed high performance. This paper will summarize the current development of AI for COVID-19 management and the outlook for emerging trends of combining AI-based LUS with robotics, telehealth, and other techniques.
31
Frank O, Schipper N, Vaturi M, Soldati G, Smargiassi A, Inchingolo R, Torri E, Perrone T, Mento F, Demi L, Galun M, Eldar YC, Bagon S. Integrating Domain Knowledge Into Deep Networks for Lung Ultrasound With Applications to COVID-19. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:571-581. [PMID: 34606447 PMCID: PMC9014480 DOI: 10.1109/tmi.2021.3117246] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 09/26/2021] [Accepted: 09/29/2021] [Indexed: 05/18/2023]
Abstract
Lung ultrasound (LUS) is a cheap, safe and non-invasive imaging modality that can be performed at the patient's bedside. However, to date LUS is not widely adopted due to a lack of the trained personnel required for interpreting the acquired LUS frames. In this work we propose a framework for training deep artificial neural networks for interpreting LUS, which may promote broader use of LUS. When using LUS to evaluate a patient's condition, both anatomical phenomena (e.g., the pleural line, presence of consolidations) and sonographic artifacts (such as A- and B-lines) are of importance. In our framework, we integrate domain knowledge into deep neural networks by inputting anatomical features and LUS artifacts in the form of additional channels containing pleural and vertical-artifact masks along with the raw LUS frames. By explicitly supplying this domain knowledge, standard off-the-shelf neural networks can be rapidly and efficiently finetuned to accomplish various tasks on LUS data, such as frame classification or semantic segmentation. Our framework allows for a unified treatment of LUS frames captured by either convex or linear probes. We evaluated our proposed framework on the task of COVID-19 severity assessment using the ICLUS dataset. In particular, we finetuned simple image classification models to predict per-frame COVID-19 severity scores. We also trained a semantic segmentation model to predict per-pixel COVID-19 severity annotations. Using the combined raw LUS frames and the detected lines for both tasks, our off-the-shelf models performed better than complicated models specifically designed for these tasks, exemplifying the efficacy of our framework.
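The extra-channel idea can be pictured as stacking the raw frame with the two masks before the tensor enters the network; the toy shapes and values below are stand-ins for illustration, not the paper's actual data pipeline.

```python
# Toy sketch of the input construction described above: the raw B-mode frame
# is stacked channel-wise with a pleural-line mask and a vertical-artifact
# (B-line) mask, giving the network a (3, H, W) input instead of (1, H, W).
H, W = 4, 6
raw_frame     = [[0.5] * W for _ in range(H)]  # grayscale intensities
pleural_mask  = [[0.0] * W for _ in range(H)]  # 1.0 where the pleural line lies
artifact_mask = [[0.0] * W for _ in range(H)]  # 1.0 where B-lines are detected
pleural_mask[1][2] = 1.0  # mark one pleural-line pixel for illustration

model_input = [raw_frame, pleural_mask, artifact_mask]  # channel-first stack
n_channels = len(model_input)
```

An off-the-shelf CNN then only needs its first convolution widened to accept three input channels; the rest of the architecture is unchanged, which is what makes the finetuning cheap.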
32
The adoption of deep learning interpretability techniques on diabetic retinopathy analysis: a review. Med Biol Eng Comput 2022; 60:633-642. [PMID: 35083634 PMCID: PMC8791699 DOI: 10.1007/s11517-021-02487-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 11/14/2021] [Indexed: 11/01/2022]
Abstract
Diabetic retinopathy (DR) is a chronic eye condition that is growing rapidly due to the prevalence of diabetes. Challenges such as the dearth of ophthalmologists, healthcare resources, and facilities leave many patients without appropriate eye screening services. As a result, deep learning (DL) has the potential to play a critical role as a powerful automated diagnostic tool in the field of ophthalmology, particularly in the early detection of DR when compared to traditional detection techniques. Despite their wide adoption, DL models are known as black boxes: they make no attempt to explain how the model learns representations or why it makes a particular prediction. This black-box architecture makes it difficult for intended end-users, such as ophthalmologists, to grasp how the models function, preventing model acceptance for clinical use. Recently, several studies on the interpretability of DL methods used in DR-related tasks such as DR classification and segmentation have been published. The goal of this paper is to provide a detailed overview of interpretability strategies used in DR-related tasks. This paper also includes the authors' insights and future directions in the field of DR to help the research community overcome research problems.
33
Zhang R, Meng F, Li H, Wu Q, Ngan KN. Category boundary re-decision by component labels to improve generation of class activation map. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.10.072] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
34
Zhao L, Lediju Bell MA. A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. BME FRONTIERS 2022; 2022:9780173. [PMID: 36714302 PMCID: PMC9880989 DOI: 10.34133/2022/9780173] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
35
Fang X, Li W, Huang J, Li W, Feng Q, Han Y, Ding X, Zhang J. Ultrasound image intelligent diagnosis in community-acquired pneumonia of children using convolutional neural network-based transfer learning. Front Pediatr 2022; 10:1063587. [PMID: 36507139 PMCID: PMC9729936 DOI: 10.3389/fped.2022.1063587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 11/11/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Studies show that lung ultrasound (LUS) can accurately diagnose community-acquired pneumonia (CAP) and keep children away from radiation; however, it takes a long time and requires experienced doctors. Therefore, a robust, automatic and computer-based diagnosis of LUS is essential. OBJECTIVE To construct and analyze convolutional neural networks (CNNs) based on transfer learning (TL) to explore the feasibility of ultrasound image diagnosis and grading in CAP of children. METHODS 89 children with suspected CAP were prospectively enrolled. Clinical data were collected, an LUS image database comprising 916 LUS images was established, and the diagnostic value of LUS in CAP was analyzed. We employed pre-trained models (AlexNet, VGG 16, VGG 19, Inception v3, ResNet 18, ResNet 50, DenseNet 121 and DenseNet 201) to perform CAP diagnosis and grading on the LUS database and evaluated the performance of each model. RESULTS Among the 89 children, 24 were in the non-CAP group, and 65 were finally diagnosed with CAP, including 44 in the mild group and 21 in the severe group. LUS was highly consistent with clinical diagnosis, CXR and chest CT (kappa values = 0.943, 0.837, 0.835). Experimental results revealed that, after k-fold cross-validation, Inception v3 obtained the best diagnostic accuracy, PPV, sensitivity and AUC of 0.87 ± 0.02, 0.90 ± 0.03, 0.92 ± 0.04 and 0.82 ± 0.04, respectively, of all the pre-trained models on our dataset. Likewise, for severity classification, Inception v3 achieved the best accuracy, PPV and specificity of 0.75 ± 0.03, 0.89 ± 0.05 and 0.80 ± 0.10. CONCLUSIONS LUS is a reliable method for diagnosing CAP in children. Experiments showed that, after transfer learning, the CNN models successfully diagnosed and classified LUS of CAP in children; of these, Inception v3 achieved the best performance and may serve as a tool for further research and development of an AI automatic LUS diagnosis system in clinical applications. REGISTRATION www.chictr.org.cn ChiCTR2200057328.
Affiliation(s)
- Xiaohui Fang
- Department of Pediatrics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wen Li
- Department of Ultrasound Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Junjie Huang
- Department of Electronic Information, Shanghai Ocean University, Shanghai, China
- Weimei Li
- Department of Ultrasound Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qingzhong Feng
- Department of Electronic Information, Shanghai Ocean University, Shanghai, China
- Yanlin Han
- Department of Electronic Information, Shanghai Ocean University, Shanghai, China
- Xiaowei Ding
- Department of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jinping Zhang
- Department of Pediatrics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
36
Gillman AG, Lunardo F, Prinable J, Belous G, Nicolson A, Min H, Terhorst A, Dowling JA. Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review. Phys Eng Sci Med 2021; 45:13-29. [PMID: 34919204 PMCID: PMC8678975 DOI: 10.1007/s13246-021-01093-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 12/13/2021] [Indexed: 12/31/2022]
Abstract
Objectives: To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 diseases using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case. To conduct a scoping review into the authors publishing such work. Methods: The Scopus database was queried and studies were screened for article type, and minimum source normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk-of-bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test. Findings: Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The ratio of exclusion for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, area under the receiver operating curve (AUC) = 0.924 ± 0.074 [0.810,0.991] and accuracy = 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC = 0.836 ± 0.126 [0.605,0.980] and accuracy = 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified. 
Interpretation: A majority of papers focus on the less clinically impactful diagnosis task, contrasted with prognosis, with a significant portion performing a clinically unnecessary task of differentiating COVID-19 from healthy. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
Affiliation(s)
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia.
- Febrio Lunardo
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- College of Science and Engineering, James Cook University, Australian Tropical Science Innovation Precinct, Townsville, QLD, 4814, Australia
- Joseph Prinable
- ACRF Image X Institute, University of Sydney, Level 2, Biomedical Building (C81), 1 Central Ave, Australian Technology Park, Eveleigh, Sydney, NSW, 2015, Australia
- Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Aaron Nicolson
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Hang Min
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Andrew Terhorst
- Data61, Commonwealth Scientific and Industrial Research Organisation, College Road, Sandy Bay, Hobart, TAS, 7005, Australia
- Jason A Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
37
Garcia Santa Cruz B, Bossa MN, Sölter J, Husch AD. Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021. [PMID: 34597937 DOI: 10.1101/2021.02.15.21251775] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Computer-aided diagnosis and stratification of COVID-19 based on chest X-ray suffers from weak bias assessment and limited quality control. Undetected bias, induced by inappropriate use of datasets and improper consideration of confounders, prevents the translation of prediction models into clinical practice. By adapting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 out of more than a hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in 201 papers published in peer-reviewed journals are not among these 9 datasets, thus leading to models with high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.
Affiliation(s)
- Beatriz Garcia Santa Cruz
- Centre Hospitalier de Luxembourg, 4, Rue Ernest Barble, Luxembourg L-1210, Luxembourg; Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg.
- Matías Nicolás Bossa
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg; Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, Brussels B-1050, Belgium
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Andreas Dominik Husch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
38
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. The prior success of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US), combined with AI techniques, have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
- Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
- Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
39
Horry MJ, Chakraborty S, Pradhan B, Fallahpoor M, Chegeni H, Paul M. Factors determining generalization in deep learning models for scoring COVID-CT images. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:9264-9293. [PMID: 34814345 DOI: 10.3934/mbe.2021456] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The COVID-19 pandemic has inspired unprecedented data collection and computer vision modelling efforts worldwide, focused on the diagnosis of COVID-19 from medical images. However, these models have found limited, if any, clinical application, due in part to unproven generalization to datasets beyond their source training corpus. This study investigates the generalizability of deep learning models using publicly available COVID-19 computed tomography data through cross-dataset validation. The predictive ability of these models for COVID-19 severity is assessed using an independent dataset that is stratified for COVID-19 lung involvement. Each inter-dataset study is performed using histogram equalization, and contrast-limited adaptive histogram equalization with and without a learning Gabor filter. We show that under certain conditions, deep learning models can generalize well to an external dataset, with F1 scores up to 86%. The best performing model shows predictive accuracy of between 75% and 96% for lung involvement scoring against an external expertly stratified dataset. From these results we identify key factors promoting deep learning generalization: primarily, the uniform acquisition of training images, and secondly, diversity in CT slice position.
Affiliation(s)
- Michael James Horry
- Center for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
- Subrata Chakraborty
- Center for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
- Biswajeet Pradhan
- Center for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
- Center of Excellence for Climate Change Research, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Earth Observation Center, Institute of Climate Change, Universiti Kebangsaan Malaysia, Selangor 43600, Malaysia
- Maryam Fallahpoor
- Center for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
- Hossein Chegeni
- Fellowship of Interventional Radiology Imaging Center, IranMehr General Hospital, Iran
- Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing, Mathematics, and Engineering, Charles Sturt University, Australia
40
Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021; 74:102225. [PMID: 34597937 PMCID: PMC8479314 DOI: 10.1016/j.media.2021.102225] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 08/29/2021] [Accepted: 09/02/2021] [Indexed: 12/23/2022]
Abstract
Computer-aided diagnosis and stratification of COVID-19 based on chest X-ray suffer from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets, and improper consideration of confounders, prevents the translation of prediction models into clinical practice. By adapting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 out of more than a hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in the 201 papers published in peer-reviewed journals are not among these 9 datasets, leading to models with a high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.
41
Magrelli S, Valentini P, De Rose C, Morello R, Buonsenso D. Classification of Lung Disease in Children by Using Lung Ultrasound Images and Deep Convolutional Neural Network. Front Physiol 2021; 12:693448. [PMID: 34512375 PMCID: PMC8432935 DOI: 10.3389/fphys.2021.693448] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 08/05/2021] [Indexed: 01/12/2023] Open
Abstract
Bronchiolitis is the most common cause of hospitalization of children in the first year of life, and pneumonia is the leading cause of infant mortality worldwide. Lung ultrasound (LUS) technology is a novel imaging diagnostic tool for the early detection of respiratory distress and offers several advantages due to its low cost, relative safety, portability, and easy repeatability. More precise and efficient diagnostic and therapeutic strategies are needed. Deep-learning-based computer-aided diagnosis (CADx) systems, using chest X-ray images, have recently demonstrated their potential as a screening tool for pulmonary disease (such as COVID-19 pneumonia). We present the first computer-aided diagnostic scheme for LUS images of pulmonary diseases in children. In this study, we trained from scratch four state-of-the-art deep-learning models (VGG19, Xception, Inception-v3 and Inception-ResNet-v2) for detecting children with bronchiolitis and pneumonia. In our experiments we used a dataset consisting of 5,907 images from 33 healthy infants, 3,286 images from 22 infants with bronchiolitis, and 4,769 images from 7 children suffering from bacterial pneumonia. Using four-fold cross-validation, we implemented one binary classification (healthy vs. bronchiolitis) and one three-class classification (healthy vs. bronchiolitis vs. bacterial pneumonia). Affine transformations were applied for data augmentation. Hyperparameters were optimized for the learning rate, dropout regularization, batch size, and epoch iteration. The Inception-ResNet-v2 model provides the highest classification performance when compared with the other models used on test sets: for healthy vs. bronchiolitis, it provides 97.75% accuracy, 97.75% sensitivity, and 97% specificity, whereas for healthy vs. bronchiolitis vs. bacterial pneumonia, the Inception-v3 model provides the best results with 91.5% accuracy, 91.5% sensitivity, and 95.86% specificity.
We performed a gradient-weighted class activation mapping (Grad-CAM) visualization and the results were qualitatively evaluated by a pediatrician expert in LUS imaging: heatmaps highlight areas containing diagnostic-relevant LUS imaging-artifacts, e.g., A-, B-, pleural-lines, and consolidations. These complex patterns are automatically learnt from the data, thus avoiding hand-crafted features usage. By using LUS imaging, the proposed framework might aid in the development of an accessible and rapid decision support-method for diagnosing pulmonary diseases in children using LUS imaging.
Affiliation(s)
- Piero Valentini
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Global Health Research Institute, Istituto di Igiene, Università Cattolica del Sacro Cuore, Rome, Italy
- Cristina De Rose
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Rosa Morello
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Danilo Buonsenso
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Global Health Research Institute, Istituto di Igiene, Università Cattolica del Sacro Cuore, Rome, Italy
- Dipartimento di Scienze Biotecnologiche di Base, Cliniche Intensivologiche e Perioperatorie, Università Cattolica del Sacro Cuore, Rome, Italy
42
Diaz-Escobar J, Ordóñez-Guillén NE, Villarreal-Reyes S, Galaviz-Mosqueda A, Kober V, Rivera-Rodriguez R, Lozano Rizk JE. Deep-learning based detection of COVID-19 using lung ultrasound imagery. PLoS One 2021; 16:e0255886. [PMID: 34388187 PMCID: PMC8363024 DOI: 10.1371/journal.pone.0255886] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 07/27/2021] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND The COVID-19 pandemic has exposed the vulnerability of healthcare services worldwide, especially in underdeveloped countries. There is a clear need to develop novel computer-assisted diagnosis tools to provide rapid and cost-effective screening in places where massive traditional testing is not feasible. Lung ultrasound is a portable, easy to disinfect, low cost and non-invasive tool that can be used to identify lung diseases. Computer-assisted analysis of lung ultrasound imagery is a relatively recent approach that has shown great potential for diagnosing pulmonary conditions, being a viable alternative for screening and diagnosing COVID-19. OBJECTIVE To evaluate and compare the performance of deep-learning techniques for detecting COVID-19 infections from lung ultrasound imagery. METHODS We adapted different pre-trained deep learning architectures, including VGG19, InceptionV3, Xception, and ResNet50. We used the publicly available POCUS dataset comprising 3326 lung ultrasound frames of healthy, COVID-19, and pneumonia patients for training and fine-tuning. We conducted two experiments considering three classes (COVID-19, pneumonia, and healthy) and two classes (COVID-19 versus pneumonia and COVID-19 versus non-COVID-19) of predictive models. The obtained results were also compared with the POCOVID-net model. For performance evaluation, we calculated per-class classification metrics (Precision, Recall, and F1-score) and overall metrics (Accuracy, Balanced Accuracy, and Area Under the Receiver Operating Characteristic Curve). Lastly, we performed a statistical analysis of performance results using ANOVA and Friedman tests followed by post-hoc analysis using the Wilcoxon signed-rank test with the Holm's step-down correction. RESULTS InceptionV3 network achieved the best average accuracy (89.1%), balanced accuracy (89.3%), and area under the receiver operating curve (97.1%) for COVID-19 detection from bacterial pneumonia and healthy lung ultrasound data. 
The ANOVA and Friedman tests found statistically significant performance differences between models for accuracy, balanced accuracy and area under the receiver operating curve. Post-hoc analysis showed statistically significant differences between the performance obtained with the InceptionV3-based model and POCOVID-net, VGG19-, and ResNet50-based models. No statistically significant differences were found in the performance obtained with InceptionV3- and Xception-based models. CONCLUSIONS Deep learning techniques for computer-assisted analysis of lung ultrasound imagery provide a promising avenue for COVID-19 screening and diagnosis. Particularly, we found that the InceptionV3 network provides the most promising predictive results from all AI-based techniques evaluated in this work. InceptionV3- and Xception-based models can be used to further develop a viable computer-assisted screening tool for COVID-19 based on ultrasound imagery.
Affiliation(s)
- Julia Diaz-Escobar
- CICESE Research Center, Ensenada, Baja California, México
- Faculty of Science, UABC, Ensenada, Baja California, México
- Vitaly Kober
- CICESE Research Center, Ensenada, Baja California, México
- Department of Mathematics, Chelyabinsk State University, Chelyabinsk, Russia
43
Automated detection of pneumonia in lung ultrasound using deep video classification for COVID-19. INFORMATICS IN MEDICINE UNLOCKED 2021; 25:100687. [PMID: 34368420 PMCID: PMC8332742 DOI: 10.1016/j.imu.2021.100687] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 07/28/2021] [Accepted: 07/29/2021] [Indexed: 12/15/2022] Open
Abstract
There is a crucial need for quick testing and diagnosis of patients during the COVID-19 pandemic. Lung ultrasound is an imaging modality that is cost-effective, widely accessible, and can be used to diagnose acute respiratory distress syndrome in patients with COVID-19. It can be used to find important characteristics in the images, including A-lines, B-lines, consolidation, and pleural effusion, which all inform the clinician in monitoring and diagnosing the disease. With the use of portable ultrasound transducers, lung ultrasound images can be easily acquired; however, the images are often of poor quality. They often require expert clinician interpretation, which may be time-consuming and is highly subjective. We propose a method for fast and reliable interpretation of lung ultrasound images by use of deep learning, based on the Kinetics-I3D network. Our learned model can classify an entire lung ultrasound scan obtained at point-of-care, without requiring the use of preprocessing or a frame-by-frame analysis. We compare our video classifier against ground truth classification annotations provided by a set of expert radiologists and clinicians, which include A-lines, B-lines, consolidation, and pleural effusion. Our classification method achieves an accuracy of 90% and an average precision score of 95% with the use of 5-fold cross-validation. The results indicate the potential use of automated analysis of portable lung ultrasound images to assist clinicians in screening and diagnosing patients.
44
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed steadily compared to other medical imaging modalities. The characteristic issues of US imaging owing to its manual operation and acoustic shadows cause difficulties in image quality control. In this review, we would like to introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
45
Born J, Beymer D, Rajan D, Coy A, Mukherjee VV, Manica M, Prasanna P, Ballah D, Guindy M, Shaham D, Shah PL, Karteris E, Robertus JL, Gabrani M, Rosen-Zvi M. On the role of artificial intelligence in medical imaging of COVID-19. PATTERNS (NEW YORK, N.Y.) 2021; 2:100269. [PMID: 33969323 PMCID: PMC8086827 DOI: 10.1016/j.patter.2021.100269] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Although a plethora of research articles on AI methods for COVID-19 medical imaging have been published, their clinical value remains unclear. We conducted the largest systematic review of the literature addressing the utility of AI in imaging for COVID-19 patient care. By keyword searches on PubMed and preprint servers throughout 2020, we identified 463 manuscripts and performed a systematic meta-analysis to assess their technical merit and clinical relevance. Our analysis evidences a significant disparity between the clinical and AI communities, in the focus on both imaging modalities (AI experts neglected CT and ultrasound, favoring X-ray) and performed tasks (71.9% of AI papers centered on diagnosis). The vast majority of manuscripts were found to be deficient regarding potential use in clinical practice, but 2.7% (n = 12) of publications were assigned a high maturity level and are summarized in greater detail. We provide an itemized discussion of the challenges in developing clinically relevant AI solutions, with recommendations and remedies.
Affiliation(s)
- Jannis Born
- IBM Research Europe, Zurich, Switzerland
- Department for Biosystems Science & Engineering, ETH Zurich, Zurich, Switzerland
- Adam Coy
- IBM Almaden Research Center, San Jose, CA, USA
- Vision Radiology, Dallas, TX, USA
- Prasanth Prasanna
- IBM Almaden Research Center, San Jose, CA, USA
- Department of Radiology and Imaging Sciences, University of Utah Health Sciences Center, Salt Lake City, UT, USA
- Deddeh Ballah
- IBM Almaden Research Center, San Jose, CA, USA
- Department of Radiology, Seton Medical Center, Daly City, CA, USA
- Michal Guindy
- Assuta Medical Centres Radiology, Tel-Aviv, Israel
- Ben-Gurion University Medical School, Be'er Sheva, Israel
- Dorith Shaham
- Department of Radiology, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Pallav L. Shah
- Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Chelsea & Westminster Hospital, London, UK
- National Heart & Lung Institute, Imperial College London, London, UK
- Emmanouil Karteris
- College of Health, Medicine and Life Sciences, Brunel University London, London, UK
- Jan L. Robertus
- Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK
- National Heart & Lung Institute, Imperial College London, London, UK
- Michal Rosen-Zvi
- IBM Research Haifa, Haifa, Israel
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
Collapse
|