1
Prinzi F, Orlando A, Gaglio S, Vitabile S. Interpretable Radiomic Signature for Breast Microcalcification Detection and Classification. Journal of Imaging Informatics in Medicine 2024; 37:1038-1053. PMID: 38351223. DOI: 10.1007/s10278-024-01012-1.
Abstract
Breast microcalcifications are observed in 80% of mammograms, and a notable proportion can lead to invasive tumors. However, diagnosing microcalcifications is a highly complicated and error-prone process due to their diverse sizes, shapes, and subtle variations. In this study, we propose a radiomic signature that effectively differentiates between healthy tissue, benign microcalcifications, and malignant microcalcifications. Radiomic features were extracted from a proprietary dataset composed of 380 healthy-tissue, 136 benign-microcalcification, and 242 malignant-microcalcification ROIs. Subsequently, two distinct signatures were selected to differentiate between healthy tissue and microcalcifications (detection task) and between benign and malignant microcalcifications (classification task). Machine learning models, namely Support Vector Machine, Random Forest, and XGBoost, were employed as classifiers. The shared signature selected for both tasks was then used to train a multi-class model capable of simultaneously classifying healthy, benign, and malignant ROIs. A significant overlap was discovered between the detection and classification signatures. The performance of the models was highly promising, with XGBoost exhibiting an AUC-ROC of 0.830, 0.856, and 0.876 for healthy-tissue, benign, and malignant microcalcification classification, respectively. The intrinsic interpretability of radiomic features, together with the use of the Mean Score Decrease method for model introspection, enabled the models' clinical validation. In fact, the most important features, namely GLCM Contrast, FO Minimum, and FO Entropy, have also been reported as important in other studies on breast cancer.
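The workflow this abstract describes (extract radiomic features from ROIs, then train a classifier on them) can be sketched with two of the named features. Below is a minimal NumPy illustration of FO Minimum, FO Entropy, and a simplified GLCM Contrast; the random ROI, the 16-level quantization, and the single horizontal GLCM offset are simplifying assumptions, and a production pipeline would use a dedicated library such as PyRadiomics on normalized images:

```python
import numpy as np

def first_order_features(roi):
    """First-order (FO) statistics over the ROI's gray-level histogram."""
    hist, _ = np.histogram(roi, bins=16, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins before log
    return {"FO_Minimum": float(roi.min()),
            "FO_Entropy": float(-(p * np.log2(p)).sum())}

def glcm_contrast(roi, levels=16):
    """GLCM contrast for the horizontal neighbor offset (0, 1)."""
    q = (roi * levels // 256).astype(int)          # quantize to `levels` gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                            # count co-occurring level pairs
    glcm /= glcm.sum()
    rows, cols = np.indices(glcm.shape)
    return float((glcm * (rows - cols) ** 2).sum())

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(32, 32))          # stand-in for a mammogram ROI
features = first_order_features(roi)
features["GLCM_Contrast"] = glcm_contrast(roi)
print(features)
```

The resulting feature vector (one row per ROI) is what the SVM, Random Forest, and XGBoost classifiers would consume.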
Affiliation(s)
- Francesco Prinzi: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy; Department of Computer Science and Technology, University of Cambridge, Cambridge CB2 1TN, United Kingdom
- Alessia Orlando: Section of Radiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University Hospital "Paolo Giaccone", Palermo, Italy
- Salvatore Gaglio: Department of Engineering, University of Palermo, Palermo, Italy; Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), Palermo, Italy
- Salvatore Vitabile: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
2
Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024; 12:e52073. PMID: 38506918. PMCID: PMC10993141. DOI: 10.2196/52073.
Abstract
BACKGROUND Generative artificial intelligence (GenAI) tools and applications are increasingly used in health care. Physicians, specialists, and other providers have started using GenAI primarily as an aid to gather knowledge, provide information, train, or generate suggestive dialogue between physicians and patients or between physicians and patients' families or friends. However, unless the use of GenAI is oriented toward clinical service encounters that can improve the accuracy of diagnosis, treatment, and patient outcomes, its expected potential will not be achieved. As adoption continues, it is essential to validate the effectiveness of infusing GenAI as an intelligent technology into service encounters and to understand the gap in actual clinical service use of GenAI. OBJECTIVE This study synthesizes preliminary evidence on how GenAI assists, guides, and automates clinical service rendering and encounters in health care. The review scope was limited to articles published in peer-reviewed medical journals. METHODS We screened and selected 0.38% (161/42,459) of articles published between January 1, 2020, and May 31, 2023, identified from PubMed. We followed the protocols outlined in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select highly relevant studies with at least 1 element on clinical use, evaluation, and validation to provide evidence of GenAI use in clinical services. The articles were classified based on their relevance to clinical service functions or activities, using the descriptive and analytical information presented in the articles. RESULTS Of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%), in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%). The literature synthesis suggests that GenAI is mainly used for diagnostic processes, improvement of diagnostic accuracy, and screening, drawing on knowledge access. Although this addresses knowledge access and may improve diagnostic accuracy, it is not yet oriented toward higher value creation in health care. CONCLUSIONS GenAI informs rather than assists or automates clinical service functions in health care. GenAI has potential in clinical services, but that potential has yet to be actualized. More clinical service-level evidence is needed that GenAI streamlines functions or provides more automated help than information retrieval alone. To transform health care as purported, more studies must examine how GenAI applications can automate and guide human-performed services, keeping pace with the optimism that forward-thinking health care organizations will take advantage of GenAI.
Affiliation(s)
- Dobin Yim: Loyola University Maryland, MD, United States
- Jiban Khuntia: University of Colorado Denver, Denver, CO, United States
- Arlen Meyers: University of Colorado Denver, Denver, CO, United States
3
Casella B, Riviera W, Aldinucci M, Menegaz G. Protocol for training MERGE: A federated multi-input neural network for COVID-19 prognosis. STAR Protoc 2024; 5:102812. PMID: 38180836. PMCID: PMC10801336. DOI: 10.1016/j.xpro.2023.102812.
Abstract
Federated learning is a cooperative learning approach that has emerged as an effective way to address privacy concerns. Here, we present a protocol for training MERGE: a federated multi-input neural network (NN) for COVID-19 prognosis. We describe steps for collecting and preprocessing datasets. We then detail the process of training a multi-input NN. This protocol can be adapted for use with datasets containing both image- and table-based input sources. For complete details on the use and execution of this protocol, please refer to Casella et al.1.
Affiliation(s)
- Bruno Casella: Computer Science Department, University of Turin, 10149 Turin, Italy
- Walter Riviera: Computer Science Department, University of Verona, 37134 Verona, Italy
- Marco Aldinucci: Computer Science Department, University of Turin, 10149 Turin, Italy
- Gloria Menegaz: Engineering for Innovation Medicine Department, University of Verona, 37134 Verona, Italy
4
Lin J, Yang J, Yin M, Tang Y, Chen L, Xu C, Zhu S, Gao J, Liu L, Liu X, Gu C, Huang Z, Wei Y, Zhu J. Development and Validation of Multimodal Models to Predict the 30-Day Mortality of ICU Patients Based on Clinical Parameters and Chest X-Rays. Journal of Imaging Informatics in Medicine 2024. PMID: 38448758. DOI: 10.1007/s10278-024-01066-1.
Abstract
We aimed to develop and validate multimodal ICU patient prognosis models that combine clinical parameter data and chest X-ray (CXR) images. A total of 3798 subjects with clinical parameters and CXR images were extracted from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database and an external hospital (the test set). The primary outcome was 30-day mortality after ICU admission. Automated machine learning (AutoML) and convolutional neural networks (CNNs) were used to construct single-modal models based on clinical parameters and CXR separately. An early fusion approach was used to integrate both modalities into a multimodal model named PrismICU. Compared to the single-modal models, i.e., the clinical parameter model (AUC = 0.80, F1-score = 0.43) and the CXR model (AUC = 0.76, F1-score = 0.45), and to the APACHE II scoring system (AUC = 0.83, F1-score = 0.77), PrismICU (AUC = 0.95, F1-score = 0.95) showed improved performance in predicting 30-day mortality in the validation set. In the test set, PrismICU (AUC = 0.82, F1-score = 0.61) was also better than the clinical parameter model (AUC = 0.72, F1-score = 0.50), the CXR model (AUC = 0.71, F1-score = 0.36), and APACHE II (AUC = 0.62, F1-score = 0.50). PrismICU, which integrates clinical parameter data and CXR images, performed better than the single-modal models and the existing scoring system, supporting the potential of multimodal models based on structured data and imaging in clinical management.
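Early fusion, as used for PrismICU, concatenates the per-patient feature vectors of both modalities before a single classifier is trained. The NumPy sketch below uses synthetic data; the feature dimensions, labels, and the hand-rolled logistic-regression trainer are illustrative assumptions, not the paper's AutoML/CNN pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
clinical = rng.normal(size=(n, 8))   # tabular modality: labs, vitals (hypothetical)
cxr = rng.normal(size=(n, 32))       # imaging modality: CNN embedding (hypothetical)

# Early fusion: concatenate both modalities into one feature vector per
# patient, then train a single classifier on the fused representation.
X = np.concatenate([clinical, cxr], axis=1)
w_true = rng.normal(size=X.shape[1])
y = (X @ w_true > 0).astype(float)   # synthetic 30-day mortality labels

# Minimal logistic regression fitted by gradient descent on the fused vector.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / n

accuracy = float(((X @ w > 0) == (y == 1)).mean())
print(f"fused-model training accuracy: {accuracy:.2f}")
```

The alternative (late fusion) would train one model per modality and combine their predictions instead of their features.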
Affiliation(s)
- Jiaxi Lin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jin Yang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China
- Minyue Yin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Yuxiu Tang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China
- Liquan Chen: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China
- Chang Xu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Shiqi Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jingwen Gao: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Lu Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Xiaolin Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Chenqi Gu: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhou Huang: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yao Wei: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China
- Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
5
Prinzi F, Currieri T, Gaglio S, Vitabile S. Shallow and deep learning classifiers in medical image analysis. Eur Radiol Exp 2024; 8:26. PMID: 38438821. PMCID: PMC10912073. DOI: 10.1186/s41747-024-00428-2.
Abstract
An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which is nevertheless its most cited and used sub-branch of the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.
Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
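Once features are available, the shallow classifiers the review names can be compared in a few lines. An illustrative scikit-learn sketch on a synthetic feature matrix; GradientBoostingClassifier stands in for XGBoost here to keep dependencies minimal, and the dataset parameters are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic "radiomic-like" feature matrix: 300 cases, 20 features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

results = {}
for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Gradient Boosting", GradientBoostingClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    results[name] = scores.mean()
    print(f"{name}: {results[name]:.2f}")
```

A deep classifier would replace the hand-extracted `X` with raw images and learn the features itself, at the cost of larger data and compute requirements, as the key points note.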
Affiliation(s)
- Francesco Prinzi: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy; Department of Computer Science and Technology, University of Cambridge, Cambridge CB2 1TN, UK
- Tiziana Currieri: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Salvatore Gaglio: Department of Engineering, University of Palermo, Palermo, Italy; Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), Palermo, Italy
- Salvatore Vitabile: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
6
Alì M, Fantesini A, Morcella MT, Ibba S, D'Anna G, Fazzini D, Papa S. Adoption of AI in Oncological Imaging: Ethical, Regulatory, and Medical-Legal Challenges. Crit Rev Oncog 2024; 29:29-35. PMID: 38505879. DOI: 10.1615/critrevoncog.2023050584.
Abstract
Artificial Intelligence (AI) algorithms have shown great promise in oncological imaging, outperforming or matching radiologists in retrospective studies, signifying their potential for advanced screening capabilities. These AI tools offer valuable support to radiologists, assisting them in critical tasks such as prioritizing reporting, early cancer detection, and precise measurements, thereby bolstering clinical decision-making. With the healthcare landscape witnessing a surge in imaging requests and a decline in available radiologists, the integration of AI has become increasingly appealing. By streamlining workflow efficiency and enhancing patient care, AI presents a transformative solution to the challenges faced by oncological imaging practices. Nevertheless, successful AI integration necessitates navigating various ethical, regulatory, and medical-legal challenges. This review endeavors to provide a comprehensive overview of these obstacles, aiming to foster a responsible and effective implementation of AI in oncological imaging.
Affiliation(s)
- Marco Alì: Radiology Unit, CDI Centro Diagnostico Italiano, Via Simone Saint Bon 20, 20147 Milan, Italy
- Arianna Fantesini: Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, Naples, Italy; RE:LAB s.r.l., Via Tamburini 5, 42122 Reggio Emilia, Italy
- Simona Ibba: CDI Centro Diagnostico Italiano, Via Saint Bon 20, Milan, Italy
- Gennaro D'Anna: Neuroimaging Unit, ASST Ovest Milanese, Via Papa Giovanni Paolo II, Legnano (Milan), Italy
- Deborah Fazzini: CDI Centro Diagnostico Italiano, Via Saint Bon 20, Milan, Italy
- Sergio Papa: Radiology Unit, CDI Centro Diagnostico Italiano, Via Simone Saint Bon 20, 20147 Milan, Italy
7
Henao JAG, Depotter A, Bower DV, Bajercius H, Todorova PT, Saint-James H, de Mortanges AP, Barroso MC, He J, Yang J, You C, Staib LH, Gange C, Ledda RE, Caminiti C, Silva M, Cortopassi IO, Dela Cruz CS, Hautz W, Bonel HM, Sverzellati N, Duncan JS, Reyes M, Poellinger A. A Multiclass Radiomics Method-Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans. Invest Radiol 2023; 58:882-893. PMID: 37493348. PMCID: PMC10662611. DOI: 10.1097/rli.0000000000001005.
Abstract
OBJECTIVES The aim of this study was to evaluate the severity of COVID-19 patients' disease by comparing a multiclass lung lesion model to a single-class lung lesion model and to radiologists' assessments in chest computed tomography scans. MATERIALS AND METHODS The proposed method, AssessNet-19, was developed in 2 stages in this retrospective study. Four COVID-19-induced tissue lesions were manually segmented to train a 2D U-Net network for a multiclass segmentation task, followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using 2 multicenter cohorts: a development cohort of 145 COVID-19-positive patients from 3 centers, used to train and test the severity prediction model with manually segmented lung lesions, and an evaluation set of 90 COVID-19-positive patients from 2 centers, used to evaluate AssessNet-19 in a fully automated fashion. RESULTS AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, which was superior to the 3 expert thoracic radiologists (F1-score = 0.63 ± 0.02) and to the single-class lesion segmentation model (F1-score = 0.64 ± 0.02). In addition, the automated multiclass lesion segmentation obtained mean Dice scores of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures compared with ground truth. Moreover, it achieved high agreement with radiologists in quantifying disease extent, with Cohen κ values of 0.94, 0.92, and 0.95.
CONCLUSIONS A novel artificial intelligence multiclass radiomics model including 4 lung lesion types to assess disease severity based on the World Health Organization Clinical Progression Scale determines the severity of COVID-19 patients more accurately than a single-class model and radiologists' assessment.
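The tabular half of the pipeline above (LASSO to shrink the radiomic feature set, then a boosted-tree severity classifier) can be sketched as follows. The synthetic data, the L1-penalized logistic regression used as the LASSO-style selector, and scikit-learn's GradientBoostingClassifier standing in for XGBoost are all assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# 145 synthetic cases (matching the development-cohort size) with 100
# radiomic-like features, only a few of which are informative.
X, y = make_classification(n_samples=145, n_features=100, n_informative=10,
                           random_state=0)

# Stage 1: L1-penalized (LASSO-style) selection keeps only features whose
# coefficient survives the sparsity penalty.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector = SelectFromModel(lasso).fit(X, y)
X_sel = selector.transform(X)
print(f"{X.shape[1]} features reduced to {X_sel.shape[1]}")

# Stage 2: boosted trees classify severity from the reduced signature.
clf = GradientBoostingClassifier(random_state=0).fit(X_sel, y)
print(f"training accuracy: {clf.score(X_sel, y):.2f}")
```

In the paper the selected features come from the segmented lesions and the target is the WHO severity class rather than a binary label.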
8
Casella B, Riviera W, Aldinucci M, Menegaz G. MERGE: A model for multi-input biomedical federated learning. Patterns (N Y) 2023; 4:100856. PMID: 38035188. PMCID: PMC10682752. DOI: 10.1016/j.patter.2023.100856.
Abstract
Driven by the deep learning (DL) revolution, artificial intelligence (AI) has become a fundamental tool for many biomedical tasks, including analyzing and classifying diagnostic images. Imaging, however, is not the only source of information. Tabular data, such as personal and genomic data and blood test results, are routinely collected but rarely considered in DL pipelines. Nevertheless, DL requires large datasets that often must be pooled from different institutions, raising non-trivial privacy concerns. Federated learning (FL) is a cooperative learning paradigm that aims to address these issues by moving models instead of data across different institutions. Here, we present a federated multi-input architecture using images and tabular data as a methodology to enhance model performance while preserving data privacy. We evaluated it on two showcases: the prognosis of COVID-19 and patient stratification in Alzheimer's disease, providing evidence of enhanced accuracy and F1 scores against single-input models and improved generalizability against non-federated models.
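The core federated idea (models move, data stay) can be sketched as FedAvg-style weight averaging across institutions. This toy NumPy version, with a logistic-regression model and synthetic per-client data, illustrates the paradigm only; it is not the MERGE architecture, which is a multi-input neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few steps of local logistic-regression training at one institution."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three institutions with private data; raw data never leave the clients.
w_true = rng.normal(size=10)
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 10))
    clients.append((X, (X @ w_true > 0).astype(float)))

# Federated rounds: broadcast global weights, train locally, average.
w_global = np.zeros(10)
for _ in range(20):
    local = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)    # the server sees weights, not data

acc = float(np.mean([((X @ w_global > 0) == (y == 1)).mean() for X, y in clients]))
print(f"mean client accuracy: {acc:.2f}")
```

Only the weight vectors cross institutional boundaries, which is what addresses the pooling and privacy concerns described in the abstract.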
Affiliation(s)
- Bruno Casella: Department of Computer Science, University of Turin, 10149 Turin, Italy
- Walter Riviera: Department of Computer Science, University of Verona, 37134 Verona, Italy
- Marco Aldinucci: Department of Computer Science, University of Turin, 10149 Turin, Italy
- Gloria Menegaz: Department of Engineering for Innovation Medicine, University of Verona, 37134 Verona, Italy
9
Joni SS, Gerami R, Pashaei F, Ebrahiminik H, Karimi M. Quantitative evaluation of CT scan images to determinate the prognosis of COVID-19 patient using deep learning. Eur J Transl Myol 2023; 33:11571. PMID: 37491956. PMCID: PMC10583151. DOI: 10.4081/ejtm.2023.11571.
Abstract
The purpose of this research was to evaluate the accuracy of AI-assisted quantification in comparison to conventional CT parameters reviewed by a radiologist in predicting the severity, progression, and clinical outcome of disease. This cross-sectional study was conducted on patients diagnosed with COVID-19 who underwent a pulmonary CT scan between August 23, 2021 and December 21, 2022. The initial CT scan on admission was used for imaging analysis. The presence of ground-glass opacity (GGO) and consolidation was visually evaluated. The CT severity score was calculated according to a semi-quantitative method. In addition, AI-based quantification of GGO and consolidation volume was performed. 291 patients (mean age: 64.7 ± 7 years; 129 males) were included. GGO plus consolidation was more frequently revealed in the progress-to-severe group, whereas pure GGO was more likely to be found in the non-severe group. Compared to the non-severe group, patients in the progress-to-severe group had a larger GGO volume percentage (40.6% ± 11.9% versus 21.7% ± 8.8%, p < 0.001) as well as a larger consolidation volume percentage (4.8% ± 2% versus 1.9% ± 1%, p < 0.001). Among imaging parameters, consolidation volume percentage had the largest area under the curve (AUC) for discriminating the non-severe from the progress-to-severe group (AUC = 0.91, p < 0.001). According to multivariate regression, consolidation volume was the strongest predictor of disease progression. In conclusion, the consolidation volume measured on the initial chest CT was the most accurate predictor of disease progression, and a larger consolidation volume was associated with a poor clinical outcome. In patients with COVID-19, AI-assisted lesion quantification was useful for risk stratification and prognosis evaluation.
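At its simplest, the AI-assisted volume quantification the study relies on reduces to voxel counting over segmentation masks. A toy NumPy sketch; the masks here are random stand-ins, whereas a real pipeline would take them from a segmentation network applied to the CT volume:

```python
import numpy as np

rng = np.random.default_rng(1)
lung = np.ones((64, 64, 32), dtype=bool)       # lung mask for a toy CT volume
ggo = rng.random(lung.shape) < 0.20            # GGO mask (random stand-in)
consolidation = rng.random(lung.shape) < 0.05  # consolidation mask (stand-in)

# Lesion burden expressed as a percentage of the segmented lung volume,
# the quantity compared between severity groups in the abstract.
ggo_pct = 100.0 * (ggo & lung).sum() / lung.sum()
cons_pct = 100.0 * (consolidation & lung).sum() / lung.sum()
print(f"GGO volume: {ggo_pct:.1f}%  consolidation volume: {cons_pct:.1f}%")
```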
Affiliation(s)
- Saeid Sadeghi Joni: Department of Radiology, Faculty of Medicine, Aja University of Medical Sciences, Tehran, Iran
- Reza Gerami: Department of Radiology, Faculty of Medicine, Aja University of Medical Sciences, Tehran, Iran
- Fakhereh Pashaei: Radiation Sciences Research Center (RSRC), Aja University of Medical Sciences, Tehran, Iran
- Hojat Ebrahiminik: Department of Interventional Radiology and Radiation Sciences Research Center, Aja University of Medical Sciences, Tehran, Iran
- Mahmood Karimi: Department of Internal Medicine, Faculty of Medicine, Aja University of Medical Sciences, Tehran, Iran
10
Wang C, Liu S, Tang Y, Yang H, Liu J. Diagnostic Test Accuracy of Deep Learning Prediction Models on COVID-19 Severity: Systematic Review and Meta-Analysis. J Med Internet Res 2023; 25:e46340. PMID: 37477951. PMCID: PMC10403760. DOI: 10.2196/46340.
Abstract
BACKGROUND Deep learning (DL) prediction models hold great promise in the triage of COVID-19. OBJECTIVE We aimed to evaluate the diagnostic test accuracy of DL prediction models for assessing and predicting the severity of COVID-19. METHODS We searched PubMed, Scopus, LitCovid, Embase, Ovid, and the Cochrane Library for studies published from December 1, 2019, to April 30, 2022. Studies that used DL prediction models to assess or predict COVID-19 severity were included, while those without diagnostic test accuracy analysis or severity dichotomies were excluded. QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2), PROBAST (Prediction Model Risk of Bias Assessment Tool), and funnel plots were used to estimate bias and applicability. RESULTS A total of 12 retrospective studies involving 2006 patients reported the cross-sectional value of DL in assessing COVID-19 severity. The pooled sensitivity and area under the curve were 0.92 (95% CI 0.89-0.94; I2=0.00%) and 0.95 (95% CI 0.92-0.96), respectively. A total of 13 retrospective studies involving 3951 patients reported the longitudinal predictive value of DL for disease severity. The pooled sensitivity and area under the curve were 0.76 (95% CI 0.74-0.79; I2=0.00%) and 0.80 (95% CI 0.76-0.83), respectively. CONCLUSIONS DL prediction models can help clinicians identify potentially severe cases for early triage. However, high-quality research is lacking. TRIAL REGISTRATION PROSPERO CRD42022329252; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022329252.
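Pooling per-study sensitivities, as done in this meta-analysis, is commonly performed on the logit scale with inverse-variance weights. A fixed-effect sketch with made-up study counts; the review itself would use more sophisticated (e.g., bivariate random-effects) models, so this shows only the simplest variant:

```python
import math

# (true positives, total positives) per study; illustrative numbers only.
studies = [(45, 50), (90, 100), (28, 35)]

logits, weights = [], []
for tp, n in studies:
    sens = tp / n
    var = 1.0 / tp + 1.0 / (n - tp)          # approx. variance of the logit
    logits.append(math.log(sens / (1 - sens)))
    weights.append(1.0 / var)                # inverse-variance weight

# Weighted mean on the logit scale, then back-transform to a sensitivity.
pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.2f}")
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and gives larger, more precise studies proportionally more influence.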
Affiliation(s)
- Changyu Wang: Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China; West China College of Stomatology, Sichuan University, Chengdu, China
- Siru Liu: Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States
- Yu Tang: Xiangya School of Medicine, Central South University, Changsha, China
- Hao Yang: Information Center, West China Hospital, Sichuan University, Chengdu, China
- Jialin Liu: Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China; Information Center, West China Hospital, Sichuan University, Chengdu, China
11
Bradshaw TJ, Huemann Z, Hu J, Rahmim A. A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging. Radiol Artif Intell 2023; 5:e220232. PMID: 37529208. PMCID: PMC10388213. DOI: 10.1148/ryai.220232.
Abstract
Artificial intelligence (AI) is being increasingly used to automate and improve technologies within the field of medical imaging. A critical step in the development of an AI algorithm is estimating its prediction error through cross-validation (CV). The use of CV can help prevent overoptimism in AI algorithms and can mitigate certain biases associated with hyperparameter tuning and algorithm selection. This article introduces the principles of CV and provides a practical guide on the use of CV for AI algorithm development in medical imaging. Different CV techniques are described, as well as their advantages and disadvantages under different scenarios. Common pitfalls in prediction error estimation and guidance on how to avoid them are also discussed. Keywords: Education, Research Design, Technical Aspects, Statistics, Supervised Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2023.
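A central caution of such guides is that tuning hyperparameters and estimating prediction error on the same folds is overoptimistic; nested CV separates the two loops. A scikit-learn sketch, where the dataset and parameter grid are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop tunes the hyperparameter; the outer loop estimates prediction
# error on folds never seen during tuning.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {outer_scores.mean():.2f}")
```

Reporting the best inner-loop score instead of the outer-loop estimate is exactly the kind of bias the article warns against.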
12
Ibba S, Tancredi C, Fantesini A, Cellina M, Presta R, Montanari R, Papa S, Alì M. How do patients perceive the AI-radiologists interaction? Results of a survey on 2119 responders. Eur J Radiol 2023; 165:110917. PMID: 37327548. DOI: 10.1016/j.ejrad.2023.110917.
Abstract
PURPOSE In this study, we investigated how patients perceive the interaction between artificial intelligence (AI) and radiologists by means of a survey. METHOD We created a survey focused on the application of artificial intelligence in radiology, consisting of 20 questions distributed across three sections. Only completed questionnaires were considered for analysis. RESULTS 2119 subjects completed the survey. Among them, 1216 respondents were over 60 years old, showing interest in AI even though they were not digital natives. Although more than 45% of the respondents reported a high level of education, only 3% said they were AI experts. 87% of respondents favored using AI to support diagnosis but would like to be informed about its use. Only 10% would consult another specialist if their doctor used AI support. Most respondents (76%) said they would not feel comfortable if the diagnosis were made by the AI alone, highlighting the importance of the physician's role in the emotional management of the patient. Finally, 36% of respondents were willing to discuss the topic further in a focus group. CONCLUSION Patients' perception of the use of AI in radiology was positive, although still strictly linked to the supervision of the radiologist. Respondents showed interest and willingness to learn more about AI in the medical field, confirming that patients' confidence in AI technology and its acceptance are central to its widespread use in clinical practice.
Affiliation(s)
- Simona Ibba: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy.
- Chiara Tancredi: Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy.
- Arianna Fantesini: Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy; RE:LAB s.r.l., Via Tamburini, 5, 42122 Reggio Emilia, Italy.
- Michaela Cellina: Radiology Department, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy.
- Roberta Presta: Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy.
- Roberto Montanari: Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy; RE:LAB s.r.l., Via Tamburini, 5, 42122 Reggio Emilia, Italy.
- Sergio Papa: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy.
- Marco Alì: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy; Bracco Imaging S.p.A., Via Egidio Folli, 50, 20134 Milan, Italy.
13.
Suwalska A, Tobiasz J, Prazuch W, Socha M, Foszner P, Piotrowski D, Gruszczynska K, Sliwinska M, Walecki J, Popiela T, Przybylski G, Nowak M, Fiedor P, Pawlowska M, Flisiak R, Simon K, Zapolska G, Gizycka B, Szurowska E, Marczyk M, Cieszanowski A, Polanska J. POLCOVID: a multicenter multiclass chest X-ray database (Poland, 2020-2021). Sci Data 2023; 10:348. [PMID: 37268643] [DOI: 10.1038/s41597-023-02229-5]
Abstract
The outbreak of the SARS-CoV-2 pandemic pushed healthcare systems worldwide to their limits, resulting in increased waiting times for diagnosis and required medical assistance. With chest radiographs (CXR) being one of the most common COVID-19 diagnosis methods, many artificial intelligence tools for image-based COVID-19 detection have been developed, often trained on a small number of images from COVID-19-positive patients. Thus, the need for high-quality and well-annotated CXR image databases has increased. This paper introduces the POLCOVID dataset, containing chest X-ray (CXR) images of patients with COVID-19 or other types of pneumonia, and of healthy individuals, gathered from 15 Polish hospitals. The original radiographs are accompanied by preprocessed images limited to the lung area and by the corresponding lung masks obtained with a segmentation model. Moreover, manually created lung masks are provided for part of the POLCOVID dataset and for four other publicly available CXR image collections. The POLCOVID dataset can help in pneumonia or COVID-19 diagnosis, while the set of matched images and lung masks may serve for the development of lung segmentation solutions.
Affiliation(s)
- Aleksandra Suwalska: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland.
- Joanna Tobiasz: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland.
- Wojciech Prazuch: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland.
- Marek Socha: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland.
- Pawel Foszner: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland.
- Damian Piotrowski: Department of Infectious Diseases and Hepatology, Medical University of Silesia, Katowice, Poland.
- Katarzyna Gruszczynska: Department of Radiology and Nuclear Medicine, Medical University of Silesia, Katowice, Poland.
- Magdalena Sliwinska: Department of Diagnostic Imaging, Voivodship Specialist Hospital, Wroclaw, Poland.
- Jerzy Walecki: Department of Diagnostic Radiology, Central Clinical Hospital of the Ministry of Internal Affairs and Administration, Warsaw, Poland.
- Tadeusz Popiela: Department of Radiology, Jagiellonian University Medical College, Krakow, Poland.
- Grzegorz Przybylski: Department of Lung Diseases, Cancer and Tuberculosis, Kujawsko-Pomorskie Pulmonology Center, Bydgoszcz, Poland.
- Mateusz Nowak: Department of Radiology, Silesian Hospital, Cieszyn, Poland.
- Piotr Fiedor: Department of General and Transplantation Surgery, Medical University of Warsaw, Warsaw, Poland.
- Malgorzata Pawlowska: Department of Infectious Diseases and Hepatology, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University, Torun, Poland.
- Robert Flisiak: Department of Infectious Diseases and Hepatology, Medical University of Bialystok, Bialystok, Poland.
- Krzysztof Simon: Department of Infectious Diseases and Hepatology, Wroclaw Medical University, Wroclaw, Poland.
- Barbara Gizycka: Department of Imaging Diagnostics, MEGREZ Hospital, Tychy, Poland.
- Edyta Szurowska: 2nd Department of Radiology, Medical University of Gdansk, Gdansk, Poland.
- Michal Marczyk: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Yale Cancer Center, Yale School of Medicine, New Haven, CT, USA.
- Andrzej Cieszanowski: Department of Radiology I, The Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland.
- Joanna Polanska: Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland.
14.
Wu Y, Dravid A, Wehbe RM, Katsaggelos AK. DeepCOVID-Fuse: A Multi-Modality Deep Learning Model Fusing Chest X-rays and Clinical Variables to Predict COVID-19 Risk Levels. Bioengineering (Basel) 2023; 10:556. [PMID: 37237626] [DOI: 10.3390/bioengineering10050556]
Abstract
The COVID-19 pandemic has posed unprecedented challenges to global healthcare systems, highlighting the need for accurate and timely risk prediction models that can prioritize patient care and allocate resources effectively. This study presents DeepCOVID-Fuse, a deep learning fusion model that predicts risk levels in patients with confirmed COVID-19 by combining chest radiographs (CXRs) and clinical variables. The study collected initial CXRs, clinical variables, and outcomes (i.e., mortality, intubation, hospital length of stay, intensive care unit (ICU) admission) from February to April 2020, with risk levels determined by the outcomes. The fusion model was trained on 1657 patients (age: 58.30 ± 17.74; female: 807) and validated on 428 patients (56.41 ± 17.03; 190) from the local healthcare system, and tested on 439 patients (56.51 ± 17.78; 205) from a different holdout hospital. The performance of well-trained fusion models on full or partial modalities was compared using DeLong and McNemar tests. Results show that DeepCOVID-Fuse significantly (p < 0.05) outperformed models trained only on CXRs or only on clinical variables, with an accuracy of 0.658 and an area under the receiver operating characteristic curve (AUC) of 0.842. The fusion model achieves good outcome predictions even when only one of the modalities is used in testing, demonstrating its ability to learn better feature representations across modalities during training.
Affiliation(s)
- Yunan Wu: Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60201, USA.
- Amil Dravid: Department of Computer Science, Northwestern University, Evanston, IL 60201, USA.
- Ramsey Michael Wehbe: The Division of Cardiology, Department of Medicine and Bluhm Cardiovascular Institute, Northwestern Memorial Hospital, Chicago, IL 60611, USA.
- Aggelos K Katsaggelos: Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60201, USA; Department of Computer Science, Northwestern University, Evanston, IL 60201, USA.
15.
Sun Y, Salerno S, He X, Pan Z, Yang E, Sujimongkol C, Song J, Wang X, Han P, Kang J, Sjoding MW, Jolly S, Christiani DC, Li Y. Use of machine learning to assess the prognostic utility of radiomic features for in-hospital COVID-19 mortality. Sci Rep 2023; 13:7318. [PMID: 37147440] [PMCID: PMC10161188] [DOI: 10.1038/s41598-023-34559-0]
Abstract
As portable chest X-rays are an efficient means of triaging emergent cases, their use has raised the question as to whether imaging carries additional prognostic utility for survival among patients with COVID-19. This study assessed the importance of known risk factors on in-hospital mortality and investigated the predictive utility of radiomic texture features using various machine learning approaches. We detected incremental improvements in survival prognostication utilizing texture features derived from emergent chest X-rays, particularly among older patients or those with a higher comorbidity burden. Important features included age, oxygen saturation, blood pressure, and certain comorbid conditions, as well as image features related to the intensity and variability of pixel distribution. Thus, widely available chest X-rays, in conjunction with clinical information, may be predictive of survival outcomes of patients with COVID-19, especially older, sicker patients, and can aid in disease management by providing additional information.
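Radiomic texture features of the kind this study derives from chest X-rays are often gray-level co-occurrence matrix (GLCM) statistics, such as the GLCM Contrast feature highlighted in the signature above. The following is a toy, hypothetical sketch of one such feature, not the authors' pipeline: the function name, offset convention, and example images are illustrative only.

```python
# Minimal GLCM-contrast sketch (hypothetical; not from the cited study).
# GLCM contrast = sum over gray-level pairs (i, j) of P(i, j) * (i - j)^2,
# where P is the normalized co-occurrence frequency at a fixed pixel offset.
from collections import Counter

def glcm_contrast(image, offset=(0, 1)):
    """Compute GLCM contrast for a 2D image given as a list of lists of ints."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    pairs = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pairs[(image[r][c], image[r2][c2])] += 1
    total = sum(pairs.values())
    return sum((i - j) ** 2 * n / total for (i, j), n in pairs.items())

# A flat region has zero contrast; a checkerboard has high contrast.
print(glcm_contrast([[1, 1], [1, 1]]))  # 0.0
print(glcm_contrast([[0, 3], [3, 0]]))  # 9.0
```

High contrast corresponds to strong local intensity variability, which matches the abstract's description of "image features related to the intensity and variability of pixel distribution."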
Affiliation(s)
- Yuming Sun: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Stephen Salerno: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Xinwei He: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Ziyang Pan: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Eileen Yang: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Chinakorn Sujimongkol: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Jiyeon Song: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Xinan Wang: Department of Environmental Health and Epidemiology, Harvard T. H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA, 02115, USA.
- Peisong Han: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Jian Kang: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
- Michael W Sjoding: Division of Pulmonary and Critical Care, Department of Internal Medicine, University of Michigan Medical School, 1500 East Medical Center Drive, Ann Arbor, MI, 48109, USA.
- Shruti Jolly: Department of Radiation Oncology, University of Michigan Rogel Cancer Center, 1500 East Medical Center Drive, Ann Arbor, MI, 48109, USA.
- David C Christiani: Department of Environmental Health and Epidemiology, Harvard T. H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA, 02115, USA; Division of Pulmonary and Critical Care, Department of Internal Medicine, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, 02114, USA.
- Yi Li: Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI, 48109, USA.
16.
Rahman T, Chowdhury MEH, Khandakar A, Mahbub ZB, Hossain MSA, Alhatou A, Abdalla E, Muthiyal S, Islam KF, Kashem SBA, Khan MS, Zughaier SM, Hossain M. BIO-CXRNET: a robust multimodal stacking machine learning technique for mortality risk prediction of COVID-19 patients using chest X-ray images and clinical data. Neural Comput Appl 2023; 35:1-23. [PMID: 37362565] [PMCID: PMC10157130] [DOI: 10.1007/s00521-023-08606-w]
Abstract
Quick and accurate diagnosis of COVID-19 is a pressing need, and this study presents a multimodal system to meet it. The system employs a machine learning module that learns the required knowledge from datasets collected from 930 COVID-19 patients hospitalized in Italy during the first wave of COVID-19 (March-June 2020). The dataset consists of twenty-five biomarkers from electronic health records and chest X-ray (CXR) images. The system can classify patients as low- or high-risk with an accuracy, sensitivity, and F1-score of 89.03%, 90.44%, and 89.03%, respectively, exhibiting 6% higher accuracy than systems that employ either CXR images or biomarker data alone. In addition, the system can calculate the mortality risk of high-risk patients using a multivariate logistic regression-based nomogram scoring technique. Interested physicians can use the presented system to predict the early mortality risk of COVID-19 patients via the web link Covid-severity-grading-AI by entering the following information: CXR image file, lactate dehydrogenase (LDH), oxygen saturation (O2%), white blood cell count, C-reactive protein, and age. In this way, the study contributes to the management of COVID-19 patients by predicting early mortality risk.
Affiliation(s)
- Tawsifur Rahman: Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar.
- Amith Khandakar: Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar.
- Zaid Bin Mahbub: Department of Physics and Mathematics, North South University, Dhaka, 1229, Bangladesh.
- Abraham Alhatou: Department of Biology, University of South Carolina (USC), Columbia, SC 29208, USA.
- Eynas Abdalla: Anesthesia Department, Hamad General Hospital, P.O. Box 3050, Doha, Qatar.
- Sreekumar Muthiyal: Department of Radiology, Hamad General Hospital, P.O. Box 3050, Doha, Qatar.
- Saad Bin Abul Kashem: Department of Computer Science, AFG College with the University of Aberdeen, Doha, Qatar.
- Muhammad Salman Khan: Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar.
- Susu M. Zughaier: Department of Basic Medical Sciences, College of Medicine, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar.
- Maqsud Hossain: NSU Genome Research Institute (NGRI), North South University, Dhaka, 1229, Bangladesh.
17.
Ahmad J, Saudagar AKJ, Malik KM, Khan MB, AlTameem A, Alkhathami M, Hasanat MHA. Prognosis Prediction in COVID-19 Patients through Deep Feature Space Reasoning. Diagnostics (Basel) 2023; 13:1387. [PMID: 37189488] [DOI: 10.3390/diagnostics13081387]
Abstract
The COVID-19 pandemic has presented a unique challenge for physicians worldwide, as they grapple with limited data and uncertainty in diagnosing and predicting disease outcomes. In such dire circumstances, the need for innovative methods that can aid in making informed decisions with limited data is more critical than ever before. To allow prediction with limited COVID-19 data as a case study, we present a complete framework for progression and prognosis prediction in chest X-rays (CXR) through reasoning in a COVID-specific deep feature space. The proposed approach relies on a pre-trained deep learning model that has been fine-tuned specifically for COVID-19 CXRs to identify infection-sensitive features from chest radiographs. Using a neuronal attention-based mechanism, the proposed method determines dominant neural activations that lead to a feature subspace where neurons are more sensitive to COVID-related abnormalities. This process allows the input CXRs to be projected into a high-dimensional feature space where age and clinical attributes like comorbidities are associated with each CXR. The proposed method can accurately retrieve relevant cases from electronic health records (EHRs) using visual similarity, age group, and comorbidity similarities. These cases are then analyzed to gather evidence for reasoning, including diagnosis and treatment. By using a two-stage reasoning process based on the Dempster-Shafer theory of evidence, the proposed method can accurately predict the severity, progression, and prognosis of a COVID-19 patient when sufficient evidence is available. Experimental results on two large datasets show that the proposed method achieves 88% precision, 79% recall, and 83.7% F-score on the test sets.
Affiliation(s)
- Jamil Ahmad: Department of Computer Science, Islamia College Peshawar, Peshawar 25120, Pakistan.
- Khalid Mahmood Malik: Department of Computer Science and Engineering, Oakland University, Rochester, MI 48309, USA.
- Muhammad Badruddin Khan: Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia.
- Abdullah AlTameem: Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia.
- Mohammed Alkhathami: Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia.
18.
Automated prediction of COVID-19 severity upon admission by chest X-ray images and clinical metadata aiming at accuracy and explainability. Sci Rep 2023; 13:4226. [PMID: 36918593] [PMCID: PMC10012307] [DOI: 10.1038/s41598-023-30505-2]
Abstract
In the past few years, COVID-19 has posed a huge threat to healthcare systems around the world. One of the first waves of the pandemic hit Northern Italy severely, resulting in high casualties and in the near breakdown of primary care. For these reasons, the Covid CXR Hackathon (Artificial Intelligence for Covid-19 prognosis: aiming at accuracy and explainability) challenge was launched at the beginning of February 2022, releasing a new imaging dataset with additional clinical metadata for each accompanying chest X-ray (CXR). In this article we summarize our techniques for predicting the severity of COVID-19 outcomes from chest X-ray images collected upon admission. In addition to X-ray imagery, clinical metadata was provided, and the challenge also aimed at creating an explainable model. We created a best-performing model as well as an explainable one that makes an effort to map clinical metadata to image features while predicting the prognosis. We also performed extensive ablation studies to identify crucial parts of the models and the predictive power of each feature in the datasets. We conclude that CXRs at admission do not, on their own, significantly add to the predictive power of the metadata, and contain mostly information that is also present in the blood samples and other clinical factors collected at admission.
19.
Guarrasi V, Soda P. Multi-objective optimization determines when, which and how to fuse deep networks: An application to predict COVID-19 outcomes. Comput Biol Med 2023; 154:106625. [PMID: 36738713] [PMCID: PMC9892294] [DOI: 10.1016/j.compbiomed.2023.106625]
Abstract
The COVID-19 pandemic has caused millions of cases and deaths, and the AI-related scientific community, after focusing on detecting COVID-19 signs in medical images, has now directed its efforts towards methods that can predict the progression of the disease. This task is multimodal by its very nature, and baseline results achieved on the publicly available AIforCOVID dataset have recently shown that chest X-ray scans and clinical information are useful for identifying patients at risk of severe outcomes. While deep learning has shown superior performance in several medical fields, in most cases it considers unimodal data only. In this respect, when, which, and how to fuse the different modalities is an open challenge in multimodal deep learning. To cope with these three questions, here we present a novel approach that optimizes the setup of a multimodal end-to-end model. It exploits Pareto multi-objective optimization, working with a performance metric and the diversity score of multiple candidate unimodal neural networks to be fused. We test our method on the AIforCOVID dataset, attaining state-of-the-art results: it not only outperforms the baseline but is also robust to external validation. Moreover, exploiting XAI algorithms, we derive a hierarchy among the modalities and extract the features' intra-modality importance, enriching the trust in the predictions made by the model.
Affiliation(s)
- Valerio Guarrasi: Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy; Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy.
- Paolo Soda: Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy; Department of Radiation Sciences, Radiation Physics, Biomedical Engineering, Umeå University, Umeå, Sweden.
20.
Arias-Garzón D, Tabares-Soto R, Bernal-Salcedo J, Ruz GA. Biases associated with database structure for COVID-19 detection in X-ray images. Sci Rep 2023; 13:3477. [PMID: 36859430] [PMCID: PMC9975856] [DOI: 10.1038/s41598-023-30174-1]
Abstract
Several artificial intelligence algorithms have been developed for COVID-19-related topics. One common application is COVID-19 diagnosis from chest X-rays, where the eagerness to obtain early results has triggered the construction of a series of datasets in which bias management has not been thorough with respect to patient information, capture conditions, class imbalance, and careless mixtures of multiple datasets. This paper analyses 19 datasets of COVID-19 chest X-ray images, identifying potential biases. Moreover, computational experiments were conducted using one of the most popular datasets in this domain, which achieves 96.19% classification accuracy on the complete dataset; nevertheless, when evaluated with the ethical tool Aequitas, it fails on all the metrics. Ethical tools, enhanced with some distribution and image-quality considerations, are the key to developing or choosing a dataset with fewer bias issues. We aim to provide broad research on dataset problems, tools, and suggestions for future dataset developments and COVID-19 applications using chest X-ray images.
Affiliation(s)
- Daniel Arias-Garzón: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia.
- Reinel Tabares-Soto: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia; Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, 7941169 Santiago, Chile; Departamento de Sistemas e Informática, Universidad de Caldas, Manizales, 170001, Colombia.
- Joshua Bernal-Salcedo: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia.
- Gonzalo A. Ruz: Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, 7941169 Santiago, Chile; Center of Applied Ecology and Sustainability (CAPES), 8331150 Santiago, Chile; Data Observatory Foundation, 7941169 Santiago, Chile.
21.
Wu CW, Pham BT, Wang JC, Wu YK, Kuo CY, Hsu YC. The COVIDTW study: Clinical predictors of COVID-19 mortality and a novel AI prognostic model using chest X-ray. J Formos Med Assoc 2023; 122:267-275. [PMID: 36208973] [PMCID: PMC9510092] [DOI: 10.1016/j.jfma.2022.09.014]
Abstract
BACKGROUND There is a lack of published research on the impact of the first wave of the COVID-19 pandemic in Taiwan. We investigated the mortality risk factors among critically ill patients with COVID-19 in Taiwan during the initial wave. Furthermore, we aimed to develop a novel AI mortality prediction model using chest X-ray (CXR) alone. METHOD We retrospectively reviewed the medical records of patients with COVID-19 at Taipei Tzu Chi Hospital from May 15 to July 15, 2021, enrolling adult patients who received invasive mechanical ventilation. The CXR images of each enrolled patient were divided into four categories (1st, pre-ETT, ETT, and WORST). To establish a prediction model, we used the MobileNetV3-Small model with ImageNet pretrained weights, followed by high-dropout regularization layers. We trained the model with five-fold cross-validation to evaluate model performance. RESULT A total of 64 patients were enrolled. The overall mortality rate was 45%. The median time from symptom onset to intubation was 8 days. Vasopressor use and a higher BRIXIA score on the WORST CXR were associated with an increased risk of mortality. The areas under the curve of the 1st, pre-ETT, ETT, and WORST CXRs by the AI model were 0.87, 0.92, 0.96, and 0.93, respectively. CONCLUSION The mortality rate of COVID-19 patients who received invasive mechanical ventilation was high. Septic shock and a high BRIXIA score were clinical predictors of mortality. The novel AI mortality prediction model using CXR alone exhibited high performance.
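The five-fold cross-validation protocol this study describes can be sketched as plain index splitting: each sample is held out for validation exactly once while the model is trained on the remaining folds. The helper below is a hypothetical illustration, not the authors' code, and the fold-assignment order is an assumption (real pipelines often shuffle or stratify by outcome).

```python
# Minimal k-fold index splitter (illustrative sketch; not the study's code).
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# With 64 patients (as enrolled in the study) and k=5, the validation folds
# contain 13/13/13/13/12 samples, and every patient is validated exactly once.
folds = list(k_fold_indices(64, k=5))
print([len(val) for _, val in folds])  # [13, 13, 13, 13, 12]
```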
Affiliation(s)
- Chih-Wei Wu: Division of Pulmonary Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan; Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan.
- Bach-Tung Pham: Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan.
- Jia-Ching Wang: Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan.
- Yao-Kuang Wu: Division of Pulmonary Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan.
- Chan-Yen Kuo: Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan.
- Yi-Chiung Hsu: Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan.
22.
Artificial Intelligence-Assisted Chest X-ray for the Diagnosis of COVID-19: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2023; 13:584. [PMID: 36832072] [PMCID: PMC9955250] [DOI: 10.3390/diagnostics13040584]
Abstract
Because it is an accessible and routine imaging test, medical personnel commonly use the chest X-ray for COVID-19 infections. Artificial intelligence (AI) is now widely applied to improve the precision of routine imaging tests. Hence, we investigated the clinical merit of the chest X-ray for detecting COVID-19 when assisted by AI. We searched PubMed, Cochrane Library, MedRxiv, ArXiv, and Embase for relevant research published between 1 January 2020 and 30 May 2022. We collected studies that assessed AI-based measures in patients diagnosed with COVID-19 and excluded research lacking the relevant performance metrics (i.e., sensitivity, specificity, and area under the curve). Two independent researchers summarized the information, and disagreements were resolved by consensus. A random-effects model was used to calculate the pooled sensitivities and specificities. The sensitivity of the included studies was enhanced by eliminating research with possible heterogeneity. A summary receiver operating characteristic (SROC) curve was generated to investigate the diagnostic value for detecting COVID-19 patients. Nine studies covering 39,603 subjects were included in this analysis. The pooled sensitivity and specificity were estimated as 0.9472 (p = 0.0338, 95% CI 0.9009-0.9959) and 0.9610 (p < 0.0001, 95% CI 0.9428-0.9795), respectively. The area under the SROC was 0.98 (95% CI 0.94-1.00). Heterogeneity of the diagnostic odds ratio was present in the included studies (I2 = 36.212, p = 0.129). The AI-assisted chest X-ray for COVID-19 detection offered excellent diagnostic potential and broader application.
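A random-effects pooled proportion of the kind this meta-analysis reports can be computed in several ways; the review does not state its exact software or settings. As a hypothetical sketch only, the function below pools per-study sensitivities with DerSimonian-Laird weighting on the logit scale (an assumed method, and the example inputs are invented, not the review's data).

```python
# Hypothetical DerSimonian-Laird random-effects pooling of proportions
# on the logit scale. Not the meta-analysis' actual computation.
import math

def pooled_logit_random_effects(props, ns):
    """Pool proportions (e.g., per-study sensitivities) with study sizes ns."""
    ys, vs = [], []
    for p, n in zip(props, ns):
        x = p * n  # approximate event count recovered from the reported proportion
        ys.append(math.log(x / (n - x)))        # logit of the proportion
        vs.append(1.0 / x + 1.0 / (n - x))      # approximate within-study variance
    w = [1.0 / v for v in vs]
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)    # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
    return 1.0 / (1.0 + math.exp(-y_re))        # back-transform to a proportion

# Invented example: three studies with sensitivities 0.95, 0.90, 0.97.
pooled = pooled_logit_random_effects([0.95, 0.90, 0.97], [100, 200, 150])
print(round(pooled, 3))  # pooled value lies between the smallest and largest input
```

Because the pooled logit is a weighted average of the per-study logits, the back-transformed estimate always falls between the smallest and largest input sensitivity.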
23.
Matsumoto T, Walston SL, Walston M, Kabata D, Miki Y, Shiba M, Ueda D. Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs. J Digit Imaging 2023; 36:178-188. [PMID: 35941407] [PMCID: PMC9360661] [DOI: 10.1007/s10278-022-00691-y]
Abstract
Accurate estimation of mortality and time to death at admission for COVID-19 patients is important, and several deep learning models have been created for this task. However, there are currently no prognostic models that use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multilayer-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv with clinical data only, DeepSurv with images only (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and the duration to the outcome. Harrell's concordance index (c-index) of the DeepSurv-with-CNN model was 0.82 (0.75-0.88), significantly higher than that of DeepSurv with clinical data only (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
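Harrell's concordance index used to compare these models can be illustrated with a minimal implementation (a sketch of the standard definition, not the paper's code; the example data are invented): among usable pairs under right censoring, it is the fraction where the higher predicted risk belongs to the patient who experienced the event earlier.

```python
# Minimal Harrell's c-index for right-censored data (illustrative sketch).
def harrell_c_index(times, events, risks):
    """times: observed times; events: 1 = event, 0 = censored; risks: risk scores."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if subject i had an event strictly before
            # subject j's observed time (censored subjects can only be the later one).
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in predicted risk count as half
    return concordant / usable

# Invented example: earlier deaths receive higher risk scores, so ranking is perfect.
print(harrell_c_index([2, 5, 9], [1, 1, 0], [0.9, 0.6, 0.1]))  # 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported 0.82 for the combined model indicates strong discrimination.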
Affiliation(s)
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Shannon Leigh Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Michael Walston
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Masatsugu Shiba
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan; Department of Medical Statistics, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Daiju Ueda
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan; Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
24
Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737 PMCID: PMC9753459 DOI: 10.1016/j.media.2022.102722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 10/17/2022] [Accepted: 12/02/2022] [Indexed: 12/23/2022]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's strength in modeling cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
Affiliation(s)
- Yanda Meng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Joshua Bridge
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Cliff Addison
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Manhui Wang
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Stu Franks
- Alces Flight Limited, Bicester, United Kingdom
- Maria Mackey
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Steve Messenger
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Renrong Sun
- Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
- Thomas Fitzmaurice
- Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Caroline McCann
- Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
- Qiang Li
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
- Yitian Zhao
- The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom
25
Interpretable Differential Diagnosis of Non-COVID Viral Pneumonia, Lung Opacity and COVID-19 Using Tuned Transfer Learning and Explainable AI. Healthcare (Basel) 2023; 11:healthcare11030410. [PMID: 36766986 PMCID: PMC9914430 DOI: 10.3390/healthcare11030410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/20/2023] [Accepted: 01/28/2023] [Indexed: 02/04/2023] Open
Abstract
The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of the COVID-19 infection might be highly beneficial to healthcare services and people around the world. Molecular or antigen testing along with radiology X-ray imaging is now utilized in clinics to diagnose COVID-19. Nonetheless, due to the spike in coronavirus cases and hospital doctors' overwhelming workload, developing an AI-based auto-COVID detection system with high accuracy has become imperative. On X-ray images, distinguishing COVID-19, non-COVID viral pneumonia, and other lung opacities can be challenging. This research utilized artificial intelligence (AI) to deliver high-accuracy automated COVID-19 detection from chest X-ray images. Further, this study extended to differentiating COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three distinct pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem, classifying COVID-19 versus normal X-ray images, and gained 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. Later we focused on developing an efficient model for multi-class classification and gained an accuracy of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's performances were similar, Xception proved to be more efficient with its higher precision, recall, and F1-scores. Finally, we employed explainable AI on each of the utilized models, which adds interpretability to our study. Furthermore, we conducted a comprehensive comparison of the models' explanations, and the study revealed that Xception is more precise in indicating the actual features responsible for a model's predictions. This addition of explainable AI will benefit medical professionals greatly, as they will get to visualize how a model makes its predictions and will not have to trust our machine-learning models blindly.
26
Impact of Wavelet Kernels on Predictive Capability of Radiomic Features: A Case Study on COVID-19 Chest X-ray Images. J Imaging 2023; 9:jimaging9020032. [PMID: 36826951 PMCID: PMC9961017 DOI: 10.3390/jimaging9020032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/15/2023] [Accepted: 01/28/2023] [Indexed: 02/03/2023] Open
Abstract
Radiomic analysis allows for the detection of imaging biomarkers supporting decision-making processes in clinical environments, from diagnosis to prognosis. Frequently, the original set of radiomic features is augmented with high-level features, such as wavelet transforms. However, several wavelet families (so-called kernels) can generate different multi-resolution representations of the original image, and it is not yet clear which of them produces the most salient images. In this study, an in-depth analysis is performed by comparing different wavelet kernels and evaluating their impact on the predictive capabilities of radiomic models. A dataset composed of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build the predictive models. Next, to evaluate the models' generalization capability on unseen data, a test phase was performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and both were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.
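As an illustration of what a wavelet kernel does to an image, a single level of the 2-D Haar transform (one of the kernels compared above) can be written in plain NumPy; other families such as Bior1.5, Coif1, and Sym2 differ only in their filter coefficients. A hypothetical helper, not the study's pipeline:

```python
import numpy as np

def haar_level1_2d(img):
    """One level of the 2-D Haar wavelet transform, returning the
    approximation (LL) and detail (LH, HL, HH) sub-bands that radiomic
    pipelines typically compute texture features from.
    Assumes even image dimensions."""
    img = np.asarray(img, dtype=float)
    # Pairwise averages (low-pass) and differences (high-pass) along rows
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Repeat along columns to obtain the four sub-bands
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

On a perfectly flat image the detail sub-bands vanish; texture shows up as energy in LH, HL, and HH, which is why wavelet features capture information that first-order statistics miss.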
27
Irmici G, Cè M, Caloro E, Khenkina N, Della Pepa G, Ascenti V, Martinenghi C, Papa S, Oliva G, Cellina M. Chest X-ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics (Basel) 2023; 13:diagnostics13020216. [PMID: 36673027 PMCID: PMC9858224 DOI: 10.3390/diagnostics13020216] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 12/28/2022] [Accepted: 01/03/2023] [Indexed: 01/11/2023] Open
Abstract
Due to its widespread availability, low cost, feasibility at the patient's bedside and accessibility even in low-resource settings, chest X-ray is one of the most requested examinations in radiology departments. Whilst it provides essential information on thoracic pathology, it can be difficult to interpret and is prone to diagnostic errors, particularly in the emergency setting. The increasing availability of large chest X-ray datasets has allowed the development of reliable Artificial Intelligence (AI) tools to help radiologists in everyday clinical practice. AI integration into the diagnostic workflow would benefit patients, radiologists, and healthcare systems in terms of improved and standardized reporting accuracy, quicker diagnosis, more efficient management, and appropriateness of the therapy. This review article aims to provide an overview of the applications of AI for chest X-rays in the emergency setting, emphasizing the detection and evaluation of pneumothorax, pneumonia, heart failure, and pleural effusion.
Affiliation(s)
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natallia Khenkina
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Gianmarco Della Pepa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Carlo Martinenghi
- Radiology Department, San Raffaele Hospital, Via Olgettina 60, 20132 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Giancarlo Oliva
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
28
Cellina M, Cè M, Irmici G, Ascenti V, Caloro E, Bianchi L, Pellegrino G, D’Amico N, Papa S, Carrafiello G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics (Basel) 2022; 12:diagnostics12123223. [PMID: 36553230 PMCID: PMC9777804 DOI: 10.3390/diagnostics12123223] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 12/11/2022] [Accepted: 12/16/2022] [Indexed: 12/23/2022] Open
Abstract
Emergency Radiology is a unique branch of imaging, as rapidity in the diagnosis and management of different pathologies is essential to saving patients' lives. Artificial Intelligence (AI) has many potential applications in emergency radiology: firstly, image acquisition can be facilitated by reducing acquisition times through automatic positioning and minimizing artifacts with AI-based reconstruction systems to optimize image quality, even in critical patients; secondly, it enables an efficient workflow (AI algorithms integrated with RIS-PACS workflow), by analyzing the characteristics and images of patients, detecting high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for the automated detection of different types of emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia), to help radiologists to detect relevant findings. AI-based smart reporting, summarizing patients' clinical data, and analyzing the grading of the imaging abnormalities, can provide an objective indicator of the disease's severity, resulting in quick and optimized treatment planning. In this review, we provide an overview of the different AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy (Corresponding author)
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Lorenzo Bianchi
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Giuseppe Pellegrino
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natascha D’Amico
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Via Sforza 35, 20122 Milan, Italy
29
Walston SL, Matsumoto T, Miki Y, Ueda D. Artificial intelligence-based model for COVID-19 prognosis incorporating chest radiographs and clinical data; a retrospective model development and validation study. Br J Radiol 2022; 95:20220058. [PMID: 36193755 PMCID: PMC9733620 DOI: 10.1259/bjr.20220058] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 08/19/2022] [Accepted: 08/23/2022] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVES The purpose of this study was to develop an artificial intelligence-based model to prognosticate COVID-19 patients at admission by combining clinical data and chest radiographs. METHODS This retrospective study used the Stony Brook University COVID-19 dataset of 1384 inpatients. After exclusions, 1356 patients were randomly divided into training (1083) and test datasets (273). We implemented three artificial intelligence models, which classified mortality, ICU admission, or ventilation risk. Each model had three submodels with different inputs: clinical data, chest radiographs, and both. We showed the importance of the variables using SHapley Additive exPlanations (SHAP) values. RESULTS The mortality prediction model was best overall with area under the curve, sensitivity, specificity, and accuracy of 0.79 (0.72-0.86), 0.74 (0.68-0.79), 0.77 (0.61-0.88), and 0.74 (0.69-0.79) for the clinical data-based model; 0.77 (0.69-0.85), 0.67 (0.61-0.73), 0.81 (0.67-0.92), 0.70 (0.64-0.75) for the image-based model, and 0.86 (0.81-0.91), 0.76 (0.70-0.81), 0.77 (0.61-0.88), 0.76 (0.70-0.81) for the mixed model. The mixed model had the best performance (p value < 0.05). The radiographs ranked fourth for prognostication overall, and first of the inpatient tests assessed. CONCLUSIONS These results suggest that prognosis models become more accurate if AI-derived chest radiograph features and clinical data are used together. ADVANCES IN KNOWLEDGE This AI model evaluates chest radiographs together with clinical data in order to classify patients as having high or low mortality risk. This work shows that chest radiographs taken at admission have significant COVID-19 prognostic information compared to clinical data other than age and sex.
Affiliation(s)
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, Japan
30
Cellina M, Cè M, Khenkina N, Sinichich P, Cervelli M, Poggi V, Boemi S, Ierardi AM, Carrafiello G. Artificial Intelligence in the Era of Precision Oncological Imaging. Technol Cancer Res Treat 2022; 21:15330338221141793. [PMID: 36426565 PMCID: PMC9703524 DOI: 10.1177/15330338221141793] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Rapid-paced development and adaptability of artificial intelligence algorithms have secured their almost ubiquitous presence in the field of oncological imaging. Artificial intelligence models have been created for a variety of tasks, including risk stratification, automated detection and segmentation of lesions, characterization, grading and staging, prediction of prognosis, and treatment response. Soon, artificial intelligence could become an essential part of every step of oncological workup and patient management. Integration of neural networks and deep learning into radiological artificial intelligence algorithms allows for extrapolating imaging features otherwise inaccessible to human operators and paves the way to truly personalized management of oncological patients. Although a significant proportion of currently available artificial intelligence solutions belong to basic and translational cancer imaging research, their progressive transfer to clinical routine is imminent, contributing to the development of a personalized approach in oncology. We thereby review the main applications of artificial intelligence in oncological imaging, describe examples of their successful integration into research and clinical practice, and highlight the challenges and future perspectives that will shape the field of oncological radiology.
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, Milano, Italy (Correspondence: Michaela Cellina, MD, Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milano, Italy)
- Maurizio Cè
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Natallia Khenkina
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Polina Sinichich
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Marco Cervelli
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Vittoria Poggi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Sara Boemi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Gianpaolo Carrafiello
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Milan, Italy
31
Batra S, Sharma H, Boulila W, Arya V, Srivastava P, Khan MZ, Krichen M. An Intelligent Sensor Based Decision Support System for Diagnosing Pulmonary Ailment through Standardized Chest X-ray Scans. SENSORS (BASEL, SWITZERLAND) 2022; 22:7474. [PMID: 36236573 PMCID: PMC9571822 DOI: 10.3390/s22197474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Revised: 09/28/2022] [Accepted: 09/29/2022] [Indexed: 06/16/2023]
Abstract
Academics and the health community are paying much attention to developing smart remote patient monitoring, sensors, and healthcare technology. For the analysis of medical scans, various studies integrate sophisticated deep learning strategies. A smart monitoring system is needed as a proactive diagnostic solution that may be employed in an epidemiological scenario such as COVID-19. Consequently, this work offers an intelligent medicare system that is an IoT-empowered, deep learning-based decision support system (DSS) for the automated detection and categorization of infectious diseases (COVID-19 and pneumothorax). The proposed DSS was evaluated using three independent standard-based chest X-ray scan datasets. The suggested DSS predictor has been used to identify and classify areas on whole X-ray scans with abnormalities thought to be attributable to COVID-19, reaching an identification and classification accuracy of 89.58% for normal images and 89.13% for COVID-19 and pneumothorax. With the suggested DSS, a judgment on an individual chest X-ray scan can be made in approximately 0.01 s. As a result, the DSS described in this study can forecast at a pace of 95 frames per second (FPS) for both models, which is close to real time.
Affiliation(s)
- Shivani Batra
- Department of Computer Science and Engineering, KIET Group of Institutions, Ghaziabad 201206, India
- Harsh Sharma
- Department of Computer Science and Engineering, KIET Group of Institutions, Ghaziabad 201206, India
- Wadii Boulila
- Robotics and Internet-of-Things Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia; RIADI Laboratory, National School of Computer Sciences, University of Manouba, Manouba 2010, Tunisia
- Vaishali Arya
- School of Engineering, GD Goenka University, Gurugram 122103, India
- Prakash Srivastava
- Department of Computer Science and Engineering, Graphic Era (Deemed to Be University), Dehradun 248002, India
- Mohammad Zubair Khan
- Department of Computer Science and Information, Taibah University, Medina 42353, Saudi Arabia
- Moez Krichen
- Faculty of Computer Science & IT, Al Baha University, Al Baha 65779, Saudi Arabia
32
Multithreshold Segmentation and Machine Learning Based Approach to Differentiate COVID-19 from Viral Pneumonia. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2728866. [PMID: 36039344 PMCID: PMC9420061 DOI: 10.1155/2022/2728866] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 06/13/2022] [Accepted: 07/05/2022] [Indexed: 11/17/2022]
Abstract
Coronavirus disease (COVID-19) has created unprecedented devastation and the loss of millions of lives globally. Its contagious nature and fatalities invariably pose challenges to physicians and healthcare support systems. Clinical diagnostic evaluation using reverse transcription-polymerase chain reaction and other approaches is currently in use. Chest X-ray (CXR) and CT images have been effectively utilized for screening purposes and can provide relevant data on localized regions affected by the infection. A step towards automated screening and diagnosis using CXR and CT could be of considerable importance in these turbulent times. The main objective is to probe a simple threshold-based segmentation approach to identify possible infection regions in CXR images and to investigate intensity-based, wavelet transform (WT)-based, and Laws-based texture features with statistical measures. A feature selection strategy using Random Forest (RF) was then applied, and the selected features were used to build Machine Learning (ML) models with a Support Vector Machine (SVM) and a Random Forest (RF) to differentiate COVID-19 from viral pneumonia (VP). The results obtained clearly indicate that the intensity and WT-based features vary between the two pathologies, which are better differentiated with the combined features trained using the SVM and RF classifiers. Classifier performance measures, such as an Area Under the Curve (AUC) of 0.97 and an overall classification accuracy of 0.9 using the RF model, clearly indicate that the implemented methodology is useful in characterizing COVID-19 and viral pneumonia.
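The study above probes simple threshold-based segmentation; one standard way to pick an intensity threshold automatically is Otsu's method, shown here in plain NumPy as an illustrative stand-in (the paper's exact thresholding scheme is not reproduced):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global Otsu threshold: picks the intensity that maximizes
    between-class variance, a simple way to delineate candidate
    regions before texture analysis."""
    hist, edges = np.histogram(np.ravel(img), bins=nbins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(prob)            # class-0 (background) weight
    m = np.cumsum(prob * centers)   # cumulative mean intensity
    m_total = m[-1]
    w1 = 1.0 - w0                   # class-1 (foreground) weight
    valid = (w0 > 0) & (w1 > 0)
    # Between-class variance for every candidate threshold
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (m_total * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

For a clearly bimodal intensity distribution the returned threshold falls between the two modes, splitting the pixels into the two classes.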
33
Chen X, Zhang Y, Cao G, Zhou J, Lin Y, Chen B, Nie K, Fu G, Su MY, Wang M. Dynamic change of COVID-19 lung infection evaluated using co-registration of serial chest CT images. Front Public Health 2022; 10:915615. [PMID: 36033815 PMCID: PMC9412202 DOI: 10.3389/fpubh.2022.915615] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 07/18/2022] [Indexed: 01/22/2023] Open
Abstract
Purpose To evaluate the volumetric change of COVID-19 lesions in the lung of patients receiving serial CT imaging for monitoring the evolution of the disease and the response to treatment. Materials and methods A total of 48 patients, 28 males and 20 females, who were confirmed to have COVID-19 infection and received chest CT examination, were identified. The age range was 21-93 years old, with a mean of 54 ± 18 years. Of them, 33 patients received the first follow-up (F/U) scan, 29 patients received the second F/U scan, and 11 patients received the third F/U scan. The lesion region of interest (ROI) was manually outlined. A two-step registration method, first using the Affine alignment, followed by the non-rigid Demons algorithm, was developed to match the lung areas on the baseline and F/U images. The baseline lesion ROI was mapped to the F/U images using the obtained geometric transformation matrix, and the radiologist outlined the lesion ROI on F/U CT again. Results The median (interquartile range) lesion volume (cm3) was 30.9 (83.1) at baseline CT exam, 18.3 (43.9) at first F/U, 7.6 (18.9) at second F/U, and 0.6 (19.1) at third F/U, which showed a significant trend of decrease with time. The two-step registration could significantly decrease the mean squared error (MSE) between baseline and F/U images with p < 0.001. The method could match the lung areas and the large vessels inside the lung. When using the mapped baseline ROIs as references, the second-look ROI drawing showed a significantly increased volume, p < 0.05, presumably due to the consideration of all the infected areas at baseline. Conclusion The results suggest that the registration method can be applied to assist in the evaluation of longitudinal changes of COVID-19 lesions on chest CT.
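The paper's two-step registration (Affine alignment followed by the non-rigid Demons algorithm) is beyond a short snippet, but its core idea of aligning images by minimizing mean squared error (MSE) can be illustrated with a brute-force integer-translation search. A toy sketch, not the study's method:

```python
import numpy as np

def best_shift(fixed, moving, max_shift=3):
    """Brute-force search over integer translations for the shift of
    `moving` that minimizes MSE against `fixed`; a toy stand-in for
    the affine step of a two-step registration."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    best, best_mse = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Circularly shift the moving image and score the overlap
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mse = float(np.mean((fixed - shifted) ** 2))
            if mse < best_mse:
                best_mse, best = mse, (dy, dx)
    return best, best_mse
```

If `moving` is an exact translated copy of `fixed`, the search recovers the inverse shift with zero residual MSE, mirroring how the paper reports a significant MSE decrease after registration.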
Affiliation(s)
- Xiao Chen
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yang Zhang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States; Department of Radiological Sciences, University of California, Irvine, CA, United States
- Guoquan Cao
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jiahuan Zhou
- Department of Radiology, Yuyao Hospital of Traditional Chinese Medicine, Ningbo, China
- Ya Lin
- The People's Hospital of Cangnan, Wenzhou, China
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Gangze Fu
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (Corresponding author)
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, CA, United States; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Meihao Wang
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
34
Munera N, Garcia-Gallo E, Gonzalez Á, Zea J, Fuentes YV, Serrano C, Ruiz-Cuartas A, Rodriguez A, Reyes LF. A novel model to predict severe COVID-19 and mortality using an artificial intelligence algorithm to interpret chest X-Rays and clinical variables. ERJ Open Res 2022; 8:00010-2022. [PMID: 35765299 PMCID: PMC9059131 DOI: 10.1183/23120541.00010-2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 04/19/2022] [Indexed: 11/25/2022] Open
Abstract
Background: Patients with coronavirus disease 2019 (COVID-19) can develop severe disease requiring admission to the intensive care unit (ICU). This article presents a novel method that predicts whether a patient will need ICU admission and assesses the risk of in-hospital mortality by training a deep-learning model that combines a set of clinical variables with features from chest radiographs.
Methods: This was a prospective diagnostic test study. Patients with confirmed severe acute respiratory syndrome coronavirus 2 infection between March 2020 and January 2021 were included. Predictive models were built by training convolutional neural networks on chest radiograph images with an artificial intelligence (AI) tool and by using a random forest analysis to identify critical clinical variables. Both architectures were then connected and fine-tuned to provide combined models.
Results: 2552 patients were included in the clinical cohort. The variables independently associated with ICU admission were age, fraction of inspired oxygen (FiO2) on admission, dyspnoea on admission and obesity. The variables associated with hospital mortality were age, FiO2 on admission and dyspnoea. When the AI model interpreting the chest radiographs was combined with the clinical variables identified by random forest, the resulting model accurately predicted ICU admission (area under the curve (AUC) 0.92±0.04) and hospital mortality (AUC 0.81±0.06) in patients with confirmed COVID-19.
Conclusions: This automated chest radiograph interpretation algorithm, combined with clinical variables, is a reliable alternative for identifying patients at risk of developing severe COVID-19 who might require ICU admission.
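The combined model described in this abstract fuses an image model's read-out with the clinical variables flagged by the random forest (age, FiO2, dyspnoea, obesity). A minimal stdlib sketch of that late-fusion idea; the coefficients and the simple averaging of the two heads are illustrative placeholders, not the study's fitted network:

```python
import math

def combined_icu_risk(cnn_prob, age, fio2, dyspnoea, obesity):
    """Toy late-fusion score: blend an image-model probability with the
    clinical variables the study found independently associated with ICU
    admission. Coefficients are illustrative, not the fitted model."""
    clinical_logit = (-6.0 + 0.04 * age + 4.0 * fio2
                      + 1.2 * dyspnoea + 0.8 * obesity)
    clinical_prob = 1.0 / (1.0 + math.exp(-clinical_logit))
    # Equal-weight average of the two heads; the paper instead connects
    # and fine-tunes the two architectures jointly.
    return 0.5 * cnn_prob + 0.5 * clinical_prob
```

With these placeholder weights, an elderly, hypoxaemic, dyspnoeic patient with a high CNN probability scores much higher than a young patient on room air with a low CNN probability.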
35
Mortality Prediction of COVID-19 Patients Using Radiomic and Neural Network Features Extracted from a Wide Chest X-ray Sample Size: A Robust Approach for Different Medical Imbalanced Scenarios. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083903]
Abstract
Aim: To develop robust prognostic models for mortality prediction of COVID-19 patients, applicable to different sets of real scenarios, using radiomic and neural network features extracted from chest X-rays (CXRs) with certified, commercially available software.
Methods: 1816 patients from 5 different hospitals in the Province of Reggio Emilia were included in the study. Overall, 201 radiomic features and 16 neural network features were extracted from each COVID-19 patient's radiograph. The initial dataset was balanced so that the classifiers were trained with equal numbers of deceased and surviving patients, randomly selected. The pipeline had three main parts: a balancing procedure; three-step feature selection; and mortality prediction with radiomic features through three machine learning (ML) classification models: AdaBoost (ADA), Quadratic Discriminant Analysis (QDA) and Random Forest (RF). Five evaluation metrics were computed on the test samples. The performance for death prediction was validated on both a balanced dataset (Case 1) and an imbalanced dataset (Case 2).
Results: Accuracy (ACC), area under the ROC curve (AUC) and sensitivity (SENS) for the best classifier were, respectively, 0.72 ± 0.01, 0.82 ± 0.02 and 0.84 ± 0.04 for Case 1 and 0.70 ± 0.04, 0.79 ± 0.03 and 0.76 ± 0.06 for Case 2. These results show that the prediction of COVID-19 mortality is robust across different scenarios.
Conclusions: Our large and varied dataset made it possible to train ML algorithms to predict COVID-19 mortality using radiomic and neural network features of CXRs.
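The balancing procedure this pipeline describes (training with equal numbers of deceased and surviving patients, randomly selected) amounts to undersampling the majority class. A hedged stdlib sketch; the record structure and the `dead` field name are assumptions for illustration:

```python
import random

def balance_by_undersampling(samples, label_key="dead", seed=42):
    """Randomly undersample the majority class so both outcome groups
    have equal size, mirroring the balancing step of the pipeline.
    `samples` is a list of dicts; `label_key` holds the outcome flag."""
    rng = random.Random(seed)
    pos = [s for s in samples if s[label_key]]
    neg = [s for s in samples if not s[label_key]]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    kept = rng.sample(majority, len(minority))  # random subset, no replacement
    balanced = minority + kept
    rng.shuffle(balanced)
    return balanced
```

Fixing the seed makes the randomly selected subset reproducible across the repeated train/test splits such a pipeline needs.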
36
Wu Z, Xue R, Shao M. Knowledge graph analysis and visualization of AI technology applied in COVID-19. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2022; 29:26396-26408. [PMID: 34859342] [PMCID: PMC8638799] [DOI: 10.1007/s11356-021-17800-z] [Received: 09/27/2021] [Accepted: 11/23/2021]
Abstract
With the global outbreak of coronavirus disease (COVID-19), artificial intelligence (AI) technology has been widely applied to COVID-19 and has become a hot topic. In the past 2 years, the application of AI technology to COVID-19 has developed rapidly, with more than 100 relevant papers published every month. In this paper, we combined bibliometric analysis with visual knowledge-map analysis, used the Web of Science (WOS) database as the sample data source, and applied the VOSviewer and CiteSpace analysis tools to carry out multi-dimensional statistical and visual analysis of 1903 publications from the past 2 years (through the end of July 2021). The data are analyzed along several dimensions: annual article and citation counts, major publication sources, institutions and countries, and their contributions and collaborations. Since last year, research on COVID-19 has increased sharply, and the corresponding research fields combined with AI technology are expanding, such as medicine, management, economics, and informatics. China and the USA are the most prolific countries in AI applied to COVID-19 and have made significant contributions, while high-level international collaboration among countries and institutions is increasing and becoming more impactful. Moreover, we examine the main application areas: detection, surveillance, risk prediction, therapeutic research, virus modeling, and analysis of COVID-19. Finally, we discuss the challenges and limits of applying AI to COVID-19 to help researchers and practitioners facilitate future work.
Affiliation(s)
- Zongsheng Wu
- School of Computer Science, Xianyang Normal University, Xianyang, 712000, Shaanxi, China
- Ru Xue
- School of Information Engineering, Xizang Minzu University, Xianyang, 712082, Shaanxi, China
- Meiyun Shao
- School of Information Engineering, Xizang Minzu University, Xianyang, 712082, Shaanxi, China
37
A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data. Sci Rep 2022; 12:4329. [PMID: 35288579] [PMCID: PMC8919158] [DOI: 10.1038/s41598-022-07890-1] [Received: 05/25/2021] [Accepted: 02/22/2022]
Abstract
COVID-19 clinical presentation and prognosis are highly variable, ranging from asymptomatic and paucisymptomatic cases to acute respiratory distress syndrome and multi-organ involvement. We developed a hybrid machine learning/deep learning model to classify patients into two outcome categories, non-ICU and ICU (intensive care admission or death), using 558 patients admitted to a northern Italian hospital between February and May 2020. A fully 3D patient-level CNN classifier on baseline CT images is used as a feature extractor. The extracted features, together with laboratory and clinical data, are fed into a Boruta feature-selection algorithm with SHAP game-theoretic values. A classifier is then built on the reduced feature space using the CatBoost gradient boosting algorithm, reaching a probabilistic AUC of 0.949 on the holdout test set. The model aims to provide clinical decision support to medical doctors, with a probability score of belonging to an outcome class and with case-based SHAP interpretation of feature importance.
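The headline metric in this abstract, a probabilistic AUC on the holdout set, is the probability that a randomly chosen ICU-outcome patient is scored above a randomly chosen non-ICU patient. A rank-based stdlib implementation of that metric (an editorial sketch, not the authors' code):

```python
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    case is scored above a randomly chosen negative one. Ties count 0.5.
    `labels` are 0/1 outcomes, `scores` the classifier's probabilities."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The pairwise definition makes the metric insensitive to any monotone rescaling of the probability scores, which is why it suits probabilistic classifiers like CatBoost.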
38
Tricarico D, Calandri M, Barba M, Piatti C, Geninatti C, Basile D, Gatti M, Melis M, Veltri A. Convolutional Neural Network-Based Automatic Analysis of Chest Radiographs for the Detection of COVID-19 Pneumonia: A Prioritizing Tool in the Emergency Department, Phase I Study and Preliminary "Real Life" Results. Diagnostics (Basel) 2022; 12:570. [PMID: 35328122] [PMCID: PMC8947382] [DOI: 10.3390/diagnostics12030570] [Received: 12/25/2021] [Revised: 02/11/2022] [Accepted: 02/15/2022]
Abstract
The aim of our study is to develop an automatic tool for prioritizing the COVID-19 diagnostic workflow in the emergency department by analyzing chest X-rays (CXRs). The Convolutional Neural Network (CNN)-based method we propose was tested retrospectively on a single-center set of 542 CXRs evaluated by experienced radiologists. The SARS-CoV-2 positive dataset (n = 234) consists of CXRs collected between March and April 2020, with the COVID-19 infection confirmed by an RT-PCR test within 24 h. The SARS-CoV-2 negative dataset (n = 308) includes CXRs from 2019, therefore prior to the pandemic. For each image, the CNN computes COVID-19 risk indicators, identifying COVID-19 cases and prioritizing the urgent ones. After installing the software into the hospital RIS, a preliminary comparison between local daily COVID-19 cases and predicted risk indicators for 2918 CXRs in the same period was performed. Significant improvements were obtained for both prioritization and identification using the proposed method. Mean Average Precision (MAP) increased (p < 1.21 × 10−21) from 43.79% with random sorting to 71.75% with our method. CNN sensitivity was 78.23%, higher than the radiologists' 61.1%; specificity was 64.20%. In the real-life setting, the predicted risk indicators correlated with local daily COVID-19 cases (correlation 0.873). The proposed CNN-based system effectively prioritizes CXRs according to COVID-19 risk in an experimental setting; preliminary real-life results revealed high concordance with local pandemic incidence.
Affiliation(s)
- Davide Tricarico
- AITEM Artificial Intelligence Technologies Multipurpose s.r.l., Corso Castelfidardo 36, 10129 Turin, Italy
- Department of Mathematics “G. Peano”, University of Turin, Via Carlo Alberto 10, 10123 Turin, Italy
- Marco Calandri
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Matteo Barba (Correspondence)
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Clara Piatti
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Carlotta Geninatti
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Domenico Basile
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Marco Gatti
- Radiology Unit, Department of Surgical Sciences, University of Turin, Città della Salute e della Scienza di Torino, Corso Bramante, 88/90, 10126 Turin, Italy
- Massimiliano Melis
- AITEM Artificial Intelligence Technologies Multipurpose s.r.l., Corso Castelfidardo 36, 10129 Turin, Italy
- Andrea Veltri
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
39
Guarrasi V, D'Amico NC, Sicilia R, Cordelli E, Soda P. Pareto optimization of deep networks for COVID-19 diagnosis from chest X-rays. PATTERN RECOGNITION 2022; 121:108242. [PMID: 34393277] [PMCID: PMC8351284] [DOI: 10.1016/j.patcog.2021.108242] [Received: 03/31/2021] [Revised: 07/26/2021] [Accepted: 08/08/2021]
Abstract
The year 2020 was characterized by the COVID-19 pandemic, which had caused, by the end of March 2021, more than 2.5 million deaths worldwide. Since the beginning, besides the laboratory test used as the gold standard, many studies have applied deep-learning algorithms to chest X-ray images to recognize COVID-19 infected patients. In this context, we found that convolutional neural networks perform well on a single dataset but struggle to generalize to other data sources. To overcome this limitation, we propose a late fusion approach in which we combine the outputs of several state-of-the-art CNNs, introducing a novel method that allows us to construct an optimal ensemble by determining which and how many base learners should be aggregated. This choice is driven by a two-objective function that maximizes, on a validation set, the accuracy and the diversity of the ensemble itself. A wide set of experiments on several publicly available datasets, accounting for more than 92,000 images, shows that the proposed approach provides average recognition rates up to 93.54% when tested on external datasets.
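The ensemble construction described above searches for candidate ensembles that are not dominated on the two objectives, validation accuracy and diversity. A minimal sketch of the Pareto-front filter at the heart of such a two-objective selection; the candidate tuples and their scores are illustrative, not from the paper:

```python
def pareto_front(candidates):
    """Return names of candidates not dominated on the two objectives.
    Each candidate is a (name, accuracy, diversity) tuple; X dominates Y
    when X is at least as good on both objectives and strictly better on
    at least one."""
    front = []
    for name, acc, div in candidates:
        dominated = any(a >= acc and d >= div and (a > acc or d > div)
                        for _, a, d in candidates)
        if not dominated:
            front.append(name)
    return front
```

A full selection procedure would then pick one ensemble from this front, e.g. by a scalarization of the two objectives or a validation tie-break.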
Affiliation(s)
- Valerio Guarrasi
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy
- Natascha Claudia D'Amico
- Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Milan, Italy
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Rosa Sicilia
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Ermanno Cordelli
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Paolo Soda
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
40
Gillman AG, Lunardo F, Prinable J, Belous G, Nicolson A, Min H, Terhorst A, Dowling JA. Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review. Phys Eng Sci Med 2021; 45:13-29. [PMID: 34919204] [PMCID: PMC8678975] [DOI: 10.1007/s13246-021-01093-0] [Received: 12/08/2021] [Accepted: 12/13/2021]
Abstract
Objectives: To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case; and to conduct a scoping review of the authors publishing such work.
Methods: The Scopus database was queried and studies were screened for article type and minimum source normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk of bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test.
Findings: Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The exclusion ratio for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, area under the receiver operating curve (AUC) = 0.924 ± 0.074 [0.810,0.991] and accuracy = 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC = 0.836 ± 0.126 [0.605,0.980] and accuracy = 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified.
Interpretation: A majority of papers focus on the less clinically impactful diagnosis task, in contrast with prognosis, and a significant portion performs the clinically unnecessary task of differentiating COVID-19 from healthy controls. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
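The review reports its aggregates in mean ± standard deviation [min,max] form (e.g. 8.3 ± 3.9 [1,17] CLAIM failures per study). A small helper reproducing that summary format from raw per-study counts; this is an editorial illustration, not code from the study:

```python
import math

def summarize(values):
    """Format a list of per-study measurements as
    'mean ± sample standard deviation [min,max]'."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return f"{mean:.1f} \u00b1 {sd:.1f} [{min(values)},{max(values)}]"
```

Note the n − 1 denominator (sample standard deviation), the usual convention when summarizing a sample of studies rather than a full population.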
Affiliation(s)
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Febrio Lunardo
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- College of Science and Engineering, James Cook University, Australian Tropical Science Innovation Precinct, Townsville, QLD, 4814, Australia
- Joseph Prinable
- ACRF Image X Institute, University of Sydney, Level 2, Biomedical Building (C81), 1 Central Ave, Australian Technology Park, Eveleigh, Sydney, NSW, 2015, Australia
- Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Aaron Nicolson
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Hang Min
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- Andrew Terhorst
- Data61, Commonwealth Scientific and Industrial Research Organisation, College Road, Sandy Bay, Hobart, TAS, 7005, Australia
- Jason A Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
41
Garcia Santa Cruz B, Bossa MN, Sölter J, Husch AD. Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021. [PMID: 34597937] [DOI: 10.1101/2021.02.15.21251775]
Abstract
Computer-aided diagnosis and stratification of COVID-19 based on chest X-rays suffer from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. By adapting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 of more than one hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in 201 papers published in peer-reviewed journals are not among these 9 datasets, leading to models with a high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.
Affiliation(s)
- Beatriz Garcia Santa Cruz
- Centre Hospitalier de Luxembourg, 4, Rue Ernest Barble, Luxembourg L-1210, Luxembourg
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Matías Nicolás Bossa
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, Brussels B-1050, Belgium
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Andreas Dominik Husch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
42
Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021; 74:102225. [PMID: 34597937] [PMCID: PMC8479314] [DOI: 10.1016/j.media.2021.102225] [Received: 02/09/2021] [Revised: 08/29/2021] [Accepted: 09/02/2021]
Abstract
Computer-aided diagnosis and stratification of COVID-19 based on chest X-rays suffer from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. By adapting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 of more than one hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in 201 papers published in peer-reviewed journals are not among these 9 datasets, leading to models with a high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.