1
Jaiswal A, Fervers P, Meng F, Zhang H, Móré D, Giannakis A, Wailzer J, Bucher AM, Maintz D, Kottlors J, Shahzad R, Persigehl T. Performance of AI Approaches for COVID-19 Diagnosis Using Chest CT Scans: The Impact of Architecture and Dataset. ROFO-FORTSCHR RONTG 2025. [PMID: 40300640 DOI: 10.1055/a-2577-3928]
Abstract
AI is emerging as a promising tool for diagnosing COVID-19 based on chest CT scans. The aim of this study was to compare AI models for COVID-19 diagnosis. Therefore, we: (1) trained three distinct AI models for classifying COVID-19 and non-COVID-19 pneumonia (nCP) using a large, clinically relevant CT dataset, (2) evaluated the models' performance using an independent test set, and (3) compared the models both algorithmically and experimentally. In this multicenter, multi-vendor study, we collected n=1591 chest CT scans of COVID-19 (n=762) and nCP (n=829) patients from China and Germany. In Germany, the data were collected from three RACOON sites. We trained and validated three COVID-19 AI models with different architectures: COVNet based on a 2D-CNN, DeCoVnet based on a 3D-CNN, and AD3D-MIL based on a 3D-CNN with an attention module. 991 CT scans were used for training the AI models using 5-fold cross-validation, and 600 CT scans from 6 different centers were used for independent testing. The models' performance was evaluated using accuracy (Acc), sensitivity (Se), and specificity (Sp). The average validation accuracy of the COVNet, DeCoVnet, and AD3D-MIL models over the 5 folds was 80.9%, 82.0%, and 84.3%, respectively. On the independent test set with n=600 CT scans, COVNet yielded Acc=76.6%, Se=67.8%, Sp=85.7%; DeCoVnet provided Acc=75.1%, Se=61.2%, Sp=89.7%; and AD3D-MIL achieved Acc=73.9%, Se=57.7%, Sp=90.8%. The classification performance of the evaluated AI models depends more on the training data than on the architecture itself. Our results demonstrate high specificity and moderate sensitivity. The AI classification models should not be used unsupervised but could potentially assist radiologists in COVID-19 and nCP identification. Key points: (1) This study compares AI approaches for diagnosing COVID-19 in chest CT scans, which is essential for further optimizing the delivery of healthcare and for pandemic preparedness. (2) Our experiments using a multicenter, multi-vendor, diverse dataset show that the training data is the key factor in determining diagnostic performance. (3) The AI models should not be used unsupervised but as a tool to assist radiologists. Citation: Jaiswal A, Fervers P, Meng F et al. Performance of AI Approaches for COVID-19 Diagnosis Using Chest CT Scans: The Impact of Architecture and Dataset. Rofo 2025; DOI 10.1055/a-2577-3928.
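The accuracy, sensitivity, and specificity figures reported above follow the standard binary confusion-matrix definitions. A minimal sketch (not the study's code; the labels and predictions are placeholders) of how such metrics can be computed with scikit-learn:

```python
# Minimal sketch (not the study's code): computing accuracy, sensitivity, and
# specificity for a binary COVID-19 vs. nCP classifier on an independent test
# set. Labels and predictions below are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    # Convention: 1 = COVID-19 (positive class), 0 = nCP (negative class).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)   # sensitivity: recall of the COVID-19 class
    sp = tn / (tn + fp)   # specificity: recall of the nCP class
    return acc, se, sp

y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # toy ground truth
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 0])   # toy model output
acc, se, sp = binary_metrics(y_true, y_pred)
print(f"Acc={acc:.1%}, Se={se:.1%}, Sp={sp:.1%}")
```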
Affiliation(s)
- Astha Jaiswal
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Philipp Fervers
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Fanyang Meng
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Dorottya Móré
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, University of Heidelberg, Heidelberg, Germany
| | - Athanasios Giannakis
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, University of Heidelberg, Heidelberg, Germany
| | - Jasmin Wailzer
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - Andreas Michael Bucher
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Jonathan Kottlors
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Philips Healthcare, Innovative Technologies, Aachen, Germany
| | - Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
2
Cao S, Liu L, Yang L, Li H, Zhu R, Yu G, Jiao N, Wu D. Assessing severe pneumonia risk in children via clinical prognostic model based on laboratory markers. Int Immunopharmacol 2025; 151:114317. [PMID: 39983420 DOI: 10.1016/j.intimp.2025.114317]
Abstract
Pneumonia represents a significant cause of mortality in children globally, emphasizing the importance of identifying high-risk patients to improve clinical outcomes. There is a lack of reliable laboratory markers and robust risk stratification models for clinical decision support in pediatric pneumonia. This study extracted data from the Paediatric Intensive Care database for 749 children under 3 years with severe pneumonia. The relationship between laboratory parameters and prognostic outcomes was evaluated using Cox proportional hazards regression analyses. Oxygen saturation, hemoglobin, lipase, urea, and uric acid were identified as laboratory parameters significantly associated with severe pneumonia outcomes. Leveraging these laboratory markers, a prognosis model was constructed employing the XGBoost classifier. The model was validated in a hold-out test cohort and an external validation cohort, with its performance assessed by the area under the receiver operating characteristic curve (AUC). The validation cohort was derived from 129 children with severe pneumonia admitted to the PICU of the Children's Hospital, Zhejiang University School of Medicine in 2019. The model demonstrated efficacy in predicting the death and survival of patients (AUC = 0.943), as well as in distinguishing between children at high- and low-risk of death in advance (HR = 2.930, 95 % CI: 2.551-3.366, P < 0.001). The robust performance of this model was further validated in the test cohort (AUC = 0.871), and the validation cohort (AUC = 0.872). In conclusion, this novel model enables the prediction of individualized mortality risk in children diagnosed with severe pneumonia, offering personalized risk assessments to inform and enhance clinical decision-making processes.
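As an illustration of the modeling step described above, the following hedged sketch trains an XGBoost classifier on five hypothetical laboratory-marker columns and reports a hold-out AUC; the data are synthetic and the hyperparameters are assumptions, not the published model.

```python
# Illustrative sketch only, not the published model: an XGBoost classifier on
# the five laboratory markers named in the abstract, evaluated by AUC on a
# hold-out split. All data below are synthetic; hyperparameters are guesses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 749
X = rng.normal(size=(n, 5))          # SpO2, hemoglobin, lipase, urea, uric acid (fake)
y = rng.integers(0, 2, size=n)       # 1 = died, 0 = survived (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUC = {auc:.3f}")   # ~0.5 here because the data are random
```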
Affiliation(s)
- Suqi Cao
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China
| | - Lei Liu
- The Shanghai Tenth People's Hospital, School of Life Sciences and Technology, Tongji University, Shanghai 200072, PR China
| | - Liu Yang
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China
| | - Haomin Li
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China
| | - Ruixin Zhu
- The Shanghai Tenth People's Hospital, School of Life Sciences and Technology, Tongji University, Shanghai 200072, PR China
| | - Gang Yu
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China
| | - Na Jiao
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China.
| | - Dingfeng Wu
- The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310053, PR China.
3
Dai X, Lu H, Wang X, Liu Y, Zang J, Liu Z, Sun T, Gao F, Sui X. Ultrasound-based artificial intelligence model for prediction of Ki-67 proliferation index in soft tissue tumors. Acad Radiol 2025; 32:1178-1188. [PMID: 39406581 DOI: 10.1016/j.acra.2024.09.067]
Abstract
RATIONALE AND OBJECTIVES To investigate the value of deep learning (DL) combined with radiomics and clinical and imaging features in predicting the Ki-67 proliferation index of soft tissue tumors (STTs). MATERIALS AND METHODS In this retrospective study, a total of 394 patients with STTs admitted from January 2021 to December 2023 in two separate hospitals were collected. Hospital-1 was the training cohort (323 cases, of which 89 and 234 were high and low Ki-67, respectively) and Hospital-2 was the external validation cohort (71 cases, of which 23 and 48 were high and low Ki-67, respectively). Clinical and ultrasound characteristics including age, sex, tumor size, morphology, margins, internal echoes and blood flow were assessed. Risk factors with significant correlations were screened by univariate and multivariate logistic regression analyses. After extracting the radiomics and DL features, the feature fusion model is constructed by Support Vector Machine. The prediction results obtained from separate clinical features, radiomics features and DL features were combined to construct decision fusion models. Finally, the DeLong test was used to compare whether the AUCs between the models were significantly different. RESULTS The three feature fusion models and three decision fusion models constructed demonstrated excellent diagnostic performance in predicting Ki-67 expression levels in STTs. Among them, the feature fusion model based on clinical, radiomics, and DL performed the best with an AUC of 0.911 (95% CI: 0.886-0.935) in the training cohort and 0.923 (95% CI: 0.873-0.972) in the validation cohort, and proved to be well-calibrated and clinically useful. The DeLong test showed that the decision fusion models based on clinical, radiomics and DL performed significantly worse than the three feature fusion models on the validation set. There was no statistical difference in diagnostic performance between the other models. CONCLUSION The ultrasound-based fusion model of clinical, radiomics, and DL features showed good performance in predicting Ki-67 expression levels in STTs.
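The feature-level fusion described above can be pictured as concatenating the clinical, radiomics, and deep-learning feature blocks before fitting a single SVM. A hedged sketch with synthetic placeholder arrays (not the authors' pipeline; feature counts are assumptions):

```python
# Hedged sketch of feature-level fusion (not the authors' pipeline): clinical,
# radiomics, and deep-learning feature blocks are concatenated and fed to one
# SVM. Array shapes and contents are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 323                                  # training-cohort size from the abstract
clinical = rng.normal(size=(n, 6))       # e.g., age, size, margin codes (assumed)
radiomics = rng.normal(size=(n, 50))     # selected radiomics features (assumed)
deep = rng.normal(size=(n, 128))         # DL embedding per lesion (assumed)
y = rng.integers(0, 2, size=n)           # 1 = high Ki-67, 0 = low Ki-67 (synthetic)

X = np.concatenate([clinical, radiomics, deep], axis=1)   # feature-level fusion
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print("apparent AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```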
Affiliation(s)
- Xinpeng Dai
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (X.D., X.W., Y.L., Z.L., X.S.).
| | - Haiyong Lu
- Department of Ultrasound, The First Affiliated Hospital of Hebei North University, Zhangjiakou, Hebei, China (H.L.).
| | - Xinying Wang
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (X.D., X.W., Y.L., Z.L., X.S.).
| | - Yujia Liu
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (X.D., X.W., Y.L., Z.L., X.S.).
| | - Jiangnan Zang
- Hebei Medical University, Shijiazhuang, Hebei province, China (J.Z.).
| | - Zongjie Liu
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (X.D., X.W., Y.L., Z.L., X.S.).
| | - Tao Sun
- Department of Orthopaedic Oncology, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (T.S.).
| | - Feng Gao
- Department of Pathology, The Third Hospital of Hebei Medical University, Shijiazhuang, Hebei province, China (G.F.).
| | - Xin Sui
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei province, China (X.D., X.W., Y.L., Z.L., X.S.).
4
Chen Y, Zhou B, Xiaopeng C, Ma C, Cui L, Lei F, Han X, Chen L, Wu S, Ye D. A method of deep network auto-training based on the MTPI auto-transfer learning and a reinforcement learning algorithm for vegetation detection in a dry thermal valley environment. FRONTIERS IN PLANT SCIENCE 2025; 15:1448669. [PMID: 40017619 PMCID: PMC11864880 DOI: 10.3389/fpls.2024.1448669]
Abstract
UAV image acquisition and deep learning techniques have been widely used in field hydrological monitoring to meet increasing demands for data volume and quality. However, manual parameter training incurs trial-and-error (T&E) costs, and existing auto-training approaches are suited to simple datasets and network structures, which limits their practicality in unstructured environments such as the dry thermal valley (DTV) environment. Therefore, this research combined a transfer learning method (MTPI, maximum transfer potential index) and a reinforcement learning method (MTSA, Multi-Thompson Sampling Algorithm) for dataset auto-augmentation and network auto-training, reducing the reliance on human experience and T&E. Firstly, to maximize iteration speed and minimize dataset consumption, the best iteration conditions (MTPI conditions) were derived with the improved MTPI method, which showed that subsequent iterations required only 2.30% of the dataset and 6.31% of the time cost. Then, the MTSA was improved under MTPI conditions (MTSA-MTPI) to auto-augment datasets, and the results showed a 16.0% improvement in accuracy (human error) and a 20.9% reduction in standard error (T&E cost). Finally, the MTPI-MTSA was used to auto-train four networks (FCN, Seg-Net, U-Net, and Seg-Res-Net 50); the best model, Seg-Res-Net 50, achieved 95.2% WPA (accuracy) and 90.9% WIoU. This study provides an effective auto-training method for collecting complex vegetation information and a reference for reducing manual intervention in deep learning.
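The MTSA builds on Thompson Sampling. As a simplified stand-in (not the paper's MTSA-MTPI implementation), the following sketch runs classical Bernoulli Thompson Sampling over a few invented augmentation policies with made-up reward probabilities:

```python
# Classical Bernoulli Thompson Sampling over a few candidate augmentation
# policies -- a simplified stand-in for the paper's MTSA; the policy names and
# reward probabilities are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
policies = ["flip", "rotate", "color-jitter", "crop"]   # hypothetical arms
true_gain = [0.55, 0.62, 0.48, 0.58]                    # unknown to the agent

alpha = np.ones(len(policies))   # Beta-posterior "successes" per arm
beta = np.ones(len(policies))    # Beta-posterior "failures" per arm

for step in range(2000):
    theta = rng.beta(alpha, beta)            # sample a win rate for each arm
    arm = int(np.argmax(theta))              # pick the most promising policy
    reward = rng.random() < true_gain[arm]   # did validation accuracy improve?
    alpha[arm] += reward
    beta[arm] += 1 - reward

best = int(np.argmax(alpha / (alpha + beta)))
print("policy selected most confidently:", policies[best])
```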
Affiliation(s)
- Yayong Chen
- State Key Laboratory of Eco-hydraulics in Northwest Arid Region, Xi’an University of Technology, Xi’an, China
- School of water resources and hydro-electric engineering of XUT, Xi’an University of Technology, Xi’an, China
| | - Beibei Zhou
- State Key Laboratory of Eco-hydraulics in Northwest Arid Region, Xi’an University of Technology, Xi’an, China
- School of water resources and hydro-electric engineering of XUT, Xi’an University of Technology, Xi’an, China
| | - Chen Xiaopeng
- State Key Laboratory of Eco-hydraulics in Northwest Arid Region, Xi’an University of Technology, Xi’an, China
- School of water resources and hydro-electric engineering of XUT, Xi’an University of Technology, Xi’an, China
| | - Changkun Ma
- State Key Laboratory of Eco-hydraulics in Northwest Arid Region, Xi’an University of Technology, Xi’an, China
- School of water resources and hydro-electric engineering of XUT, Xi’an University of Technology, Xi’an, China
| | - Lei Cui
- China Renewable Energy Engineering Institute, Beijing, China
| | - Feng Lei
- Central South Survey and Design Institute Group Co., Ltd., Changsha, China
| | - Xiaojie Han
- China Electric Construction Group Beijing Survey and Design Institute Co., Beijing, China
| | - Linjie Chen
- Center for Artificial Intelligence in Agriculture, School of Future Technology, Fujian Agriculture and Forestry University, Fuzhou, China
- Fujian Key Laboratory of Agricultural Information Sensoring Technology, Fujian Agriculture and Forestry University, Fuzhou, China
| | - Shanshan Wu
- State Key Laboratory of Eco-hydraulics in Northwest Arid Region, Xi’an University of Technology, Xi’an, China
- School of water resources and hydro-electric engineering of XUT, Xi’an University of Technology, Xi’an, China
| | - Dapeng Ye
- Fujian Key Laboratory of Agricultural Information Sensoring Technology, Fujian Agriculture and Forestry University, Fuzhou, China
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou, China
5
Ahmad IS, Dai J, Xie Y, Liang X. Deep learning models for CT image classification: a comprehensive literature review. Quant Imaging Med Surg 2025; 15:962-1011. [PMID: 39838987 PMCID: PMC11744119 DOI: 10.21037/qims-24-1400]
Abstract
Background and Objective Computed tomography (CT) imaging plays a crucial role in the early detection and diagnosis of life-threatening diseases, particularly in respiratory illnesses and oncology. The rapid advancement of deep learning (DL) has revolutionized CT image analysis, enhancing diagnostic accuracy and efficiency. This review explores the impact of advanced DL methodologies in CT imaging, with a particular focus on their applications in coronavirus disease 2019 (COVID-19) detection and lung nodule classification. Methods A comprehensive literature search was conducted, examining the evolution of DL architectures in medical imaging from conventional convolutional neural networks (CNNs) to sophisticated foundational models (FMs). We reviewed publications from major databases, focusing on developments in CT image analysis using DL from 2013 to 2023. Our search criteria included all types of articles, with a focus on peer-reviewed research papers and review articles in English. Key Content and Findings The review reveals that DL, particularly advanced architectures like FMs, has transformed CT image analysis by streamlining interpretation processes and enhancing diagnostic capabilities. We found significant advancements in addressing global health challenges, especially during the COVID-19 pandemic, and in ongoing efforts for lung cancer screening. The review also addresses technical challenges in CT image analysis, including data variability, the need for large high-quality datasets, and computational demands. Innovative strategies such as transfer learning, data augmentation, and distributed computing are explored as solutions to these challenges. Conclusions This review underscores the pivotal role of DL in advancing CT image analysis, particularly for COVID-19 and lung nodule detection. The integration of DL models into clinical workflows shows promising potential to enhance diagnostic accuracy and efficiency. However, challenges remain in areas of interpretability, validation, and regulatory compliance. The review advocates for continued research, interdisciplinary collaboration, and ethical considerations as DL technologies become integral to clinical practice. While traditional imaging techniques remain vital, the integration of DL represents a significant advancement in medical diagnostics, with far-reaching implications for future research, clinical practice, and healthcare policy.
Affiliation(s)
- Isah Salim Ahmad
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Jingjing Dai
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
6
Aravinda CV, Sudeepa KB, Pradeep S, Suraksha P, Lin M. Leveraging compact convolutional transformers for enhanced COVID-19 detection in chest X-rays: a grad-CAM visualization approach. Front Big Data 2024; 7:1489020. [PMID: 39736985 PMCID: PMC11683681 DOI: 10.3389/fdata.2024.1489020]
Affiliation(s)
- Aravinda C. V
- Department of Computer Science and Engineering, NITTE Mahalinga Adyantaya Memorial Institute of Technology, NITTE Deemed to Be University, Karkala, Karnataka, India
| | - Sudeepa K. B
- Department of Computer Science and Engineering, NITTE Mahalinga Adyantaya Memorial Institute of Technology, NITTE Deemed to Be University, Karkala, Karnataka, India
| | - S. Pradeep
- Department of Computer Science and Engineering, Government Engineering College, Chamarajanagar, Karnataka, India
| | - P. Suraksha
- Department of Computer Science and Engineering, Vidhya Vardhaka College of Engineering, Mysore, Karnataka, India
| | - Meng Lin
- Department of Electronic and Computer Engineering (The Graduate School of Science and Engineering), Ritsumeikan University, Kusatsu, Shiga, Japan
7
Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:2047-2062. [PMID: 38491236 PMCID: PMC11522248 DOI: 10.1007/s10278-024-01062-5]
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow to achieve diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) acquired at roughly 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first novel network learns the restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT and then performs lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground-glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the input degraded LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion while comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
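The combination of perceptual and adversarial losses mentioned above can be sketched as a single generator objective. A conceptual PyTorch fragment under stated assumptions (generic discriminator logits, any frozen feature extractor, and an assumed weighting factor; not the paper's network):

```python
# Conceptual PyTorch fragment (assumptions, not the paper's network): a
# generator objective mixing an adversarial term with a perceptual term
# computed on features from any frozen feature extractor.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(d_fake_logits, feat_fake, feat_real, lam_perc=10.0):
    """d_fake_logits: discriminator logits on generated SR images;
    feat_fake / feat_real: feature maps of generated vs. reference images."""
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))  # fool the discriminator
    perc = l1(feat_fake, feat_real)                           # perceptual closeness
    return adv + lam_perc * perc                              # weighting is an assumption

# Toy tensors just to show the call signature.
logits = torch.randn(4, 1)
f_fake, f_real = torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32)
print(float(generator_loss(logits, f_fake, f_real)))
```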
Affiliation(s)
- Swati Rai
- Indian Institute of Information Technology Vadodara, Vadodara, India.
| | - Jignesh S Bhatt
- Indian Institute of Information Technology Vadodara, Vadodara, India
8
Zhu Y, Wang S, Yu H, Li W, Tian J. SFPL: Sample-specific fine-grained prototype learning for imbalanced medical image classification. Med Image Anal 2024; 97:103281. [PMID: 39106764 DOI: 10.1016/j.media.2024.103281]
Abstract
Imbalanced classification is a common and difficult task in many medical image analysis applications. However, most existing approaches focus on balancing the feature distribution and classifier weights between classes, while ignoring intra-class heterogeneity and the individuality of each sample. In this paper, we propose a sample-specific fine-grained prototype learning (SFPL) method to learn a fine-grained representation of the majority class and to learn a cosine classifier specifically for each sample, such that the classification model is highly tuned to the individual's characteristics. SFPL first builds multiple prototypes to represent the majority class and then updates the prototypes through a mixture weighting strategy. Moreover, we propose a uniform loss based on set representations to make the fine-grained prototypes distribute uniformly. To establish associations between the fine-grained prototypes and the cosine classifier, we propose a selective attention aggregation module to select the effective fine-grained prototypes for the final classification. Extensive experiments on three different tasks demonstrate that SFPL outperforms state-of-the-art (SOTA) methods. Importantly, as the imbalance ratio increases from 10 to 100, the improvement of SFPL over SOTA methods increases from 2.2% to 2.4%; as the training data decreases from 800 to 100 samples, the improvement of SFPL over SOTA methods increases from 2.2% to 3.8%.
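A cosine classifier of the kind referred to above scores classes by scaled cosine similarity between L2-normalised features and class weight vectors. A minimal PyTorch sketch (generic formulation with arbitrary dimensions and scale, not the SFPL implementation):

```python
# Minimal cosine-classifier sketch (generic formulation, not the SFPL code):
# class scores are scaled cosine similarities between L2-normalised features
# and L2-normalised class weight vectors. Dimensions are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, x):
        x = F.normalize(x, dim=-1)              # unit-norm features
        w = F.normalize(self.weight, dim=-1)    # unit-norm class vectors
        return self.scale * x @ w.t()           # cosine-similarity logits

clf = CosineClassifier(feat_dim=256, num_classes=2)
logits = clf(torch.randn(8, 256))               # (8, 2) class logits
```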
Affiliation(s)
- Yongbei Zhu
- Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, School of Engineering Medicine, Beihang University, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, China
| | - Shuo Wang
- Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, School of Engineering Medicine, Beihang University, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, China.
| | - He Yu
- Department of Critical Care and Respiratory Medicine, West China Hospital of Sichuan University, Chengdu, China
| | - Weimin Li
- Department of Critical Care and Respiratory Medicine, West China Hospital of Sichuan University, Chengdu, China
| | - Jie Tian
- Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, School of Engineering Medicine, Beihang University, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, China
9
Zhao M, Song L, Zhu J, Zhou T, Zhang Y, Chen SC, Li H, Cao D, Jiang YQ, Ho W, Cai J, Ren G. Non-contrasted computed tomography (NCCT) based chronic thromboembolic pulmonary hypertension (CTEPH) automatic diagnosis using cascaded network with multiple instance learning. Phys Med Biol 2024; 69:185011. [PMID: 39191289 DOI: 10.1088/1361-6560/ad7455]
Abstract
Objective. The diagnosis of chronic thromboembolic pulmonary hypertension (CTEPH) is challenging due to nonspecific early symptoms, complex diagnostic processes, and small lesion sizes. This study aims to develop an automatic diagnosis method for CTEPH using non-contrasted computed tomography (NCCT) scans, enabling automated diagnosis without precise lesion annotation. Approach. A novel cascade network (CN) with multiple instance learning (CNMIL) framework was developed to improve the diagnosis of CTEPH. This method uses a CN architecture combining two ResNet-18 CNN networks to progressively distinguish between normal and CTEPH cases. Multiple instance learning (MIL) is employed to treat each 3D CT case as a 'bag' of image slices, using attention scoring to identify the most important slices. An attention module helps the model focus on diagnostically relevant regions within each slice. The dataset comprised NCCT scans from 300 subjects, including 117 males and 183 females, with an average age of 52.5 ± 20.9 years, consisting of 132 normal cases and 168 cases of lung diseases, including 88 cases of CTEPH. The CNMIL framework was evaluated using sensitivity, specificity, and the area under the curve (AUC) metrics, and compared with common 3D supervised classification networks and existing CTEPH automatic diagnosis networks. Main results. The CNMIL framework demonstrated high diagnostic performance, achieving an AUC of 0.807, accuracy of 0.833, sensitivity of 0.795, and specificity of 0.849 in distinguishing CTEPH cases. Ablation studies revealed that integrating MIL and the CN significantly enhanced performance, with the model achieving an AUC of 0.978 and perfect sensitivity (1.000) in normal classification. Comparisons with other 3D network architectures confirmed that the integrated model outperformed others, achieving the highest AUC of 0.8419. Significance. The CNMIL network requires no additional scans or annotations, relying solely on NCCT. This approach can improve timely and accurate CTEPH detection, resulting in better patient outcomes.
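Attention-based MIL pooling of slice features, as used conceptually in CNMIL, can be sketched as follows; this is a generic formulation with assumed dimensions, not the authors' code:

```python
# Sketch of attention-based multiple instance learning pooling over CT slices
# (generic formulation, not the CNMIL code): each slice embedding gets an
# attention score, and the bag representation is their weighted sum.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, slice_feats):          # (num_slices, feat_dim) for one scan
        a = self.attention(slice_feats)      # (num_slices, 1) unnormalised scores
        a = torch.softmax(a, dim=0)          # attention weights over the bag
        bag = (a * slice_feats).sum(dim=0)   # (feat_dim,) case-level embedding
        return self.classifier(bag), a       # case-level logits + slice weights

mil = AttentionMILPooling()
logits, weights = mil(torch.randn(120, 512))   # e.g., 120 slices from one NCCT scan
```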
Affiliation(s)
- Mayang Zhao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Ta Zhou
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Yuanpeng Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Shu-Cheng Chen
- School of Nursing, Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Haojiang Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Centre, Guangzhou, People's Republic of China
| | - Di Cao
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Centre, Guangzhou, People's Republic of China
| | - Yi-Quan Jiang
- Department of Minimally Invasive Interventional Therapy, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, People's Republic of China
| | - Waiyin Ho
- Department of Nuclear Medicine, Queen Mary Hospital, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
| | - Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
10
Cai Y, Fu H, Yin J, Ding Y, Hu Y, He H, Huang J. A novel AI-based diagnostic model for pertussis pneumonia. Medicine (Baltimore) 2024; 103:e39457. [PMID: 39183423 PMCID: PMC11346885 DOI: 10.1097/md.0000000000039457]
Abstract
It is still very difficult to diagnose pertussis based on a doctor's experience alone. Our aim was to develop a model based on machine learning algorithms combined with biochemical blood tests to diagnose pertussis. A total of 295 patients with pertussis and 295 patients with non-pertussis lower respiratory infections between January 2022 and January 2023, matched for age and gender ratio, were included in our study. Patients underwent a reverse transcription polymerase chain reaction test for pertussis and other viruses. Univariate logistic regression analysis was used to screen for clinical and blood biochemical features associated with pertussis. The optimal features and 3 machine learning algorithms, including K-nearest neighbor, support vector machine, and eXtreme Gradient Boosting (XGBoost), were used to develop diagnostic models. Using univariate logistic regression analysis, 18 out of the 27 features were considered optimal features associated with pertussis. The XGBoost model was significantly superior to both the support vector machine model (DeLong test, P = .01) and the K-nearest neighbor model (DeLong test, P = .01), with an area under the receiver operating characteristic curve of 0.96 and an accuracy of 0.923. Our diagnostic model, based on blood biochemical test results at admission and the XGBoost algorithm, can help doctors effectively diagnose pertussis.
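The abstract compares models with the DeLong test. As an illustrative substitute (a paired bootstrap on synthetic scores, not the DeLong procedure itself), the difference in AUC between two models can be estimated as follows:

```python
# Paired bootstrap estimate of the AUC difference between two classifiers --
# an illustrative substitute for the DeLong test; all scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 200
y = rng.integers(0, 2, size=n)                      # true labels (synthetic)
scores_a = y * 0.6 + rng.normal(scale=0.5, size=n)  # "XGBoost-like" scores
scores_b = y * 0.4 + rng.normal(scale=0.5, size=n)  # weaker comparator

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)                # resample cases with replacement
    if len(np.unique(y[idx])) < 2:
        continue                                    # AUC needs both classes present
    diffs.append(roc_auc_score(y[idx], scores_a[idx]) -
                 roc_auc_score(y[idx], scores_b[idx]))

diffs = np.array(diffs)
ci = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference = {diffs.mean():.3f}, 95% bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```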
Affiliation(s)
- Yihong Cai
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Hong Fu
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Jun Yin
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Yang Ding
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Yanghong Hu
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Hong He
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
| | - Jing Huang
- Department of Pediatrics, Chongqing University Jiangjin Hospital, Chongqing, P.R. China
11
Shiri I, Salimi Y, Sirjani N, Razeghi B, Bagherieh S, Pakbin M, Mansouri Z, Hajianfar G, Avval AH, Askari D, Ghasemian M, Sandoughdaran S, Sohrabi A, Sadati E, Livani S, Iranpour P, Kolahi S, Khosravi B, Bijari S, Sayfollahi S, Atashzar MR, Hasanian M, Shahhamzeh A, Teimouri A, Goharpey N, Shirzad-Aski H, Karimi J, Radmard AR, Rezaei-Kalantari K, Oghli MG, Oveisi M, Vafaei Sadr A, Voloshynovskiy S, Zaidi H. Differential privacy preserved federated learning for prognostic modeling in COVID-19 patients using large multi-institutional chest CT dataset. Med Phys 2024; 51:4736-4747. [PMID: 38335175 DOI: 10.1002/mp.16964]
Abstract
BACKGROUND Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology still needs to be improved. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification respective to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using centralized and FL approaches. For FL, we employed DPFL approaches. A membership inference attack was also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test sets. In addition, models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. A mean AUC of 0.82 and 0.81 with 95% confidence intervals of (95% CI: 0.79-0.85) and (95% CI: 0.77-0.84) were achieved by the centralized model and the DPFL model, respectively. The DeLong test did not show statistically significant differences between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. CONCLUSION The performance of the proposed model was comparable to centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
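The federated setup described above can be caricatured as clipped, noise-perturbed averaging of per-site model updates. A toy numpy sketch (all names and numbers are assumptions; this is not the study's DPFL implementation and provides no formal privacy accounting):

```python
# Toy illustration of the federated idea (not the study's DPFL code): each
# site returns a local model delta; deltas are norm-clipped, Gaussian noise is
# added, and the server averages them into the global model.
import numpy as np

rng = np.random.default_rng(0)
dim = 10                                   # pretend the model has 10 parameters
global_w = np.zeros(dim)

def local_update(w, site_seed):
    """Stand-in for one site's local training: returns a model delta."""
    local_rng = np.random.default_rng(site_seed)
    return local_rng.normal(loc=0.1, scale=0.05, size=w.shape)

def dp_aggregate(deltas, clip=1.0, noise_std=0.1):
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(scale=noise_std * clip / len(deltas), size=avg.shape)

for round_ in range(5):                    # a few federated rounds
    deltas = [local_update(global_w, seed) for seed in range(19)]   # 19 centers
    global_w = global_w + dp_aggregate(deltas)

print("global weights after 5 rounds:", np.round(global_w, 3))
```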
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Nasim Sirjani
- Research and Development Department, Med Fanavarn Plus Co, Karaj, Iran
| | - Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
| | - Sara Bagherieh
- School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Masoumeh Pakbin
- Imaging Department, Qom University of Medical Sciences, Qom, Iran
| | - Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | | | - Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mohammadreza Ghasemian
- Department of Radiology, Shahid Beheshti Hospital, Qom University of Medical Sciences, Qom, Iran
| | - Saleh Sandoughdaran
- Department of Clinical Oncology, Royal Surrey County Hospital, Guildford, UK
| | - Ahmad Sohrabi
- Radin Makian Azma Mehr Ltd., Radinmehr Veterinary Laboratory, Iran University of Medical Sciences, Gorgan, Iran
| | - Elham Sadati
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
| | - Somayeh Livani
- Clinical Research Development Unit (CRDU), Sayad Shirazi Hospital, Golestan University of Medical Sciences, Gorgan, Iran
| | - Pooya Iranpour
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Shahriar Kolahi
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Bardia Khosravi
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
| | - Salar Bijari
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
| | - Sahar Sayfollahi
- Department of Neurosurgery, Faculty of Medical Sciences, Iran University of Medical Sciences, Tehran, Iran
| | - Mohammad Reza Atashzar
- Department of Immunology, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
| | - Mohammad Hasanian
- Department of Radiology, Arak University of Medical Sciences, Arak, Iran
| | - Alireza Shahhamzeh
- Clinical research development center, Qom University of Medical Sciences, Qom, Iran
| | - Arash Teimouri
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Neda Goharpey
- Department of radiation oncology, Shohada-e Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | | | - Jalal Karimi
- Department of Infectious Disease, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
| | - Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Kiara Rezaei-Kalantari
- Rajaie Cardiovascular, Medical & Research Center, Iran University of Medical Science, Tehran, Iran
| | | | - Mehrdad Oveisi
- Department of Computer Science, University of British Columbia, Vancouver, British Columbia, Canada
| | - Alireza Vafaei Sadr
- Department of Public Health Sciences, College of Medicine, Pennsylvania State University, Hershey, Pennsylvania, USA
| | | | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
12
Zhou J, Zhou L, Wang D, Xu X, Li H, Chu Y, Han W, Gao X. Personalized and privacy-preserving federated heterogeneous medical image analysis with PPPML-HMI. Comput Biol Med 2024; 169:107861. [PMID: 38141449 DOI: 10.1016/j.compbiomed.2023.107861]
Abstract
Heterogeneous data is endemic due to the use of diverse models and settings of devices by hospitals in the field of medical imaging. However, there are few open-source frameworks for federated heterogeneous medical image analysis with personalization and privacy protection without the demand to modify the existing model structures or to share any private data. Here, we proposed PPPML-HMI, a novel open-source learning paradigm for personalized and privacy-preserving federated heterogeneous medical image analysis. To our best knowledge, personalization and privacy protection were discussed simultaneously for the first time under the federated scenario by integrating the PerFedAvg algorithm and designing the novel cyclic secure aggregation with the homomorphic encryption algorithm. To show the utility of PPPML-HMI, we applied it to a simulated classification task namely the classification of healthy people and patients from the RAD-ChestCT Dataset, and one real-world segmentation task namely the segmentation of lung infections from COVID-19 CT scans. Meanwhile, we applied the improved deep leakage from gradients to simulate adversarial attacks and showed the strong privacy-preserving capability of PPPML-HMI. By applying PPPML-HMI to both tasks with different neural networks, a varied number of users, and sample sizes, we demonstrated the strong generalizability of PPPML-HMI in privacy-preserving federated learning on heterogeneous medical images.
Affiliation(s)
- Juexiao Zhou
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Longxi Zhou
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Di Wang
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Xiaopeng Xu
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Haoyang Li
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Yuetan Chu
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Wenkai Han
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
| | - Xin Gao
- Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia; Computational Bioscience Research Center, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia.
13
Viderman D, Kotov A, Popov M, Abdildin Y. Machine and deep learning methods for clinical outcome prediction based on physiological data of COVID-19 patients: a scoping review. Int J Med Inform 2024; 182:105308. [PMID: 38091862 DOI: 10.1016/j.ijmedinf.2023.105308]
Abstract
INTRODUCTION Since the beginning of the COVID-19 pandemic, numerous machine and deep learning (MDL) methods have been proposed in the literature to analyze patient physiological data. The objective of this review is to summarize various aspects of these methods and assess their practical utility for predicting various clinical outcomes. METHODS We searched PubMed, Scopus, and Cochrane Library, screened and selected the studies matching the inclusion criteria. The clinical analysis focused on the characteristics of the patient cohorts in the studies included in this review, the specific tasks in the context of the COVID-19 pandemic that machine and deep learning methods were used for, and their practical limitations. The technical analysis focused on the details of specific MDL methods and their performance. RESULTS Analysis of the 48 selected studies revealed that the majority (∼54 %) of them examined the application of MDL methods for the prediction of survival/mortality-related patient outcomes, while a smaller fraction (∼13 %) of studies also examined applications to the prediction of patients' physiological outcomes and hospital resource utilization. 21 % of the studies examined the application of MDL methods to multiple clinical tasks. Machine and deep learning methods have been shown to be effective at predicting several outcomes of COVID-19 patients, such as disease severity, complications, intensive care unit (ICU) transfer, and mortality. MDL methods also achieved high accuracy in predicting the required number of ICU beds and ventilators. CONCLUSION Machine and deep learning methods have been shown to be valuable tools for predicting disease severity, organ dysfunction and failure, patient outcomes, and hospital resource utilization during the COVID-19 pandemic. The discovered knowledge and our conclusions and recommendations can also be useful to healthcare professionals and artificial intelligence researchers in managing future pandemics.
Affiliation(s)
- Dmitriy Viderman
- Department of Surgery, School of Medicine, Nazarbayev University, Astana, Kazakhstan; Department of Anesthesiology, Intensive Care, and Pain Medicine, National Research Oncology Center, Astana, Kazakhstan.
| | - Alexander Kotov
- Department of Computer Science, College of Engineering, Wayne State University, Detroit, USA.
| | - Maxim Popov
- Department of Computer Science, School of Engineering and Digital Sciences, Nazarbayev University, Astana, Kazakhstan.
| | - Yerkin Abdildin
- Department of Mechanical and Aerospace Engineering, School of Engineering and Digital Sciences, Nazarbayev University, Astana, Kazakhstan.
14
Taddese AA, Tilahun BC, Awoke T, Atnafu A, Mamuye A, Mengiste SA. Deep-learning models for image-based gynecological cancer diagnosis: a systematic review and meta-analysis. Front Oncol 2024; 13:1216326. [PMID: 38273847 PMCID: PMC10809847 DOI: 10.3389/fonc.2023.1216326]
Abstract
Introduction Gynecological cancers pose a significant threat to women worldwide, especially those in resource-limited settings. Human analysis of images remains the primary method of diagnosis, but it can be inconsistent and inaccurate. Deep learning (DL) can potentially enhance image-based diagnosis by providing objective and accurate results. This systematic review and meta-analysis aimed to summarize the recent advances of deep learning (DL) techniques for gynecological cancer diagnosis using various images and explore their future implications. Methods The study followed the PRISMA-2 guidelines, and the protocol was registered in PROSPERO. Five databases were searched for articles published from January 2018 to December 2022. Articles that focused on five types of gynecological cancer and used DL for diagnosis were selected. Two reviewers assessed the articles for eligibility and quality using the QUADAS-2 tool. Data was extracted from each study, and the performance of DL techniques for gynecological cancer classification was estimated by pooling and transforming sensitivity and specificity values using a random-effects model. Results The review included 48 studies, and the meta-analysis included 24 studies. The studies used different images and models to diagnose different gynecological cancers. The most popular models were ResNet, VGGNet, and UNet. DL algorithms showed more sensitivity but less specificity compared to machine learning (ML) methods. The AUC of the summary receiver operating characteristic plot was higher for DL algorithms than for ML methods. Of the 48 studies included, 41 were at low risk of bias. Conclusion This review highlights the potential of DL in improving the screening and diagnosis of gynecological cancer, particularly in resource-limited settings. However, the high heterogeneity and quality of the studies could affect the validity of the results. Further research is necessary to validate the findings of this study and to explore the potential of DL in improving gynecological cancer diagnosis.
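Pooling sensitivity (or specificity) under a random-effects model, as described above, is commonly done with the DerSimonian-Laird estimator on logit-transformed proportions. A sketch with invented study values (one common choice of estimator, not necessarily the one used in the review):

```python
# Random-effects pooling (DerSimonian-Laird) of logit-transformed sensitivities;
# the per-study values below are invented for illustration only.
import numpy as np

sens = np.array([0.90, 0.85, 0.92, 0.88])     # hypothetical per-study sensitivity
n_pos = np.array([120, 80, 150, 60])          # hypothetical diseased counts

y = np.log(sens / (1 - sens))                 # logit transform
var = 1.0 / (sens * (1 - sens) * n_pos)       # approximate within-study variance

w = 1.0 / var                                 # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL estimator

w_re = 1.0 / (var + tau2)                     # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
pooled_sens = 1.0 / (1.0 + np.exp(-y_re))     # back-transform to a proportion
print(f"pooled sensitivity = {pooled_sens:.3f}, tau^2 = {tau2:.4f}")
```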
Affiliation(s)
- Asefa Adimasu Taddese
- Department of Health Informatics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
| | - Binyam Chakilu Tilahun
- Department of Health Informatics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
| | - Tadesse Awoke
- Department of Epidemiology and Biostatistics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
| | - Asmamaw Atnafu
- eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
- Department of Health Systems and Policy, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
| | - Adane Mamuye
- eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
- School of Information Technology and Engineering, Addis Ababa University, Addis Ababa, Ethiopia
| | - Shegaw Anagaw Mengiste
- Department of Business, History and Social Sciences, University of Southeastern Norway, Vestfold, Vestfold, Norway
15
Singh K, Kaur N, Prabhu A. Combating COVID-19 Crisis using Artificial Intelligence (AI) Based Approach: Systematic Review. Curr Top Med Chem 2024; 24:737-753. [PMID: 38318824 DOI: 10.2174/0115680266282179240124072121]
Abstract
BACKGROUND SARS-CoV-2, the unique coronavirus that causes COVID-19, has wreaked damage around the globe, with victims displaying a wide range of difficulties that have encouraged medical professionals to look for innovative technical solutions and therapeutic approaches. Artificial intelligence-based methods have played a significant part in tackling complicated issues, and some institutions have been quick to embrace and tailor these solutions in response to the COVID-19 pandemic's obstacles. Here, in this review article, we have covered a few DL techniques for COVID-19 detection and diagnosis, as well as ML techniques for COVID-19 identification, severity classification, vaccine and drug development, mortality rate prediction, contact tracing, risk assessment, and public distancing. This review illustrates the overall impact of AI/ML tools on tackling and managing the outbreak. PURPOSE The focus of this research was to undertake a thorough evaluation of the literature on the role of Artificial Intelligence (AI) as a complete and efficient solution in the battle against the COVID-19 epidemic in the domains of detection and diagnostics of disease, mortality prediction, and vaccine as well as drug development. METHODS A comprehensive exploration of PubMed, Web of Science, and Science Direct was conducted using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) regulations to find all possibly suitable papers conducted and made publicly available between December 1, 2019, and August 2023. COVID-19, along with AI-specific words, was used to create the query syntax. RESULTS During the period covered by the search strategy, 961 articles were published and released online. Out of these, a total of 135 papers were chosen for additional investigation. Mortality rate prediction, early detection and diagnosis, vaccine as well as drug development, and incorporation of AI for supervising and controlling the COVID-19 pandemic were the four main topics focused entirely on AI applications used to tackle the COVID-19 crisis. Out of the 135, 60 research papers focused on the detection and diagnosis of the COVID-19 pandemic. Next, 19 of the 135 studies applied a machine-learning approach for mortality rate prediction. Another 22 research publications emphasized vaccine as well as drug development. Finally, the remaining studies concentrated on controlling the COVID-19 pandemic by applying AI-based approaches. CONCLUSION We compiled papers from the available COVID-19 literature that used AI-based methodologies to impart insights into various COVID-19 topics in this comprehensive study. Our results suggest crucial characteristics, data types, and COVID-19 tools that can aid in facilitating medical and translational research.
Collapse
Affiliation(s)
- Kavya Singh
- Department of Biotechnology, Banasthali University, Banasthali Vidyapith, Banasthali, 304022, Rajasthan, India
| | - Navjeet Kaur
- Department of Chemistry & Division of Research and Development, Lovely Professional University, Phagwara, 144411, Punjab, India
| | - Ashish Prabhu
- Biotechnology Department, NIT Warangal, Warangal, 506004, Telangana, India
| |
Collapse
|
16
|
Li Y, Chen D, Liu S, Lin J, Wang W, Huang J, Tan L, Liang L, Wang Z, Peng K, Li Q, Jian W, Zhang Y, Peng C, Chen H, Zhang X, Zheng J. Supervised training models with or without manual lesion delineation outperform clinicians in distinguishing pulmonary cryptococcosis from lung adenocarcinoma on chest CT. Mycoses 2024; 67:e13692. [PMID: 38214431 DOI: 10.1111/myc.13692] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 12/16/2023] [Accepted: 12/22/2023] [Indexed: 01/13/2024]
Abstract
BACKGROUND The role of artificial intelligence (AI) in the discrimination between pulmonary cryptococcosis (PC) and lung adenocarcinoma (LA) warrants further research. OBJECTIVES To compare the performances of AI models with clinicians in distinguishing PC from LA on chest CT. METHODS Patients diagnosed with confirmed PC or LA were retrospectively recruited from three tertiary hospitals in Guangzhou. A deep learning framework was employed to develop two models: an undelineated supervised training (UST) model utilising original CT images, and a delineated supervised training (DST) model utilising CT images with manual lesion annotations provided by physicians. A subset of 20 cases was randomly selected from the entire dataset and reviewed by clinicians through a network questionnaire. The sensitivity, specificity and accuracy of the models and the clinicians were calculated. RESULTS A total of 395 PC cases and 249 LA cases were included in the final analysis. The internal validation results for the UST model showed a sensitivity of 85.3%, specificity of 81.0%, accuracy of 83.6% and an area under the curve (AUC) of 0.93. Similarly, the DST model exhibited a sensitivity of 88.2%, specificity of 88.1%, accuracy of 88.2% and an AUC of 0.94. The external validation of the two models yielded AUC values of 0.74 and 0.77, respectively. The average sensitivity, specificity and accuracy of 102 clinicians were determined to be 63.1%, 53.7% and 59.3%, respectively. CONCLUSIONS Both models outperformed the clinicians in distinguishing between PC and LA on chest CT, with the UST model exhibiting comparable performance to the DST model.
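The abstract above reports sensitivity, specificity, accuracy, and AUC for the UST and DST models. As a rough illustration only, the Python sketch below shows how such metrics are typically computed from predicted probabilities; the arrays, threshold, and label coding are synthetic placeholders, not study data.

```python
# Minimal sketch: computing sensitivity, specificity, accuracy, and AUC from
# predicted probabilities. The arrays below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = pulmonary cryptococcosis, 0 = lung adenocarcinoma
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])    # model output probabilities
y_pred = (y_prob >= 0.5).astype(int)                            # threshold chosen for illustration

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_prob)
print(f"Se={sensitivity:.3f} Sp={specificity:.3f} Acc={accuracy:.3f} AUC={auc:.3f}")
```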
Collapse
Affiliation(s)
- Yun Li
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Deyan Chen
- Shenyang Neusoft Intelligent Medical Technology Research Institute Co., Ltd, Shenyang, China
| | - Shuyi Liu
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Junfeng Lin
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Wei Wang
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou, China
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Jinhai Huang
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Lunfang Tan
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Lina Liang
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Zhufeng Wang
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Kang Peng
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Qiasheng Li
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Wenhua Jian
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Youwen Zhang
- Department of Neurology, Gaozhou People's Hospital, Gaozhou, China
| | - Chengbao Peng
- Shenyang Neusoft Intelligent Medical Technology Research Institute Co., Ltd, Shenyang, China
| | - Huai Chen
- Department of Radiology, the Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Xia Zhang
- Shenyang Neusoft Intelligent Medical Technology Research Institute Co., Ltd, Shenyang, China
| | - Jinping Zheng
- National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| |
Collapse
|
17
|
Cha MJ, Solomon JJ, Lee JE, Choi H, Chae KJ, Lee KS, Lynch DA. Chronic Lung Injury after COVID-19 Pneumonia: Clinical, Radiologic, and Histopathologic Perspectives. Radiology 2024; 310:e231643. [PMID: 38193836 PMCID: PMC10831480 DOI: 10.1148/radiol.231643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 09/06/2023] [Accepted: 09/26/2023] [Indexed: 01/10/2024]
Abstract
With the COVID-19 pandemic having lasted more than 3 years, concerns are growing about prolonged symptoms and respiratory complications in COVID-19 survivors, collectively termed post-COVID-19 condition (PCC). Up to 50% of patients have residual symptoms and physiologic impairment, particularly dyspnea and reduced diffusion capacity. Studies have also shown that 24%-54% of patients hospitalized during the 1st year of the pandemic exhibit radiologic abnormalities, such as ground-glass opacity, reticular opacity, bronchial dilatation, and air trapping, when imaged more than 1 year after infection. In patients with persistent respiratory symptoms but normal results at chest CT, dual-energy contrast-enhanced CT, xenon 129 MRI, and low-field-strength MRI were reported to show abnormal ventilation and/or perfusion, suggesting that some lung injury may not be detectable with standard CT. Histologic patterns in post-COVID-19 lung disease include fibrosis, organizing pneumonia, and vascular abnormality, indicating that different pathologic mechanisms may contribute to PCC. Therefore, a comprehensive imaging approach is necessary to evaluate and diagnose patients with persistent post-COVID-19 symptoms. This review will focus on the long-term findings of clinical and radiologic abnormalities and describe histopathologic perspectives. It also addresses advanced imaging techniques and deep learning approaches that can be applied to COVID-19 survivors. This field remains an active area of research, and further follow-up studies are warranted for a better understanding of the chronic stage of the disease and developing a multidisciplinary approach for patient management.
Collapse
Affiliation(s)
- Min Jae Cha
- From the Department of Radiology, Chung-Ang University Hospital, Seoul, Korea (M.J.C., H.C.); Departments of Medicine (J.J.S.) and Radiology (K.J.C., D.A.L.), National Jewish Health, 1400 Jackson St, Denver, CO 80206; Department of Radiology, Chonnam National University Hospital, Gwangju, Republic of Korea (J.E.L.); Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University, Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Republic of Korea (K.J.C.); and Department of Radiology, Sungkyunkwan University School of Medicine and Samsung ChangWon Hospital, Gyeongsangnam, Republic of Korea (K.S.L.)
| | - Joshua J. Solomon
| | - Jong Eun Lee
| | - Hyewon Choi
| | - Kum Ju Chae
| | - Kyung Soo Lee
| | - David A. Lynch
| |
Collapse
|
18
|
Zysman M, Asselineau J, Saut O, Frison E, Oranger M, Maurac A, Charriot J, Achkir R, Regueme S, Klein E, Bommart S, Bourdin A, Dournes G, Casteigt J, Blum A, Ferretti G, Degano B, Thiébaut R, Chabot F, Berger P, Laurent F, Benlala I. Development and external validation of a prediction model for the transition from mild to moderate or severe form of COVID-19. Eur Radiol 2023; 33:9262-9274. [PMID: 37405504 PMCID: PMC10667132 DOI: 10.1007/s00330-023-09759-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 03/22/2023] [Accepted: 04/04/2023] [Indexed: 07/06/2023]
Abstract
OBJECTIVES The COVID-19 pandemic seems to be under control. However, despite the vaccines, 5 to 10% of the patients with mild disease develop moderate to critical forms with potentially lethal evolution. In addition to assessing lung infection spread, chest CT helps to detect complications. Developing a prediction model that identifies patients at risk of worsening from mild COVID-19 by combining simple clinical and biological parameters with qualitative or quantitative CT data would be relevant for organizing optimal patient management. METHODS Four French hospitals were used for model training and internal validation. External validation was conducted in two independent hospitals. We used easy-to-obtain clinical (age, gender, smoking, symptom onset, cardiovascular comorbidities, diabetes, chronic respiratory diseases, immunosuppression) and biological parameters (lymphocytes, CRP) with qualitative or quantitative data (including radiomics) from the initial CT in mild COVID-19 patients. RESULTS Qualitative CT analysis combined with clinical and biological parameters can predict which patients with an initial mild presentation would develop a moderate to critical form of COVID-19, with a c-index of 0.70 (95% CI 0.63; 0.77). CT scan quantification improved the performance of the prediction up to 0.73 (95% CI 0.67; 0.79) and radiomics up to 0.77 (95% CI 0.71; 0.83). Results were similar in both validation cohorts, considering CT scans with or without injection. CONCLUSION Adding CT scan quantification or radiomics to simple clinical and biological parameters can better predict which patients with an initial mild COVID-19 would worsen than qualitative analyses alone. This tool could help ensure the fair use of healthcare resources and screen patients for potential new drugs to prevent an unfavorable evolution of COVID-19. CLINICAL TRIAL REGISTRATION NCT04481620. CLINICAL RELEVANCE STATEMENT CT scan quantification or radiomics analysis is superior to qualitative analysis, when used with simple clinical and biological parameters, to determine which patients with an initial mild presentation of COVID-19 would worsen to a moderate to critical form. KEY POINTS • Qualitative CT scan analyses with simple clinical and biological parameters can predict which patients with an initial mild COVID-19 and respiratory symptoms would worsen, with a c-index of 0.70. • Adding CT scan quantification improves the performance of the clinical prediction model to an AUC of 0.73. • Radiomics analyses slightly improve the performance of the model to a c-index of 0.77.
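As a hedged illustration of the modelling idea described above, the sketch below combines simple clinical/biological variables with a quantitative CT feature in a logistic model and reports the c-index, which for a binary outcome equals the area under the ROC curve. The column names, simulated data, and model choice are assumptions for demonstration only, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): combining clinical/biological
# variables with a quantitative CT feature and reporting a c-index.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "crp": rng.gamma(2.0, 20.0, n),
    "lymphocytes": rng.normal(1.2, 0.4, n),
    "ct_lesion_volume_pct": rng.uniform(0, 40, n),   # quantitative CT feature (assumed)
})
# synthetic outcome: worsening probability increases with CRP and lesion volume
logit = -4 + 0.03 * df["crp"] + 0.08 * df["ct_lesion_volume_pct"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
c_index = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])  # c-index = ROC AUC here
print(f"c-index on held-out data: {c_index:.2f}")
```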
Collapse
Affiliation(s)
- Maéva Zysman
- CHU Bordeaux, 33600, Pessac, France.
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France.
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France.
| | | | - Olivier Saut
- "Institut de Mathématiques de Bordeaux" (IMB), UMR5251, CNRS, University of Bordeaux, 351 Cours Libération, 33400, Talence, France
- MONC Team & SISTM Team, INRIA Bordeaux Sud-Ouest, 200 Av Vieille Tour, 33400, Talence, France
| | | | - Mathilde Oranger
- Pôle Des Spécialités Médicales/Département de Pneumologie, Université de Lorraine, Centre Hospitalier Régional Universitaire (CHRU) Nancy, Service de Radiologie Et d'Imagerie, Nancy, France
- Faculté de Médecine de Nancy, Université de Lorraine, Institut National de La Santé Et de La Recherche Médicale (INSERM) Unité Médicale de Recherche (UMR), S 1116, Vandœuvre-Lès-Nancy, France
| | - Arnaud Maurac
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
| | - Jeremy Charriot
- Department of Respiratory Diseases, Arnaud de Villeneuve Hospital, Montpellier University Hospital, CEDEX 5, 34295, Montpellier, France
- PhyMedExp, University of Montpellier, INSERM U1046, CEDEX 5, 34295, Montpellier, France
| | | | | | | | - Sébastien Bommart
- Department of Respiratory Diseases, Arnaud de Villeneuve Hospital, Montpellier University Hospital, CEDEX 5, 34295, Montpellier, France
- PhyMedExp, University of Montpellier, INSERM U1046, CEDEX 5, 34295, Montpellier, France
| | - Arnaud Bourdin
- Department of Respiratory Diseases, Arnaud de Villeneuve Hospital, Montpellier University Hospital, CEDEX 5, 34295, Montpellier, France
- PhyMedExp, University of Montpellier, INSERM U1046, CEDEX 5, 34295, Montpellier, France
| | - Gael Dournes
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
| | | | - Alain Blum
- Pôle Des Spécialités Médicales/Département de Pneumologie, Université de Lorraine, Centre Hospitalier Régional Universitaire (CHRU) Nancy, Service de Radiologie Et d'Imagerie, Nancy, France
| | - Gilbert Ferretti
- France Service de Radiologie Diagnostique Et Interventionnelle, Université Grenoble Alpes, CHU Grenoble-Alpes, Grenoble, France
| | - Bruno Degano
- France Service de Radiologie Diagnostique Et Interventionnelle, Université Grenoble Alpes, CHU Grenoble-Alpes, Grenoble, France
| | - Rodolphe Thiébaut
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
- MONC Team & SISTM Team, INRIA Bordeaux Sud-Ouest, 200 Av Vieille Tour, 33400, Talence, France
| | - Francois Chabot
- Pôle Des Spécialités Médicales/Département de Pneumologie, Université de Lorraine, Centre Hospitalier Régional Universitaire (CHRU) Nancy, Service de Radiologie Et d'Imagerie, Nancy, France
- Faculté de Médecine de Nancy, Université de Lorraine, Institut National de La Santé Et de La Recherche Médicale (INSERM) Unité Médicale de Recherche (UMR), S 1116, Vandœuvre-Lès-Nancy, France
| | - Patrick Berger
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
| | - Francois Laurent
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
| | - Ilyes Benlala
- CHU Bordeaux, 33600, Pessac, France
- Univ. Bordeaux, Centre de Recherche Cardio-Thoracique de Bordeaux, 33600, Bordeaux, France
- Centre de Recherche Cardio-Thoracique de Bordeaux (U1045), Centre d'Investigation Clinique, INSERM, Bordeaux Population Health (U1219), (CIC-P 1401), 33600, Pessac, France
| |
Collapse
|
19
|
Kawata N, Iwao Y, Matsuura Y, Suzuki M, Ema R, Sekiguchi Y, Sato H, Nishiyama A, Nagayoshi M, Takiguchi Y, Suzuki T, Haneishi H. Prediction of oxygen supplementation by a deep-learning model integrating clinical parameters and chest CT images in COVID-19. Jpn J Radiol 2023; 41:1359-1372. [PMID: 37440160 PMCID: PMC10687147 DOI: 10.1007/s11604-023-01466-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Accepted: 06/28/2023] [Indexed: 07/14/2023]
Abstract
PURPOSE As of March 2023, the number of patients with COVID-19 worldwide is declining, but the early diagnosis of patients requiring inpatient treatment and the appropriate allocation of limited healthcare resources remain unresolved issues. In this study, we constructed a deep-learning (DL) model to predict the need for oxygen supplementation using clinical information and chest CT images of patients with COVID-19. MATERIALS AND METHODS We retrospectively enrolled 738 patients with COVID-19 for whom clinical information (patient background, clinical symptoms, and blood test findings) was available and chest CT imaging was performed. The initial dataset was divided into 591 training and 147 evaluation cases. We developed a DL model that predicted oxygen supplementation by integrating clinical information and CT images. The model was validated at two other facilities (n = 191 and n = 230). In addition, the importance of clinical information for prediction was assessed. RESULTS The proposed DL model showed an area under the curve (AUC) of 89.9% for predicting oxygen supplementation. Validation at the two other facilities showed an AUC > 80%. With respect to interpretation of the model, dyspnea and the lactate dehydrogenase level contributed more strongly than the other variables. CONCLUSIONS The DL model integrating clinical information and chest CT images had high predictive accuracy. DL-based prediction of disease severity might be helpful in the clinical management of patients with COVID-19.
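The abstract describes a network that integrates clinical information with CT images. The sketch below shows one common way to fuse an imaging branch with a tabular clinical branch; the use of a small 3D CNN, the layer sizes, and the input shapes are assumptions for illustration, not the authors' published architecture.

```python
# Hedged sketch of image + clinical-variable fusion for a binary prediction
# (need for oxygen supplementation); architecture details are assumed.
import torch
import torch.nn as nn

class ImageClinicalFusionNet(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        # small 3D CNN branch for the CT volume (1 x D x H x W)
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),           # -> (batch, 8)
        )
        # MLP branch for clinical variables (background, symptoms, blood tests)
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        # fused head predicting the probability of needing oxygen supplementation
        self.head = nn.Linear(8 + 16, 1)

    def forward(self, ct_volume, clinical_vars):
        z = torch.cat([self.cnn(ct_volume), self.clinical(clinical_vars)], dim=1)
        return torch.sigmoid(self.head(z)).squeeze(1)

model = ImageClinicalFusionNet(n_clinical=10)
prob = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 10))
print(prob.shape)  # torch.Size([2])
```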
Collapse
Affiliation(s)
- Naoko Kawata
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan.
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan.
- Medical Mycology Research Center (MMRC), Chiba University, Chiba, 260-8673, Japan.
| | - Yuma Iwao
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba-shi, Chiba, 263-8555, Japan
| | - Yukiko Matsuura
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Masaki Suzuki
- Department of Respirology, Kashiwa Kousei General Hospital, 617 Shikoda, Kashiwa-shi, Chiba, 277-8551, Japan
| | - Ryogo Ema
- Department of Respirology, Eastern Chiba Medical Center, 3-6-2, Okayamadai, Togane-shi, Chiba, 283-8686, Japan
| | - Yuki Sekiguchi
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Hirotaka Sato
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
- Department of Radiology, Soka Municipal Hospital, 2-21-1, Souka, Souka-shi, Saitama, 340-8560, Japan
| | - Akira Nishiyama
- Department of Radiology, Chiba University Hospital, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
| | - Masaru Nagayoshi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Yasuo Takiguchi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Takuji Suzuki
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
| | - Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
| |
Collapse
|
20
|
Wang F, Li X, Wen R, Luo H, Liu D, Qi S, Jing Y, Wang P, Deng G, Huang C, Du T, Wang L, Liang H, Wang J, Liu C. Pneumonia-Plus: a deep learning model for the classification of bacterial, fungal, and viral pneumonia based on CT tomography. Eur Radiol 2023; 33:8869-8878. [PMID: 37389609 DOI: 10.1007/s00330-023-09833-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 03/17/2023] [Accepted: 03/30/2023] [Indexed: 07/01/2023]
Abstract
OBJECTIVES This study aims to develop a deep learning algorithm, Pneumonia-Plus, based on computed tomography (CT) images for accurate classification of bacterial, fungal, and viral pneumonia. METHODS A total of 2763 participants with chest CT images and definite pathogen diagnosis were included to train and validate an algorithm. Pneumonia-Plus was prospectively tested on a nonoverlapping dataset of 173 patients. The algorithm's performance in classifying three types of pneumonia was compared to that of three radiologists using the McNemar test to verify its clinical usefulness. RESULTS Among the 173 patients, area under the curve (AUC) values for viral, fungal, and bacterial pneumonia were 0.816, 0.715, and 0.934, respectively. Viral pneumonia was accurately classified with sensitivity, specificity, and accuracy of 0.847, 0.919, and 0.873. The three radiologists also showed good consistency with Pneumonia-Plus. The AUC values of bacterial, fungal, and viral pneumonia were 0.480, 0.541, and 0.580 (radiologist 1: 3-year experience); 0.637, 0.693, and 0.730 (radiologist 2: 7-year experience); and 0.734, 0.757, and 0.847 (radiologist 3: 12-year experience), respectively. The McNemar test results for sensitivity showed that the diagnostic performance of the algorithm was significantly better than that of radiologist 1 and radiologist 2 (p < 0.05) in differentiating bacterial and viral pneumonia. Radiologist 3 had a higher diagnostic accuracy than the algorithm. CONCLUSIONS The Pneumonia-Plus algorithm differentiates between bacterial, fungal, and viral pneumonia, reaching the level of an attending radiologist and reducing the risk of misdiagnosis. Pneumonia-Plus is important for appropriate treatment, helps avoid the use of unnecessary antibiotics, and provides timely information to guide clinical decision-making and improve patient outcomes. CLINICAL RELEVANCE STATEMENT The Pneumonia-Plus algorithm could assist in the accurate classification of pneumonia based on CT images, which has great clinical value in avoiding unnecessary antibiotics and providing timely information to guide clinical decision-making and improve patient outcomes. KEY POINTS • The Pneumonia-Plus algorithm trained from data collected from multiple centers can accurately identify bacterial, fungal, and viral pneumonia. • The Pneumonia-Plus algorithm was found to have better sensitivity in classifying viral and bacterial pneumonia in comparison to radiologist 1 (5-year experience) and radiologist 2 (7-year experience). • The Pneumonia-Plus algorithm differentiates between bacterial, fungal, and viral pneumonia, reaching the level of an attending radiologist.
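The comparison against radiologists relies on the McNemar test for paired diagnostic calls. The sketch below shows a generic McNemar comparison on a made-up 2x2 table of paired correct/incorrect decisions; the counts are illustrative and do not come from the study.

```python
# Illustrative McNemar comparison of an algorithm and a radiologist on the same cases.
from statsmodels.stats.contingency_tables import mcnemar

# rows: algorithm correct / incorrect; columns: radiologist correct / incorrect
table = [[90, 40],   # both correct | only algorithm correct
         [15, 28]]   # only radiologist correct | both incorrect
result = mcnemar(table, exact=True)
print(f"McNemar statistic={result.statistic}, p-value={result.pvalue:.4f}")
```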
Collapse
Affiliation(s)
- Fang Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
| | - Xiaoming Li
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
| | - Ru Wen
- Medical College, Guizhou University, Guiyang, Guizhou Province, 550000, China
| | - Hu Luo
- No 1. Intensive Care Unit, Huoshenshan Hospital, Wuhan, China
- Department of Respiratory and Critical Care Medicine, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
| | - Dong Liu
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
| | - Shuai Qi
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
| | - Yang Jing
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
| | - Peng Wang
- Medical Big Data and Artificial Intelligence Center, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
| | - Gang Deng
- Department of Radiology, Maternal and Child Health Hospital of Hubei Province, Guanggu District, Wuhan, China
| | - Cong Huang
- Department of Radiology, The 926 Hospital of PLA, Kaiyuan, China
| | - Tingting Du
- Department of Radiology, Chongqing Traditional Chinese Medicine Hospital, Chongqing, China
| | - Limei Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
| | - Hongqin Liang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China.
| | - Jian Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China.
| | - Chen Liu
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China.
| |
Collapse
|
21
|
Nur-A-Alam M, Nasir MK, Ahsan M, Based MA, Haider J, Kowalski M. Ensemble classification of integrated CT scan datasets in detecting COVID-19 using feature fusion from contourlet transform and CNN. Sci Rep 2023; 13:20063. [PMID: 37973820 PMCID: PMC10654719 DOI: 10.1038/s41598-023-47183-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 11/09/2023] [Indexed: 11/19/2023] Open
Abstract
The COVID-19 disease caused by coronavirus is constantly changing due to the emergence of different variants, and thousands of people are dying every day worldwide. Early detection of this new form of pulmonary disease can reduce the mortality rate. In this paper, an automated method based on machine learning (ML) and deep learning (DL) has been developed to detect COVID-19 using computed tomography (CT) scan images extracted from three publicly available datasets (a total of 11,407 images: 7397 COVID-19 images and 4010 normal images). An unsupervised clustering approach, a modified region-based clustering technique, has been proposed for segmenting COVID-19 CT scan images. Furthermore, the contourlet transform and a convolutional neural network (CNN) have been employed to extract features individually from the segmented CT scan images and to fuse them into one feature vector. A binary differential evolution (BDE) approach has been employed as a feature optimization technique to obtain comprehensible features from the fused feature vector. Finally, an ML/DL-based ensemble classifier using the bagging technique has been employed to detect COVID-19 from the CT images. Fivefold and generalization cross-validation techniques have been used for validation. Classification experiments have also been conducted with several pre-trained models (AlexNet, ResNet50, GoogleNet, VGG16, VGG19), and the ensemble classifier with fused features provided state-of-the-art performance with an accuracy of 99.98%.
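The following sketch illustrates only the fusion-plus-bagging idea from the abstract: two feature blocks extracted from the same image are concatenated and classified with a bagging ensemble under fivefold cross-validation. The random placeholders stand in for contourlet and CNN features, and the contourlet transform, CNN extractor, and binary differential evolution step are not shown; everything here is assumed for demonstration.

```python
# Hedged sketch: feature fusion + bagging ensemble with fivefold cross-validation.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_images = 200
contourlet_features = rng.normal(size=(n_images, 32))   # placeholder block 1
cnn_features = rng.normal(size=(n_images, 64))          # placeholder block 2
X = np.hstack([contourlet_features, cnn_features])      # feature fusion
y = rng.integers(0, 2, size=n_images)                   # 1 = COVID-19, 0 = normal (synthetic)

ensemble = BaggingClassifier(n_estimators=50, random_state=0)  # bagged decision trees
scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"fivefold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```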
Collapse
Affiliation(s)
- Md Nur-A-Alam
- Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
| | - Mostofa Kamal Nasir
- Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
| | - Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, York, YO10 5GH, UK
| | - Md Abdul Based
- Department of Computer Science & Engineering, Dhaka International University, Dhaka, 1205, Bangladesh
| | - Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester, M1 5GD, UK
| | - Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, Warsaw, Poland.
| |
Collapse
|
22
|
Murphy K, Muhairwe J, Schalekamp S, van Ginneken B, Ayakaka I, Mashaete K, Katende B, van Heerden A, Bosman S, Madonsela T, Gonzalez Fernandez L, Signorell A, Bresser M, Reither K, Glass TR. COVID-19 screening in low resource settings using artificial intelligence for chest radiographs and point-of-care blood tests. Sci Rep 2023; 13:19692. [PMID: 37952026 PMCID: PMC10640556 DOI: 10.1038/s41598-023-46461-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 11/01/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) systems for detection of COVID-19 using chest X-Ray (CXR) imaging and point-of-care blood tests were applied to data from four low resource African settings. The performance of these systems to detect COVID-19 using various input data was analysed and compared with antigen-based rapid diagnostic tests. Participants were tested using the gold standard of RT-PCR test (nasopharyngeal swab) to determine whether they were infected with SARS-CoV-2. A total of 3737 (260 RT-PCR positive) participants were included. In our cohort, AI for CXR images was a poor predictor of COVID-19 (AUC = 0.60), since the majority of positive cases had mild symptoms and no visible pneumonia in the lungs. AI systems using differential white blood cell counts (WBC), or a combination of WBC and C-Reactive Protein (CRP) both achieved an AUC of 0.74 with a suggested optimal cut-off point at 83% sensitivity and 63% specificity. The antigen-RDT tests in this trial obtained 65% sensitivity at 98% specificity. This study is the first to validate AI tools for COVID-19 detection in an African setting. It demonstrates that screening for COVID-19 using AI with point-of-care blood tests is feasible and can operate at a higher sensitivity level than antigen testing.
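The abstract quotes a suggested operating point (83% sensitivity at 63% specificity) on an ROC curve with AUC 0.74. The sketch below shows how such a cut-off can be read off an ROC curve, here by maximising Youden's J on simulated scores; the data are synthetic and do not reproduce the study's results.

```python
# Sketch: selecting an operating point on an ROC curve via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
scores = y_true * rng.normal(0.6, 0.3, 500) + (1 - y_true) * rng.normal(0.4, 0.3, 500)

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                                  # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"cut-off={thresholds[best]:.2f}, sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```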
Collapse
Affiliation(s)
- Keelin Murphy
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands.
| | | | - Steven Schalekamp
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
| | - Bram van Ginneken
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
| | - Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
| | | | | | - Alastair van Heerden
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
| | - Shannon Bosman
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
| | - Thandanani Madonsela
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
| | - Lucia Gonzalez Fernandez
- Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland
- SolidarMed, Partnerships for Health, Lucerne, Switzerland
| | - Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
| | - Moniek Bresser
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
| | - Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
| | - Tracy R Glass
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
| |
Collapse
|
23
|
Long B, Zhang H, Zhang H, Chen W, Sun Y, Tang R, Lin Y, Fu Q, Yang X, Cui L, Wang K. Deep learning models of ultrasonography significantly improved the differential diagnosis performance for superficial soft-tissue masses: a retrospective multicenter study. BMC Med 2023; 21:405. [PMID: 37880716 PMCID: PMC10601110 DOI: 10.1186/s12916-023-03099-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Accepted: 09/29/2023] [Indexed: 10/27/2023] Open
Abstract
BACKGROUND Most superficial soft-tissue masses are benign tumors, and very few are malignant. However, persistent growth of both benign and malignant tumors can be painful and even life-threatening. It is necessary to improve the differential diagnosis performance for superficial soft-tissue masses by using deep learning models. This study aimed to propose a new ultrasonic deep learning model (DLM) system for the differential diagnosis of superficial soft-tissue masses. METHODS Between January 2015 and December 2022, data for 1615 patients with superficial soft-tissue masses were retrospectively collected. Two experienced radiologists (radiologists 1 and 2, with 8 and 30 years' experience, respectively) analyzed the ultrasound images of each superficial soft-tissue mass and made a diagnosis of malignant mass or one of the five most common benign masses. After referring to the DLM results, they re-evaluated their diagnoses. The diagnostic performance and concerns of the radiologists were analyzed before and after referring to the DLM results. RESULTS In the validation cohort, DLM-1 was trained to distinguish between benign and malignant masses, with an AUC of 0.992 (95% CI: 0.980, 1.0) and an ACC of 0.987 (95% CI: 0.968, 1.0). DLM-2 was trained to classify the five most common benign masses (lipomyoma, hemangioma, neurinoma, epidermal cyst, and calcifying epithelioma) with AUCs of 0.986, 0.993, 0.944, 0.973, and 0.903, respectively. In addition, with DLM-assisted diagnosis, the radiologists greatly improved their accuracy in differentiating between benign and malignant tumors. CONCLUSIONS The proposed DLM system has high clinical application value in the differential diagnosis of superficial soft-tissue masses.
Collapse
Affiliation(s)
- Bin Long
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Department of Diagnostic Ultrasound, Peking University Third Hospital, Beijing, China
| | - Haoyan Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Han Zhang
- Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Wen Chen
- Department of Diagnostic Ultrasound, Peking University Third Hospital, Beijing, China
| | - Yang Sun
- Department of Diagnostic Ultrasound, Peking University Third Hospital, Beijing, China
| | - Rui Tang
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Department of Diagnostic Ultrasound, Peking University Third Hospital, Beijing, China
| | - Yuxuan Lin
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Qiang Fu
- Department of Ultrasound, Beijing Civil Aviation General Hospital, Beijing, China
| | - Xin Yang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Ligang Cui
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China.
- Department of Diagnostic Ultrasound, Peking University Third Hospital, Beijing, China.
| | - Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China.
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
| |
Collapse
|
24
|
Liang H, Wang M, Wen Y, Du F, Jiang L, Geng X, Tang L, Yan H. Predicting acute pancreatitis severity with enhanced computed tomography scans using convolutional neural networks. Sci Rep 2023; 13:17514. [PMID: 37845380 PMCID: PMC10579320 DOI: 10.1038/s41598-023-44828-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2023] [Accepted: 10/12/2023] [Indexed: 10/18/2023] Open
Abstract
This study aimed to evaluate acute pancreatitis (AP) severity using convolutional neural network (CNN) models with enhanced computed tomography (CT) scans. Three-dimensional DenseNet CNN models were developed and trained using the enhanced CT scans labeled with two severity assessment methods: the computed tomography severity index (CTSI) and the Atlanta classification. Each labeling method was used independently for model training and validation. Model performance was evaluated using confusion matrices, areas under the receiver operating characteristic curve (AUC-ROC), accuracy, precision, recall, F1 score, and the respective macro-average metrics. A total of 1,798 enhanced CT scans that met the inclusion criteria were included in this study. The dataset was randomly divided into a training dataset (n = 1618) and a test dataset (n = 180) with a ratio of 9:1. The DenseNet model demonstrated promising predictions for both CTSI- and Atlanta classification-labeled CT scans, with accuracy greater than 0.7 and AUC-ROC greater than 0.8. Specifically, when trained with CT scans labeled using the CTSI, the DenseNet model achieved good performance, with a macro-average F1 score of 0.835 and a macro-average AUC-ROC of 0.980. The findings of this study affirm the feasibility of employing CNN models to predict the severity of AP using enhanced CT scans.
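The macro-averaged evaluation mentioned above can be illustrated with a short sketch; the labels and probabilities below are synthetic, and the choice of three severity classes is an assumption for demonstration rather than the study's class definition.

```python
# Minimal sketch of macro-averaged F1 and AUC-ROC for a multi-class severity task.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(7)
n, n_classes = 180, 3                                   # class count assumed for illustration
y_true = rng.integers(0, n_classes, size=n)
probs = rng.dirichlet(np.ones(n_classes), size=n)       # stand-in for softmax outputs
y_pred = probs.argmax(axis=1)

macro_f1 = f1_score(y_true, y_pred, average="macro")
macro_auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
print(f"macro F1={macro_f1:.3f}, macro AUC-ROC={macro_auc:.3f}")
```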
Collapse
Affiliation(s)
- Hongyin Liang
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
| | - Meng Wang
- Department of Traditional Chinese Medicine, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
| | - Yi Wen
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
| | - Feizhou Du
- Department of Radiology, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
| | - Li Jiang
- Department of Cardiac Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
| | - Xuelong Geng
- Department of Radiology, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
| | - Lijun Tang
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
| | - Hongtao Yan
- Department of Liver Transplantation and Hepato-biliary-pancreatic Surgery, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610016, China.
| |
Collapse
|
25
|
Mohammedain SA, Badran S, Elzouki AY, Salim H, Chalaby A, Siddiqui MYA, Hussein YY, Rahim HA, Thalib L, Alam MF, Al-Badriyeh D, Al-Maadeed S, Doi SAR. Validation of a risk prediction model for COVID-19: the PERIL prospective cohort study. Future Virol 2023:10.2217/fvl-2023-0036. [PMID: 37970094 PMCID: PMC10630949 DOI: 10.2217/fvl-2023-0036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 10/03/2023] [Indexed: 11/17/2023]
Abstract
Aim: This study aims to perform an external validation of a recently developed prognostic model for early prediction of the risk of progression to severe COVID-19. Patients & methods/materials: Patients were recruited at their initial diagnosis at two facilities within Hamad Medical Corporation in Qatar. 356 adults were included for analysis. Predictors for progression of COVID-19 were all measured at disease onset and first contact with the health system. Results: The C statistic was 83% (95% CI: 78%-87%) and the calibration plot showed that the model was well-calibrated. Conclusion: The published prognostic model for the progression of COVID-19 infection showed satisfactory discrimination and calibration, and the model is easy to apply in clinical practice.
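External validation of this kind typically pairs a C statistic (for a binary outcome, the ROC AUC) with a calibration curve comparing predicted and observed risk. The sketch below shows that check on synthetic predictions; nothing in it reproduces the study's model or data.

```python
# Hedged sketch of an external-validation check: C statistic plus calibration curve.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)
p_pred = rng.uniform(0.01, 0.99, 356)                 # predicted risk of severe COVID-19 (synthetic)
y_obs = rng.binomial(1, p_pred)                       # synthetic observed outcomes

c_statistic = roc_auc_score(y_obs, p_pred)
frac_observed, mean_predicted = calibration_curve(y_obs, p_pred, n_bins=10)
print(f"C statistic: {c_statistic:.2f}")
for mp, fo in zip(mean_predicted, frac_observed):
    print(f"mean predicted {mp:.2f} -> observed {fo:.2f}")
```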
Collapse
Affiliation(s)
- Shahd A Mohammedain
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar
| | - Saif Badran
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar
- Department of Plastic Surgery, Hamad Medical Corporation, Doha, Qatar
| | - AbdelNaser Y Elzouki
- Department of Internal Medicine Hamad General Hospital Hamad Medical Corporation, Doha, Qatar
| | - Halla Salim
- Department of Internal Medicine Hamad General Hospital Hamad Medical Corporation, Doha, Qatar
| | - Ayesha Chalaby
- Department of Internal Medicine Hamad General Hospital Hamad Medical Corporation, Doha, Qatar
| | - MYA Siddiqui
- Department of Internal Medicine Hamad General Hospital Hamad Medical Corporation, Doha, Qatar
| | - Yehia Y Hussein
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar
| | - Hanan Abdul Rahim
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
| | - Lukman Thalib
- Department of Biostatistics, Faculty of Medicine, Istanbul Aydin University, Istanbul, Turkey
| | - Mohammed Fasihul Alam
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
| | | | - Sumaya Al-Maadeed
- Department of Computer Science, College of Engineering, Qatar University, Doha, Qatar
| | - Suhail AR Doi
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar
| |
Collapse
|
26
|
Van Laethem J, Pierreux J, Wuyts SC, De Geyter D, Allard SD, Dauby N. Using risk factors and markers to predict bacterial respiratory co-/superinfections in COVID-19 patients: is the antibiotic steward's toolbox full or empty? Acta Clin Belg 2023; 78:418-430. [PMID: 36724448 DOI: 10.1080/17843286.2023.2167328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Accepted: 01/07/2023] [Indexed: 02/03/2023]
Abstract
BACKGROUND Adequate diagnosis of bacterial respiratory tract co-/superinfection (bRTI) in coronavirus disease (COVID-19) patients is challenging, as there is insufficient knowledge about the role of risk factors and (para)clinical parameters in the identification of bacterial co-/superinfection in the COVID-19 setting. Empirical antibiotic therapy is mainly based on COVID-19 severity and expert opinion, rather than on scientific evidence generated since the start of the pandemic. PURPOSE We report the best available evidence regarding the predictive value of risk factors and (para)clinical markers in the diagnosis of bRTI in COVID-19 patients. METHODS A multidisciplinary team identified different potential risk factors and (para)clinical predictors of bRTI in COVID-19 and formulated one or two research questions per topic. After a thorough literature search, research gaps were identified, and suggestions concerning further research were formulated. The quality of this narrative review was ensured by following the Scale for the Assessment of Narrative Review Articles. RESULTS Taking into account the scarcity of scientific evidence for markers and risk factors of bRTI in COVID-19 patients, to date, COVID-19 severity is the only parameter which can be associated with higher risk of developing bRTI. CONCLUSIONS Evidence on the usefulness of risk factors and (para)clinical factors as predictors of bRTI in COVID-19 patients is scarce. Robust studies are needed to optimise antibiotic prescribing and stewardship activities in the context of COVID-19.
Collapse
Affiliation(s)
- Johan Van Laethem
- Department of Internal Medicine and Infectious Diseases, Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
| | - Jan Pierreux
- Department of Internal Medicine and Infectious Diseases, Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
| | - Stephanie Cm Wuyts
- Universitair Ziekenhuis Brussel (UZ Brussel), Hospital Pharmacy, Brussels, Belgium
- Research group Clinical Pharmacology and Pharmacotherapy, Vrije Universiteit Brussel (VUB), Brussels, Belgium
| | - Deborah De Geyter
- Microbiology and Infection Control Department, Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
| | - Sabine D Allard
- Department of Internal Medicine and Infectious Diseases, Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Brussels, Belgium
| | - Nicolas Dauby
- Institute for Medical Immunology, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Centre for Environmental Health and Occupational Health, School of Public Health, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Department of Infectious Diseases, CHU Saint-Pierre - Université Libre de Bruxelles (ULB), Brussels, Belgium
| |
Collapse
|
27
|
Tan M, Xia J, Luo H, Meng G, Zhu Z. Applying the digital data and the bioinformatics tools in SARS-CoV-2 research. Comput Struct Biotechnol J 2023; 21:4697-4705. [PMID: 37841328 PMCID: PMC10568291 DOI: 10.1016/j.csbj.2023.09.044] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 09/29/2023] [Accepted: 09/29/2023] [Indexed: 10/17/2023] Open
Abstract
Bioinformatics has been playing a crucial role in the scientific progress to fight against the pandemic of the coronavirus disease 2019 (COVID-19) caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Advances in novel algorithms, big data technology, artificial intelligence, and deep learning have assisted the development of novel bioinformatics tools to analyze the daily increasing SARS-CoV-2 data over the past years. These tools were applied in genomic analyses, evolutionary tracking, epidemiological analyses, protein structure interpretation, studies of virus-host interaction, and clinical performance. To promote in-silico analysis in the future, we conducted a review that summarizes the databases, web services, and software applied in SARS-CoV-2 research. These digital resources may also contribute to research on other coronaviruses and non-coronavirus viruses.
Collapse
Affiliation(s)
- Meng Tan
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Jiaxin Xia
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Haitao Luo
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Geng Meng
- College of Veterinary Medicine, China Agricultural University, Beijing, China
| | - Zhenglin Zhu
- School of Life Sciences, Chongqing University, Chongqing, China
| |
Collapse
|
28
|
Mann M, Badoni RP, Soni H, Al-Shehri M, Kaushik AC, Wei DQ. Utilization of Deep Convolutional Neural Networks for Accurate Chest X-Ray Diagnosis and Disease Detection. Interdiscip Sci 2023; 15:374-392. [PMID: 36966476 PMCID: PMC10040177 DOI: 10.1007/s12539-023-00562-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 03/06/2023] [Accepted: 03/06/2023] [Indexed: 03/27/2023]
Abstract
Chest radiography is a widely used diagnostic imaging procedure in medical practice, which involves prompt reporting of future imaging tests and diagnosis of diseases in the images. In this study, a critical phase in the radiology workflow is automated using three convolutional neural network (CNN) models, viz. DenseNet121, ResNet50, and EfficientNetB1, for fast and accurate detection of 14 class labels of thoracic pathology diseases based on chest radiography. These models were evaluated on AUC scores for normal versus abnormal chest radiographs using the ChestX-ray14 dataset of 112,120 chest radiographs containing various class labels of thoracic pathology diseases to predict the probability of individual diseases and warn clinicians of potential suspicious findings. With DenseNet121, the AUROC scores for hernia and emphysema were predicted as 0.9450 and 0.9120, respectively. Compared with the class-wise scores obtained on the dataset, DenseNet121 outperformed the other two models. This article also aims to develop an automated server to capture the results for the fourteen thoracic pathology diseases using a tensor processing unit (TPU). The results of this study demonstrate that this dataset can be used to train models with high diagnostic accuracy for predicting the likelihood of 14 different diseases in abnormal chest radiographs, enabling accurate and efficient discrimination between different types of chest radiographs. This has the potential to bring benefits to various stakeholders and improve patient care.
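The 14-label setup described above is a multi-label problem: each radiograph can carry several diseases, so each label gets an independent sigmoid output and its own AUROC. The sketch below shows that pattern with a DenseNet121 backbone; the input size, batch, and training details are illustrative assumptions, not the authors' configuration.

```python
# Sketch (not the authors' code): DenseNet121 with 14 sigmoid outputs,
# binary cross-entropy training, and per-class AUROC evaluation.
import torch
import torch.nn as nn
from torchvision.models import densenet121
from sklearn.metrics import roc_auc_score

model = densenet121()                                              # no pretrained weights needed for the sketch
model.classifier = nn.Linear(model.classifier.in_features, 14)     # 14 thoracic pathology labels

images = torch.randn(4, 3, 224, 224)                               # placeholder chest radiographs
targets = torch.randint(0, 2, (4, 14)).float()                     # multi-hot disease labels
logits = model(images)
loss = nn.BCEWithLogitsLoss()(logits, targets)                     # independent sigmoid per disease
print(f"training loss on the toy batch: {loss.item():.3f}")

# per-class AUROC on the toy batch; in practice this is computed on a held-out test set
probs = torch.sigmoid(logits).detach().numpy()
for c in range(14):
    col = targets[:, c].numpy()
    if col.min() != col.max():                                     # AUROC needs both classes present
        print(f"label {c}: AUROC={roc_auc_score(col, probs[:, c]):.2f}")
```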
Collapse
Affiliation(s)
- Mukesh Mann
- Department of Computer Science and Engineering, Indian Institute of Information Technology, Sonepat, Haryana 131029 India
| | - Rakesh P. Badoni
- Department of Mathematics, École Centrale School of Engineering, Mahindra University, Hyderabad, 500043 India
| | - Harsh Soni
- Department of Information Technology, Indian Institute of Information Technology, Sonepat, Haryana, 131029 India
| | - Mohammed Al-Shehri
- Department of Biology, Faculty of Science, King Khalid University, Abha, Saudi Arabia
| | - Aman Chandra Kaushik
- State Key Laboratory of Microbial Metabolism, and School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 200030 Shanghai, China
- School of Biomedical Informatics, University of Texas Health Science Centre at Houston, Houston, TX USA
| | - Dong-Qing Wei
- State Key Laboratory of Microbial Metabolism, and School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 200030 Shanghai, China
29
Kim B, Lee GY, Park SH. Attention fusion network with self-supervised learning for staging of osteonecrosis of the femoral head (ONFH) using multiple MR protocols. Med Phys 2023; 50:5528-5540. [PMID: 36945733 DOI: 10.1002/mp.16380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 11/21/2022] [Accepted: 02/20/2023] [Indexed: 03/23/2023] Open
Abstract
BACKGROUND Osteonecrosis of the femoral head (ONFH) is characterized by bone cell death in the hip joint, involving severe pain in the groin. The staging of ONFH is commonly based on magnetic resonance imaging (MRI) and computed tomography (CT), which are important for establishing effective treatment plans. There have been some attempts to automate ONFH staging using deep learning, but few of them used only MR images. PURPOSE To propose a deep learning model for MR-only ONFH staging, which can reduce the additional cost and radiation exposure associated with the acquisition of CT images. METHODS We integrated information from the MR images of five different imaging protocols using a newly proposed attention fusion method composed of intra-modality attention and inter-modality attention. In addition, self-supervised learning was used to learn deep representations from a large paired MR-CT dataset. The encoder part of the MR-to-CT translation network was used as a pretraining network for staging, with the aim of overcoming the lack of annotated data for staging. Ablation studies were performed to investigate the contribution of each proposed method. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the performance of the networks. RESULTS Our model improved the performance of the four-way classification of the Association Research Circulation Osseous (ARCO) stage using MR images of the multiple protocols by 6.8%p in AUROC over a plain VGG network. Each proposed method increased the performance by 4.7%p (self-supervised learning) and 2.6%p (attention fusion) in AUROC, as demonstrated by the ablation experiments. CONCLUSIONS We have shown the feasibility of MR-only ONFH staging using self-supervised learning and attention fusion. The large amounts of paired MR-CT data available in hospitals can be used to further improve staging performance, and the proposed method has the potential to be used in the diagnosis of various diseases that require staging from multiple MR protocols.
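As a rough illustration of the inter-modality attention idea described above (weighting features from the five MR protocols before classifying into the four ARCO stages), the following PyTorch sketch shows one simple way such a fusion module could look. It is a hedged simplification under stated assumptions, not the authors' implementation, and it omits the intra-modality attention and the MR-to-CT pretraining step.

```python
import torch
import torch.nn as nn

class InterModalityAttentionFusion(nn.Module):
    """Fuse feature vectors from several MR protocols with learned attention weights."""
    def __init__(self, num_protocols=5, feat_dim=256, num_classes=4):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)                  # one relevance score per protocol
        self.classifier = nn.Linear(feat_dim, num_classes)   # four ARCO stages

    def forward(self, feats):                                # feats: (B, num_protocols, feat_dim)
        attn = torch.softmax(self.score(feats), dim=1)       # (B, num_protocols, 1)
        fused = (attn * feats).sum(dim=1)                    # attention-weighted sum over protocols
        return self.classifier(fused), attn

# feats would come from protocol-specific encoders, e.g. pretrained via the
# MR-to-CT translation task (the self-supervised step described above).
fusion = InterModalityAttentionFusion()
logits, weights = fusion(torch.randn(2, 5, 256))
```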
Affiliation(s)
- Bomin Kim
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
| | - Geun Young Lee
- Department of Radiology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Republic of Korea
| | - Sung-Hong Park
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
30
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422 PMCID: PMC10486542 DOI: 10.3390/healthcare11172388] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 08/16/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
The emergence of a novel coronavirus in Wuhan in 2019 led to the COVID-19 pandemic, which the World Health Organization (WHO) declared a global pandemic on 11 March 2020 owing to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision-making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly with respect to the need for cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing the search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central repository and the Web of Science platform.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
| | - Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India;
| | - Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA;
31
Pennati F, Aliverti A, Pozzi T, Gattarello S, Lombardo F, Coppola S, Chiumello D. Machine learning predicts lung recruitment in acute respiratory distress syndrome using single lung CT scan. Ann Intensive Care 2023; 13:60. [PMID: 37405546 PMCID: PMC10322807 DOI: 10.1186/s13613-023-01154-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 06/11/2023] [Indexed: 07/06/2023] Open
Abstract
BACKGROUND To develop and validate classifier models that could be used to identify patients with a high percentage of potentially recruitable lung from readily available clinical data and from quantitative analysis of a single CT scan at intensive care unit admission. 221 retrospectively enrolled mechanically ventilated, sedated and paralyzed patients with acute respiratory distress syndrome (ARDS) underwent a PEEP trial at 5 and 15 cmH2O of PEEP and two lung CT scans performed at 5 and 45 cmH2O of airway pressure. Lung recruitability was defined first as the percent change in non-aerated tissue between 5 and 45 cmH2O (radiologically defined; recruiters: Δ45-5 non-aerated tissue > 15%) and second as the change in PaO2 between 5 and 15 cmH2O (gas exchange-defined; recruiters: Δ15-5 PaO2 > 24 mmHg). Four machine learning (ML) algorithms were evaluated as classifiers of radiologically defined and gas exchange-defined lung recruiters using different models including different variables, separately or combined, of lung mechanics, gas exchange and CT data. RESULTS ML algorithms based on CT scan data at 5 cmH2O classified radiologically defined lung recruiters with an AUC similar to that of ML based on the combination of lung mechanics, gas exchange and CT data. The ML algorithm based on CT scan data classified gas exchange-defined lung recruiters with the highest AUC. CONCLUSIONS ML based on single CT data at 5 cmH2O represents an easy-to-apply tool to classify ARDS patients into recruiters and non-recruiters according to both radiologically defined and gas exchange-defined lung recruitment within the first 48 h from the start of mechanical ventilation.
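To make the classification setup concrete, the sketch below shows how a recruiter versus non-recruiter classifier could be cross-validated on quantitative CT features with scikit-learn. The synthetic data, the choice of a random forest, and the 5-fold scheme are illustrative assumptions only; the study evaluated four ML algorithms of its own choosing on real CT, gas exchange, and lung mechanics variables.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for quantitative CT features at 5 cmH2O
# (e.g., % non-aerated tissue, lung gas volume, lung weight) of 221 patients.
X, y = make_classification(n_samples=221, n_features=10, n_informative=5, random_state=0)
# y = 1 would correspond to "recruiter" (Δ45-5 non-aerated tissue > 15%).

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("Cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```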
Affiliation(s)
- Francesca Pennati
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
| | - Andrea Aliverti
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
| | - Tommaso Pozzi
- Department of Health Sciences, University of Milan, Milan, Italy
| | - Simone Gattarello
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Fabio Lombardo
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Silvia Coppola
- Department of Anesthesia and Intensive Care, ASST Santi Paolo e Carlo, San Paolo University Hospital, Via Di Rudini 9, Milan, Italy
| | - Davide Chiumello
- Department of Health Sciences, University of Milan, Milan, Italy.
- Department of Anesthesia and Intensive Care, ASST Santi Paolo e Carlo, San Paolo University Hospital, Via Di Rudini 9, Milan, Italy.
- Coordinated Research Center on Respiratory Failure, University of Milan, Milan, Italy.
32
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852 PMCID: PMC10440543 DOI: 10.1117/1.jmi.10.4.044504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 07/12/2023] [Accepted: 08/01/2023] [Indexed: 08/24/2023] Open
Abstract
Purpose Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus with a training/validation/test split of 64%/16%/20% on a by patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance using predictions based on the AI prognostic marker derived from CXR images. Conclusions This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
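The sequential transfer learning recipe described above (ImageNet → broad-spectrum CXR findings → pneumonia detection → ICU-need prediction) can be summarized in a few lines. The PyTorch sketch below only illustrates the staged head-swapping idea under stated assumptions; the commented train() calls and data loaders are hypothetical placeholders, not part of the published pipeline.

```python
import torch.nn as nn
from torchvision import models

def new_head(model, num_outputs):
    """Replace the DenseNet121 classifier for the next task in the sequence."""
    model.classifier = nn.Linear(model.classifier.in_features, num_outputs)
    return model

# Stage 0: ImageNet-pretrained backbone.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Stage 1: fine-tune on a broad-spectrum CXR dataset with many findings.
model = new_head(model, num_outputs=14)
# train(model, broad_cxr_loader)        # hypothetical training loop

# Stage 2: refine on a pneumonia-detection dataset.
model = new_head(model, num_outputs=1)
# train(model, pneumonia_loader)

# Stage 3: fine-tune to predict need for intensive care within 24/48/72/96 h.
model = new_head(model, num_outputs=1)
# train(model, icu_need_loader)
```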
Affiliation(s)
- Hui Li
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Qiyuan Hu
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Jordan D. Fuhrman
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
33
Mehrdad S, Shamout FE, Wang Y, Atashzar SF. Deep learning for deterioration prediction of COVID-19 patients based on time-series of three vital signs. Sci Rep 2023; 13:9968. [PMID: 37339986 PMCID: PMC10282033 DOI: 10.1038/s41598-023-37013-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Accepted: 06/14/2023] [Indexed: 06/22/2023] Open
Abstract
Unrecognized deterioration of COVID-19 patients can lead to high morbidity and mortality. Most existing deterioration prediction models require a large amount of clinical information, typically collected in hospital settings, such as medical images or comprehensive laboratory tests. This is infeasible for telehealth solutions and highlights a gap in deterioration prediction models based on minimal data, which can be recorded at a large scale in any clinic, nursing home, or even at the patient's home. In this study, we develop and compare two prognostic models that predict whether a patient will experience deterioration in the forthcoming 3 to 24 h. The models sequentially process three routine vital signs: (a) oxygen saturation, (b) heart rate, and (c) temperature. They are also provided with basic patient information, including sex, age, vaccination status, vaccination date, and status of obesity, hypertension, or diabetes. The difference between the two models lies in how the temporal dynamics of the vital signs are processed. Model #1 utilizes a temporally dilated version of the long short-term memory (LSTM) model, and Model #2 utilizes a residual temporal convolutional network (TCN). We train and evaluate the models using data collected from 37,006 COVID-19 patients at NYU Langone Health in New York, USA. The convolution-based model outperforms the LSTM-based model, achieving a high AUROC of 0.8844-0.9336 for 3 to 24 h deterioration prediction on a held-out test set. We also conduct occlusion experiments to evaluate the importance of each input feature, which reveal the significance of continuously monitoring the variation of the vital signs. Our results show the prospect of accurate deterioration forecasting using a minimal feature set that can be relatively easily obtained using wearable devices and self-reported patient information.
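To illustrate the convolution-based option (Model #2), the following PyTorch sketch shows a small causal temporal convolutional network over the three vital-sign series combined with static patient features. Layer sizes, the number of static features, and the two-layer depth are assumptions for illustration; the published residual TCN is more elaborate.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution whose effective padding is left-only, so no future time steps leak in."""
    def __init__(self, c_in, c_out, kernel_size=3, dilation=1):
        super().__init__()
        self.cut = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, padding=self.cut, dilation=dilation)

    def forward(self, x):
        return self.conv(x)[..., :-self.cut]        # drop the look-ahead tail

class VitalSignTCN(nn.Module):
    """Toy TCN over (SpO2, heart rate, temperature) plus static patient descriptors."""
    def __init__(self, n_vitals=3, n_static=7, hidden=32):
        super().__init__()
        self.tcn = nn.Sequential(
            CausalConv1d(n_vitals, hidden, dilation=1), nn.ReLU(),
            CausalConv1d(hidden, hidden, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(hidden + n_static, 1)

    def forward(self, vitals, static):
        # vitals: (B, T, 3) -> (B, 3, T) for Conv1d; static: (B, n_static)
        h = self.tcn(vitals.transpose(1, 2))[..., -1]   # summary at the last time step
        return torch.sigmoid(self.head(torch.cat([h, static], dim=1)))

model = VitalSignTCN()
p = model(torch.randn(8, 48, 3), torch.randn(8, 7))     # probability of deterioration
```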
Affiliation(s)
- Sarmad Mehrdad
- Department of Electrical and Computer Engineering, New York University (NYU), New York, USA
| | - Farah E Shamout
- Department of Biomedical Engineering, New York University (NYU), New York, USA
- Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, UAE
- Computer Science and Engineering, New York University (NYU), New York, USA
| | - Yao Wang
- Department of Electrical and Computer Engineering, New York University (NYU), New York, USA
- Department of Biomedical Engineering, New York University (NYU), New York, USA
| | - S Farokh Atashzar
- Department of Electrical and Computer Engineering, New York University (NYU), New York, USA.
- Department of Biomedical Engineering, New York University (NYU), New York, USA.
- Department of Mechanical and Aerospace Engineering, New York University (NYU), New York, USA.
34
Das S, Ayus I, Gupta D. A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. HEALTH AND TECHNOLOGY 2023; 13:1-14. [PMID: 37363343 PMCID: PMC10244837 DOI: 10.1007/s12553-023-00757-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/14/2023] [Indexed: 06/28/2023]
Abstract
Purpose The first transmission of the coronavirus to humans occurred in Wuhan, China, and grew into the pandemic known as Coronavirus Disease 2019 (COVID-19), posing a major threat to the entire world. Researchers are working to incorporate artificial intelligence (machine learning or deep learning models) for the efficient detection of COVID-19. This review explores the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, which may help researchers explore different directions. The main purpose of this review article is to present a compact overview of the application of artificial intelligence to research experts, helping them explore future scopes of improvement. Methods Researchers have used various machine learning, deep learning, and combined machine and deep learning models for extracting significant features and classifying various health conditions in COVID-19 patients. For this purpose, they have utilized different image modalities such as CT scan, X-ray, etc. This study collected over 200 research papers from repositories such as Google Scholar, PubMed, and Web of Science. These research papers were passed through various levels of scrutiny and, finally, 50 research articles were selected. Results In the listed articles, the ML/DL models showed an accuracy of 99% and above when classifying COVID-19. This study also presents the clinical applications of the reviewed research and specifies the importance of machine and deep learning models in the field of medical diagnosis and research. Conclusion In conclusion, it is evident that ML/DL models have made significant progress in recent years, but there are still limitations that need to be addressed. Overfitting is one such limitation that can lead to incorrect predictions and overburdening of the models. The research community must continue to work towards finding ways to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
Affiliation(s)
- Sreeparna Das
- Department of Computer Science and Engineering, National Institute of Technology Arunachal Pradesh, Jote, Arunachal Pradesh 791113 India
| | - Ishan Ayus
- Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha 751030 India
| | - Deepak Gupta
- Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, UP 211004 India
35
Meng F, Kottlors J, Shahzad R, Liu H, Fervers P, Jin Y, Rinneburger M, Le D, Weisthoff M, Liu W, Ni M, Sun Y, An L, Huai X, Móré D, Giannakis A, Kaltenborn I, Bucher A, Maintz D, Zhang L, Thiele F, Li M, Perkuhn M, Zhang H, Persigehl T. AI support for accurate and fast radiological diagnosis of COVID-19: an international multicenter, multivendor CT study. Eur Radiol 2023; 33:4280-4291. [PMID: 36525088 PMCID: PMC9755771 DOI: 10.1007/s00330-022-09335-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Revised: 11/03/2022] [Accepted: 11/29/2022] [Indexed: 12/23/2022]
Abstract
OBJECTIVES Differentiation between COVID-19 and community-acquired pneumonia (CAP) in computed tomography (CT) is a task that can be performed by human radiologists and artificial intelligence (AI). The present study aims to (1) develop an AI algorithm for differentiating COVID-19 from CAP, (2) evaluate its performance, and (3) evaluate the benefit of using the AI result as assistance for radiological diagnosis and the impact on relevant parameters such as accuracy of the diagnosis, diagnostic time, and confidence. METHODS We included n = 1591 multicenter, multivendor chest CT scans and divided them into AI training and validation datasets to develop an AI algorithm (n = 991 CT scans; n = 462 COVID-19 and n = 529 CAP) from three centers in China. An independent Chinese and German test dataset of n = 600 CT scans from six centers (COVID-19 / CAP; n = 300 each) was used to test the performance of eight blinded radiologists and the AI algorithm. A subtest dataset (180 CT scans; n = 90 each) was used to evaluate the radiologists' performance without and with AI assistance to quantify changes in diagnostic accuracy, reporting time, and diagnostic confidence. RESULTS The diagnostic accuracy of the AI algorithm in the Chinese-German test dataset was 76.5%. Without AI assistance, the eight radiologists' diagnostic accuracy was 79.1%; it increased to 81.5% with AI assistance, along with significantly shorter decision times and higher confidence scores. CONCLUSION This large multicenter study demonstrates that AI assistance in CT-based differentiation of COVID-19 and CAP increases radiological performance with higher accuracy and specificity, faster diagnostic time, and improved diagnostic confidence. KEY POINTS • AI can help radiologists achieve higher diagnostic accuracy, make faster decisions, and improve diagnostic confidence. • This Chinese-German multicenter study demonstrates the advantages of human-machine interaction using AI in clinical radiology for the diagnostic differentiation between COVID-19 and CAP in CT scans.
Affiliation(s)
- Fanyang Meng
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Jonathan Kottlors
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Haifeng Liu
- Department of Radiology, Wuhan No. 1 Hospital, Wuhan, China
| | - Philipp Fervers
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Yinhua Jin
- Department of Radiology, Ningbo Hwamei Hospital, University of Chinese Academy of Sciences, Ningbo, China
| | - Miriam Rinneburger
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Dou Le
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Mathilda Weisthoff
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Wenyun Liu
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Mengzhe Ni
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Ye Sun
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Liying An
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | | | - Dorottya Móré
- Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Athanasios Giannakis
- Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Isabel Kaltenborn
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - Andreas Bucher
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Lei Zhang
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Mingyang Li
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Michael Perkuhn
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China.
| | - Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
36
Yu X, Kang B, Nie P, Deng Y, Liu Z, Mao N, An Y, Xu J, Huang C, Huang Y, Zhang Y, Hou Y, Zhang L, Sun Z, Zhu B, Shi R, Zhang S, Sun C, Wang X. Development and validation of a CT-based radiomics model for differentiating pneumonia-like primary pulmonary lymphoma from infectious pneumonia: A multicenter study. Chin Med J (Engl) 2023; 136:1188-1197. [PMID: 37083119 PMCID: PMC10278712 DOI: 10.1097/cm9.0000000000002671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Indexed: 04/22/2023] Open
Abstract
BACKGROUND Pneumonia-like primary pulmonary lymphoma (PPL) is commonly misdiagnosed as infectious pneumonia, leading to delayed treatment. The purpose of this study was to establish a computed tomography (CT)-based radiomics model to differentiate pneumonia-like PPL from infectious pneumonia. METHODS In this retrospective study, 79 patients with pneumonia-like PPL and 176 patients with infectious pneumonia from 12 medical centers were enrolled. Patients from centers 1 to 7 were assigned to the training or validation cohort, and the remaining patients from the other centers were used as the external test cohort. Radiomics features were extracted from CT images. A three-step procedure was applied for radiomics feature selection and radiomics signature building, comprising the inter- and intra-class correlation coefficients (ICCs), one-way analysis of variance (ANOVA), and the least absolute shrinkage and selection operator (LASSO). Univariate and multivariate analyses were used to identify the significant clinicoradiological variables and construct a clinical factor model. Two radiologists reviewed the CT images for the external test set. The performance of the radiomics model, the clinical factor model, and each radiologist was assessed by receiver operating characteristic analysis, and the area under the curve (AUC) was compared. RESULTS A total of 144 patients (44 with pneumonia-like PPL and 100 with infectious pneumonia) were in the training cohort, 38 patients (12 with pneumonia-like PPL and 26 with infectious pneumonia) were in the validation cohort, and 73 patients (23 with pneumonia-like PPL and 50 with infectious pneumonia) were in the external test cohort. Twenty-three radiomics features were selected to build the radiomics model, which yielded AUCs of 0.95 (95% confidence interval [CI]: 0.94-0.99), 0.93 (95% CI: 0.85-0.98), and 0.94 (95% CI: 0.87-0.99) in the training, validation, and external test cohorts, respectively. The AUCs for the two readers and the clinical factor model in the external test cohort were 0.74 (95% CI: 0.63-0.83), 0.72 (95% CI: 0.62-0.82), and 0.73 (95% CI: 0.62-0.84), respectively. The radiomics model outperformed both the readers' interpretation and the clinical factor model (P<0.05). CONCLUSIONS The CT-based radiomics model may provide an effective and non-invasive tool to differentiate pneumonia-like PPL from infectious pneumonia, which might assist clinicians in tailoring precise therapy.
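For readers unfamiliar with the radiomics signature workflow (reproducibility filtering, ANOVA, then LASSO), the scikit-learn sketch below shows a hedged approximation of the last two steps, using an L1-penalized logistic regression in place of a separate LASSO-plus-classifier stage and synthetic data in place of the extracted features; it is not the authors' pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for ICC-filtered radiomics features (many features, few patients).
X, y = make_classification(n_samples=182, n_features=1000, n_informative=25, random_state=0)

signature = Pipeline([
    ("scale", StandardScaler()),
    ("anova", SelectKBest(f_classif, k=100)),                                 # one-way ANOVA filter
    ("lasso", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),   # sparse radiomics signature
])
auc = cross_val_score(signature, X, y, cv=5, scoring="roc_auc").mean()
print("Cross-validated AUC of the radiomics signature:", round(auc, 3))
```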
Affiliation(s)
- Xinxin Yu
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong 250021, China
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Bing Kang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Pei Nie
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
| | - Yan Deng
- Department of Radiology, Qilu Hospital, Shandong University, Jinan, Shandong 250012, China
| | - Zixin Liu
- Department of Medicine, Graduate School, Kyung Hee University, Seoul 446701, Republic of Korea
| | - Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong 164000, China
| | - Yahui An
- Department of Research Collaboration, R&D Center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing 100080, China
| | - Jingxu Xu
- Department of Research Collaboration, R&D Center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing 100080, China
| | - Chencui Huang
- Department of Research Collaboration, R&D Center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing 100080, China
| | - Yong Huang
- Department of Radiology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong 250117, China
| | - Yonggao Zhang
- Department of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450052, China
| | - Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning 110004, China
| | - Longjiang Zhang
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210002, China
| | - Zhanguo Sun
- Department of Radiology, Affiliated Hospital of Jining Medical University, Jining, Shandong 272029, China
| | - Baosen Zhu
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong 250021, China
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Rongchao Shi
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong 250021, China
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Shuai Zhang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Cong Sun
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 250021, China
| | - Ximing Wang
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong 250021, China
37
Dabbagh R, Jamal A, Bhuiyan Masud JH, Titi MA, Amer YS, Khayat A, Alhazmi TS, Hneiny L, Baothman FA, Alkubeyyer M, Khan SA, Temsah MH. Harnessing Machine Learning in Early COVID-19 Detection and Prognosis: A Comprehensive Systematic Review. Cureus 2023; 15:e38373. [PMID: 37265897 PMCID: PMC10230599 DOI: 10.7759/cureus.38373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/30/2023] [Indexed: 06/03/2023] Open
Abstract
During the early phase of the COVID-19 pandemic, reverse transcriptase-polymerase chain reaction (RT-PCR) testing faced limitations, prompting the exploration of machine learning (ML) alternatives for diagnosis and prognosis. Providing a comprehensive appraisal of such decision support systems and their use in COVID-19 management can aid the medical community in making informed decisions during the risk assessment of their patients, especially in low-resource settings. Therefore, the objective of this study was to systematically review studies that predicted the diagnosis of COVID-19 or the severity of the disease using ML. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we conducted a literature search of MEDLINE (OVID), Scopus, EMBASE, and IEEE Xplore from January 1 to June 30, 2020. The outcomes were COVID-19 diagnosis or prognostic measures such as death, need for mechanical ventilation, admission, and acute respiratory distress syndrome. We included peer-reviewed observational studies, clinical trials, research letters, case series, and reports. We extracted data about each study's country, setting, sample size, data source, dataset, diagnostic or prognostic outcomes, prediction measures, type of ML model, and measures of diagnostic accuracy. Bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO), with the number CRD42020197109. Sixty-six records were included for final data extraction. Forty-three (64%) studies used secondary data. The largest proportion of studies was from Chinese authors (30%). Most of the literature (79%) relied on chest imaging for prediction, while the remainder used various laboratory indicators, including hematological, biochemical, and immunological markers. Thirteen studies explored predicting COVID-19 severity, while the rest predicted diagnosis. Seventy percent of the articles used deep learning models, while 30% used traditional ML algorithms. Most studies reported high sensitivity, specificity, and accuracy for the ML models (exceeding 90%). The overall concern about the risk of bias was "unclear" in 56% of the studies, mainly owing to concerns about selection bias. ML may help identify COVID-19 patients in the early phase of the pandemic, particularly in the context of chest imaging. Although these studies report high accuracy for the ML models, the novelty of these models and the biases in dataset selection make using them as a replacement for clinicians' cognitive decision-making questionable. Continued research is needed to enhance the robustness and reliability of ML systems in COVID-19 diagnosis and prognosis.
Affiliation(s)
- Rufaidah Dabbagh
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
| | - Amr Jamal
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
| | | | - Maher A Titi
- Quality Management Department, King Saud University Medical City, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
| | - Yasser S Amer
- Pediatrics, Quality Management Department, King Saud University Medical City, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
| | - Afnan Khayat
- Health Information Management Department, Prince Sultan Military College of Health Sciences, Al Dhahran, SAU
| | - Taha S Alhazmi
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
| | - Layal Hneiny
- Medicine, Wegner Health Sciences Library, University of South Dakota, Vermillion, USA
| | - Fatmah A Baothman
- Department of Information Systems, King Abdulaziz University, Jeddah, SAU
| | | | - Samina A Khan
- School of Computer Sciences, Universiti Sains Malaysia, Penang, MYS
| | - Mohamad-Hani Temsah
- Pediatric Intensive Care Unit, Department of Pediatrics, King Saud University, Riyadh, SAU
38
Rehman A, Xing H, Adnan Khan M, Hussain M, Hussain A, Gulzar N. Emerging technologies for COVID (ET-CoV) detection and diagnosis: Recent advancements, applications, challenges, and future perspectives. Biomed Signal Process Control 2023; 83:104642. [PMID: 36818992 PMCID: PMC9917176 DOI: 10.1016/j.bspc.2023.104642] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 11/29/2022] [Accepted: 01/25/2023] [Indexed: 02/12/2023]
Abstract
In light of the constantly changing terrain of the COVID-19 outbreak, medical specialists have implemented proactive schemes for vaccine production. Despite the remarkable COVID-19 vaccine development, the virus has mutated into new variants, including Delta and Omicron. Currently, the situation is critical in many parts of the world, and precautions are being taken to stop the virus from spreading and mutating. Early identification and diagnosis of COVID-19 are the main challenges faced by emerging technologies during the outbreak. In these circumstances, emerging technologies for tackling the coronavirus have proven valuable. Artificial intelligence (AI), big data, the internet of medical things (IoMT), robotics, blockchain technology, telemedicine, smart applications, and additive manufacturing have been applied to detecting, classifying, monitoring, and locating COVID-19. This research therefore surveys these COVID-19-fighting technologies, focusing on their strengths and limitations. A CiteSpace-based bibliometric analysis of the emerging technologies was conducted, and the most impactful keywords and ongoing research frontiers were compiled. Emerging technologies remain hampered by data inconsistency, redundant and noisy datasets, and the inability to aggregate data held in disparate formats. Moreover, the privacy and confidentiality of patient medical records are not guaranteed. Hence, significant data analysis is required to develop intelligent computational models for effective and rapid clinical diagnosis of COVID-19. Notably, this article outlines how emerging technologies have been used to counteract the virus disaster, offers ongoing research frontiers, and directs readers to concentrate on the real challenges, thus facilitating additional work to amplify emerging technologies.
Affiliation(s)
- Amir Rehman
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Huanlai Xing
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Muhammad Adnan Khan
- Pattern Recognition and Machine Learning, Department of Software, Gachon University, Seongnam 13557, Republic of Korea
- Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
| | - Mehboob Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Abid Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Nighat Gulzar
- School of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu, 611756, China
39
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. FRONTIERS IN RADIOLOGY 2023; 3:1153784. [PMID: 37492386 PMCID: PMC10365282 DOI: 10.3389/fradi.2023.1153784] [Citation(s) in RCA: 52] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 03/31/2023] [Indexed: 07/27/2023]
Abstract
Introduction Medical image analysis is of tremendous importance for clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion The uRP offers three advantages: (1) it spans a wealth of algorithms for image processing, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; (2) it integrates a variety of functional modules, which can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses; and (3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
40
Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. NEW GENERATION COMPUTING 2023; 41:343-400. [PMID: 37229176 PMCID: PMC10071474 DOI: 10.1007/s00354-023-00213-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can have a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-Ray, Computed Tomography (CT), and Ultrasound Images (US). This could help in identifying COVID-19 cases as a first step to curing them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 about deep learning models that were used in COVID-19 detection. This paper clarified the three most common imaging modalities (X-Ray, CT, and US) in addition to the DL approaches that are used in this detection and compared these approaches. This paper also provided the future directions of this field to fight COVID-19 disease.
Affiliation(s)
- Rana Khattab
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Islam R. Abdelmaksoud
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Samir Abdelrazek
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
41
Gangl C, Krychtiuk K. Digital health-high tech or high touch? Wien Med Wochenschr 2023; 173:115-124. [PMID: 36602630 PMCID: PMC9813878 DOI: 10.1007/s10354-022-00991-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 11/07/2022] [Indexed: 01/06/2023]
Abstract
Digital transformation in medicine refers to the implementation of information technology-driven developments in the healthcare system and their impact on the way we teach, share, and practice medicine. We would like to provide an overview of current developments and opportunities but also of the risks of digital transformation in medicine. Therefore, we examine the possibilities wearables and digital biomarkers provide for early detection and monitoring of diseases and discuss the potential of artificial intelligence applications in medicine. Furthermore, we outline new opportunities offered by telemedicine applications and digital therapeutics, discuss the aspects of social media in healthcare, and provide an outlook on "Health 4.0."
Affiliation(s)
- Clemens Gangl
- Department of Internal Medicine II, Division of Cardiology, Medical University of Vienna, Währinger Gürtel 18–20, 1090 Vienna, Austria
| | - Konstantin Krychtiuk
- Department of Internal Medicine II, Division of Cardiology, Medical University of Vienna, Währinger Gürtel 18–20, 1090 Vienna, Austria
42
Wang L, Wu F, Xiao M, Chen YX, Wu L. Prediction of pulp exposure risk of carious pulpitis based on deep learning. HUA XI KOU QIANG YI XUE ZA ZHI = HUAXI KOUQIANG YIXUE ZAZHI = WEST CHINA JOURNAL OF STOMATOLOGY 2023; 41:218-224. [PMID: 37056189 PMCID: PMC10427250 DOI: 10.7518/gjkq.2023.2022418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 02/26/2023] [Indexed: 04/15/2023]
Abstract
OBJECTIVES This study aims to predict the risk of pulp exposure in deep caries from radiographic images based on convolutional neural network models, compare the prediction results of the network models with those of a senior dentist, evaluate the performance of the models for teaching and training dental students and young dentists, and assist dentists in clarifying treatment plans and conducting good doctor-patient communication before surgery. METHODS A total of 206 cases of pulpitis caused by deep caries were selected from the Stomatological Hospital of Tianjin Medical University from 2019 to 2022. According to the inclusion and exclusion criteria, the pulp was exposed during caries removal and cavity preparation in 104 cases and was not exposed in 102 cases. The 206 radiographic images collected were randomly divided into three sets: 126 radiographic images in the training set, 40 in the validation set, and 40 in the test set. Three convolutional neural networks, the visual geometry group network (VGG), the residual network (ResNet), and the dense convolutional network (DenseNet), were selected to learn from the radiographic images in the training set. The radiographic images of the validation set were used to tune the hyperparameters of the networks. Finally, the 40 radiographic images of the test set were used to evaluate the performance of the three network models. A senior dentist specializing in dental pulp was selected to predict whether the deep caries in the 40 test-set radiographic images would result in pulp exposure. The gold standard was whether the pulp was exposed after caries removal during the clinical procedure. The prediction performance of the three network models (VGG, ResNet, and DenseNet) and the senior dentist on pulp exposure for the 40 test-set radiographic images was compared using the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score to select the best network model. RESULTS The best network model was the DenseNet model, with an AUC of 0.97. The AUC values of the ResNet model, the VGG model, and the senior dentist were 0.89, 0.78, and 0.87, respectively. Accuracy was not statistically different between the senior dentist (0.850) and the DenseNet model (0.850) (P>0.05). The Kappa consistency test showed moderate reliability (Kappa=0.6>0.4, P<0.05). CONCLUSIONS Among the three convolutional neural network models, the DenseNet model has the best performance in predicting whether deep caries will result in pulp exposure on imaging. Its predictive performance is equivalent to the level of senior dentists specializing in dental pulp.
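The head-to-head comparison of the three networks and the dentist rests on the AUC plus standard threshold metrics derived from the confusion matrix. As a small illustration not tied to the study's data, the helper below computes those quantities with scikit-learn; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def summarize(y_true, y_prob, threshold=0.5):
    """AUC plus the threshold-based metrics used to compare models (and a human reader)."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "F1": 2 * tp / (2 * tp + fp + fn),
    }

# e.g. summarize(labels_test, densenet_probs) vs. summarize(labels_test, resnet_probs)
```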
Affiliation(s)
- Li Wang
- Dept. of Endodontics, Stomatological Hospital, Tianjin Medical University, Tianjin 300070, China
| | - Fei Wu
- Dept. of General Dentistry, Yantai Stomatological Hospital Affiliated Binzhou Medical College, Yantai 264008, China
| | - Mo Xiao
- Dept. of Endodontics, Stomatological Hospital, Tianjin Medical University, Tianjin 300070, China
| | - Yu-Xin Chen
- Dept. of Endodontics, Stomatological Hospital, Tianjin Medical University, Tianjin 300070, China
| | - Ligeng Wu
- Dept. of Endodontics, Stomatological Hospital, Tianjin Medical University, Tianjin 300070, China
43
Nakashima M, Uchiyama Y, Minami H, Kasai S. Prediction of COVID-19 patients in danger of death using radiomic features of portable chest radiographs. J Med Radiat Sci 2023; 70:13-20. [PMID: 36334033 PMCID: PMC9877603 DOI: 10.1002/jmrs.631] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 10/14/2022] [Indexed: 11/06/2022] Open
Abstract
INTRODUCTION Computer-aided diagnostic systems have been developed for the detection and differential diagnosis of coronavirus disease 2019 (COVID-19) pneumonia using imaging studies to characterise a patient's current condition. In this radiomic study, we propose a system for predicting COVID-19 patients in danger of death using portable chest X-ray images. METHODS In this retrospective study, we selected 100 patients, including ten that died and 90 that recovered from the COVID-19-AR database of the Cancer Imaging Archive. Since it can be difficult to analyse portable chest X-ray images of patients with COVID-19 because bone components overlap with the abnormal patterns of this disease, we employed a bone-suppression technique during pre-processing. A total of 620 radiomic features were measured in the left and right lung regions, and four radiomic features were selected using the least absolute shrinkage and selection operator technique. We distinguished death from recovery cases using a linear discriminant analysis (LDA) and a support vector machine (SVM). The leave-one-out method was used to train and test the classifiers, and the area under the receiver-operating characteristic curve (AUC) was used to evaluate discriminative performance. RESULTS The AUCs for LDA and SVM were 0.756 and 0.959, respectively. The discriminative performance was improved when the bone-suppression technique was employed. When the SVM was used, the sensitivity for predicting disease severity was 90.9% (9/10), and the specificity was 95.6% (86/90). CONCLUSIONS We believe that the radiomic features of portable chest X-ray images can predict COVID-19 patients in danger of death.
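A rough scikit-learn sketch of the discrimination step described above (LDA versus SVM with leave-one-out validation and AUC) is given below; the synthetic four-feature data and the RBF kernel are assumptions standing in for the LASSO-selected radiomic features and the study's actual SVM settings.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# Stand-in for the 4 selected radiomic features of 100 patients (about 10 deaths, 90 recoveries).
X, y = make_classification(n_samples=100, n_features=4, n_informative=4, n_redundant=0,
                           weights=[0.9, 0.1], random_state=0)

loo = LeaveOneOut()
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", probability=True))]:
    prob = cross_val_predict(clf, X, y, cv=loo, method="predict_proba")[:, 1]
    print(name, "leave-one-out AUC:", round(roc_auc_score(y, prob), 3))
```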
Affiliation(s)
- Maoko Nakashima
- Graduate School of Health Sciences, Kumamoto University, Kumamoto, Japan
| | - Yoshikazu Uchiyama
- Department of Medical Image Sciences, Faculty of Life Sciences, Kumamoto University, Kumamoto, Japan
| | | | - Satoshi Kasai
- Department of Radiological Technology, Niigata University of Health and Welfare, Niigata, Japan
44
Gazeau S, Deng X, Ooi HK, Mostefai F, Hussin J, Heffernan J, Jenner AL, Craig M. The race to understand immunopathology in COVID-19: Perspectives on the impact of quantitative approaches to understand within-host interactions. IMMUNOINFORMATICS (AMSTERDAM, NETHERLANDS) 2023; 9:100021. [PMID: 36643886 PMCID: PMC9826539 DOI: 10.1016/j.immuno.2023.100021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 11/16/2022] [Accepted: 01/03/2023] [Indexed: 01/09/2023]
Abstract
The COVID-19 pandemic has revealed the need for increased integration of modelling and data analysis into public health, experimental, and clinical studies. Throughout the first two years of the pandemic, there has been a concerted effort to improve our understanding of the within-host immune response to the SARS-CoV-2 virus to provide better predictions of COVID-19 severity, to address treatment and vaccine development questions, and to gain insights into viral evolution and the impacts of variants on immunopathology. Here we provide perspectives on what has been accomplished using quantitative methods, including predictive modelling, population genetics, machine learning, and dimensionality reduction techniques, in the first 26 months of the COVID-19 pandemic, and on where we go from here to improve our responses to this and future pandemics.
Affiliation(s)
- Sonia Gazeau
- Department of Mathematics and Statistics, Université de Montréal, Montréal, Canada
- Sainte-Justine University Hospital Research Centre, Montréal, Canada
| | - Xiaoyan Deng
- Department of Mathematics and Statistics, Université de Montréal, Montréal, Canada
- Sainte-Justine University Hospital Research Centre, Montréal, Canada
| | - Hsu Kiang Ooi
- Digital Technologies Research Centre, National Research Council Canada, Toronto, Canada
| | - Fatima Mostefai
- Montréal Heart Institute Research Centre, Montréal, Canada
- Department of Medicine, Faculty of Medicine, Université de Montréal, Montréal, Canada
| | - Julie Hussin
- Montréal Heart Institute Research Centre, Montréal, Canada
- Department of Medicine, Faculty of Medicine, Université de Montréal, Montréal, Canada
| | - Jane Heffernan
- Modelling Infection and Immunity Lab, Mathematics & Statistics, York University, Toronto, Canada
- Centre for Disease Modelling (CDM), Mathematics & Statistics, York University, Toronto, Canada
| | - Adrianne L Jenner
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia
| | - Morgan Craig
- Department of Mathematics and Statistics, Université de Montréal, Montréal, Canada
- Sainte-Justine University Hospital Research Centre, Montréal, Canada
| |
|
45
|
Matsumoto T, Walston SL, Walston M, Kabata D, Miki Y, Shiba M, Ueda D. Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs. J Digit Imaging 2023; 36:178-188. [PMID: 35941407 PMCID: PMC9360661 DOI: 10.1007/s10278-022-00691-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 06/20/2022] [Accepted: 07/22/2022] [Indexed: 11/18/2022] Open
Abstract
Accurate estimation of mortality and time to death at admission for COVID-19 patients is important and several deep learning models have been created for this task. However, there are currently no prognostic models which use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multiple-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv only with clinical data, DeepSurv only with images (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. The Harrel's concordance index (c-index) of the DeepSurv with CNN model was 0.82 (0.75-0.88) and this was significantly higher than the DeepSurv only with clinical data model (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
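A compact sketch of the Cox partial-likelihood objective that underlies a DeepSurv-style network, written in PyTorch; this is a generic illustration rather than the authors' implementation, the CNN image branch is omitted, and all layer sizes and tensors are illustrative:

```python
import torch
import torch.nn as nn

class DeepSurvMLP(nn.Module):
    """Multilayer perceptron that outputs one log-risk score per patient."""
    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def neg_cox_partial_log_likelihood(log_risk, time, event):
    """Negative Cox partial log-likelihood (no tie handling); event = 1 death, 0 censored."""
    order = torch.argsort(time, descending=True)       # longest follow-up first
    log_risk, event = log_risk[order], event[order]
    log_cumsum = torch.logcumsumexp(log_risk, dim=0)   # log-sum of hazards over the at-risk set
    return -((log_risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

# Illustrative single training step on random "clinical data"
model = DeepSurvMLP(in_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 10)
time = torch.rand(32) * 30.0                 # days to death or discharge
event = (torch.rand(32) < 0.3).float()       # 1 = death observed
loss = neg_cox_partial_log_likelihood(model(x), time, event)
loss.backward()
optimizer.step()
```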
Affiliation(s)
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Shannon Leigh Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Michael Walston
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Masatsugu Shiba
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Department of Medical Statistics, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Daiju Ueda
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| |
|
46
|
Chen Y, Lin Y, Xu X, Ding J, Li C, Zeng Y, Xie W, Huang J. Multi-domain medical image translation generation for lung image classification based on generative adversarial networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107200. [PMID: 36525713 DOI: 10.1016/j.cmpb.2022.107200] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 10/20/2022] [Accepted: 10/21/2022] [Indexed: 06/17/2023]
Abstract
OBJECTIVE Lung image classification-assisted diagnosis has a large application market. To address the shortcomings of existing translation models, namely poor attention, insufficient ability to transfer and generate key regions, insufficient quality of the generated images, and a lack of detailed features, this paper investigates lung medical image translation and lung image classification based on generative adversarial networks. METHODS This paper proposes MI-GAN, a medical image multi-domain translation algorithm based on a key migration branch. After analysis of the imbalanced medical image data, the key target domain images are selected, the key migration branch is established, and a single generator is used to perform multi-domain translation between medical images, which ensures the attention performance of the translation model and the quality of the synthesized images. In addition, a lung image classification model based on synthetic image data augmentation is proposed: the synthetic lung CT images and the original real images are used together as the training set to study the performance of the auxiliary diagnosis model in classifying normal healthy subjects as well as mild and severe COVID-19 patients. RESULTS On the chest CT image dataset, MI-GAN achieved mutual conversion and generation among disease-free normal lung images, viral pneumonia images, and mild COVID-19 images. The GAN-test and GAN-train indicators of the synthetic images reached 92.188% and 85.069%, respectively, a considerable improvement over other generative models in terms of authenticity and diversity. The accuracy of the lung image classification model for pneumonia diagnosis was 93.85%, which is 3.1% higher than that of the diagnosis model trained only with real images; the sensitivity was 96.69%, a relative improvement of 7.1%; the specificity was 89.70%; and the area under the ROC curve (AUC) increased from 94.00% to 96.17%. CONCLUSION This paper proposes a multi-domain translation model for medical images based on a key migration branch, which gives the translation network key-transfer and attention capability. The approach was verified on lung CT images and achieved good results. The required medical images are synthesized by the translation model, and the effectiveness of the synthesized images for the lung image classification network is verified experimentally.
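MI-GAN's exact architecture cannot be reconstructed from the abstract, but the single-generator multi-domain setup it describes is conventionally trained with adversarial, domain-classification, and cycle-reconstruction terms (StarGAN-style). The sketch below illustrates that loss structure with deliberately tiny toy networks; every module, name, and weight is an illustrative assumption, not the paper's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_DOMAINS = 3  # e.g. normal / viral pneumonia / mild COVID-19 (illustrative)

class TinyGenerator(nn.Module):
    """Toy generator: concatenates a one-hot domain code to every pixel, then one conv."""
    def __init__(self, channels=1):
        super().__init__()
        self.conv = nn.Conv2d(channels + N_DOMAINS, channels, kernel_size=3, padding=1)

    def forward(self, x, domain):
        code = F.one_hot(domain, N_DOMAINS).float()               # (B, N_DOMAINS)
        code = code[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return torch.tanh(self.conv(torch.cat([x, code], dim=1)))

class TinyDiscriminator(nn.Module):
    """Toy discriminator: realness score plus domain-classification logits."""
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 8, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv = nn.Linear(8, 1)
        self.cls = nn.Linear(8, N_DOMAINS)

    def forward(self, x):
        h = self.features(x)
        return self.adv(h), self.cls(h)

def generator_loss(G, D, x_real, src_domain, trg_domain, lambda_cls=1.0, lambda_rec=10.0):
    x_fake = G(x_real, trg_domain)
    adv_score, cls_logits = D(x_fake)
    loss_adv = -adv_score.mean()                        # adversarial: fool the discriminator
    loss_cls = F.cross_entropy(cls_logits, trg_domain)  # land in the requested target domain
    x_rec = G(x_fake, src_domain)                       # translate back to the source domain
    loss_rec = F.l1_loss(x_rec, x_real)                 # cycle term preserves anatomical content
    return loss_adv + lambda_cls * loss_cls + lambda_rec * loss_rec

# Smoke test on random "CT slices"
G, D = TinyGenerator(), TinyDiscriminator()
x = torch.randn(4, 1, 64, 64)
src = torch.randint(0, N_DOMAINS, (4,))
trg = torch.randint(0, N_DOMAINS, (4,))
print(generator_loss(G, D, x, src, trg))
```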
Affiliation(s)
- Yunfeng Chen
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China.
| | - Yalan Lin
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China
| | - Xiaodie Xu
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China
| | - Jinzhen Ding
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China
| | - Chuzhao Li
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China
| | - Yiming Zeng
- Department of Pulmonary Medicine, The Second Affiliated Hospital of Fujian Medical University, 950 Eastsea street, Fengzhe District, Quanzhou, Fujian 362000, China.
| | - Weifang Xie
- Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China; Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Fujian Province University, Quanzhou 362000, China
| | - Jianlong Huang
- Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China; Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Fujian Province University, Quanzhou 362000, China
| |
|
47
|
Comparison of the Diagnostic Performance of Deep Learning Algorithms for Reducing the Time Required for COVID-19 RT-PCR Testing. Viruses 2023; 15:v15020304. [PMID: 36851519 PMCID: PMC9966023 DOI: 10.3390/v15020304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 01/13/2023] [Accepted: 01/19/2023] [Indexed: 01/24/2023] Open
Abstract
(1) Background: Rapid and accurate negative discrimination enables efficient management of scarce isolation bed resources and adequate patient accommodation in the majority of areas experiencing an explosion of confirmed cases due to the Omicron variant. Until now, artificial intelligence and deep learning methods intended to replace time-consuming RT-PCR have relied on CXR, chest CT, blood test results, or clinical information. (2) Methods: We proposed and compared five different types of deep learning algorithms (RNN, LSTM, Bi-LSTM, GRU, and transformer) for reducing the time required for RT-PCR diagnosis by learning the change in fluorescence values over time during the RT-PCR process. (3) Results: Among the five deep learning algorithms capable of handling time-series data, Bi-LSTM and GRU were shown to be able to decrease the time required for RT-PCR diagnosis by half or by 25% without significantly impairing the diagnostic performance of the COVID-19 RT-PCR test. (4) Conclusions: Relative to the standard diagnosis based on 40 RT-PCR cycles, the diagnostic performance of the model developed in this study shows that the time required for RT-PCR diagnosis can be nearly halved.
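A minimal sketch of the kind of sequence classifier compared in the study, here a GRU over truncated amplification-curve values (PyTorch); the tensor shapes, cycle counts, and class labels are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class CurveGRU(nn.Module):
    """Classify positive vs. negative from the first k cycles of an RT-PCR fluorescence curve."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, fluorescence):           # fluorescence: (batch, cycles, 1)
        _, h_last = self.gru(fluorescence)     # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0))    # logits for negative / positive

# Using only the first 20 of 40 cycles would roughly halve the measurement time needed
model = CurveGRU()
curves = torch.rand(8, 20, 1)                  # illustrative batch of truncated curves
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(curves), labels)
loss.backward()
```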
|
48
|
Topff L, Groot Lipman KBW, Guffens F, Wittenberg R, Bartels-Rutten A, van Veenendaal G, Hess M, Lamerigts K, Wakkie J, Ranschaert E, Trebeschi S, Visser JJ, Beets-Tan RGH, Snoeckx A, Kint P, Van Hoe L, Quattrocchi CC, Dickerscheid D, Lounis S, Schulze E, Sjer AEB, van Vucht N, Tielbeek JA, Raat F, Eijspaart D, Abbas A. Is the generalizability of a developed artificial intelligence algorithm for COVID-19 on chest CT sufficient for clinical use? Results from the International Consortium for COVID-19 Imaging AI (ICOVAI). Eur Radiol 2023; 33:4249-4258. [PMID: 36651954 PMCID: PMC9848031 DOI: 10.1007/s00330-022-09303-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 10/14/2022] [Accepted: 11/18/2022] [Indexed: 01/19/2023]
Abstract
OBJECTIVES Only a few published artificial intelligence (AI) studies for COVID-19 imaging have been externally validated. Assessing the generalizability of developed models is essential, especially when considering clinical implementation. We report the development of the International Consortium for COVID-19 Imaging AI (ICOVAI) model and perform independent external validation. METHODS The ICOVAI model was developed using multicenter data (n = 1286 CT scans) to quantify disease extent and assess COVID-19 likelihood using the COVID-19 Reporting and Data System (CO-RADS). A ResUNet model was modified to automatically delineate lung contours and infectious lung opacities on CT scans, after which a random forest predicted the CO-RADS score. After internal testing, the model was externally validated on a multicenter dataset (n = 400) by independent researchers. CO-RADS classification performance was calculated using linearly weighted Cohen's kappa and segmentation performance using Dice Similarity Coefficient (DSC). RESULTS Regarding internal versus external testing, segmentation performance of lung contours was equally excellent (DSC = 0.97 vs. DSC = 0.97, p = 0.97). Lung opacities segmentation performance was adequate internally (DSC = 0.76), but significantly worse on external validation (DSC = 0.59, p < 0.0001). For CO-RADS classification, agreement with radiologists on the internal set was substantial (kappa = 0.78), but significantly lower on the external set (kappa = 0.62, p < 0.0001). CONCLUSION In this multicenter study, a model developed for CO-RADS score prediction and quantification of COVID-19 disease extent was found to have a significant reduction in performance on independent external validation versus internal testing. The limited reproducibility of the model restricted its potential for clinical use. The study demonstrates the importance of independent external validation of AI models. KEY POINTS • The ICOVAI model for prediction of CO-RADS and quantification of disease extent on chest CT of COVID-19 patients was developed using a large sample of multicenter data. • There was substantial performance on internal testing; however, performance was significantly reduced on external validation, performed by independent researchers. The limited generalizability of the model restricts its potential for clinical use. • Results of AI models for COVID-19 imaging on internal tests may not generalize well to external data, demonstrating the importance of independent external validation.
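The two headline metrics used here are straightforward to reproduce. A minimal sketch assuming scikit-learn and NumPy, with purely illustrative arrays in place of the study's CO-RADS scores and segmentation masks:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Linearly weighted kappa for ordinal CO-RADS categories (1-5); illustrative scores
radiologist = np.array([1, 2, 3, 4, 5, 3, 2, 4])
ai_model    = np.array([1, 2, 4, 4, 5, 2, 2, 5])
kappa = cohen_kappa_score(radiologist, ai_model, weights="linear")

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True   # predicted opacity mask
ref  = np.zeros((64, 64), dtype=bool); ref[15:45, 15:45] = True    # reference annotation
print(f"kappa={kappa:.2f}, DSC={dice(pred, ref):.2f}")
```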
Affiliation(s)
- Laurens Topff
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
| | - Kevin B W Groot Lipman
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
- Department of Thoracic Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Frederic Guffens
- Department of Radiology, University Hospitals Leuven, Herestraat 49, 3000, Leuven, Belgium
| | - Rianne Wittenberg
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Annemarieke Bartels-Rutten
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Erik Ranschaert
- Department of Radiology, St. Nikolaus Hospital, Hufengasse 4-8, 4700 Eupen, Belgium
- Ghent University, C. Heymanslaan 10, 9000 Ghent, Belgium
| | - Stefano Trebeschi
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Jacob J Visser
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD, Rotterdam, The Netherlands
| | - Regina G H Beets-Tan
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
- Institute of Regional Health Research, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
|
49
|
Khan A, Khan SH, Saif M, Batool A, Sohail A, Waleed Khan M. A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Affiliation(s)
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
| | - Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan
| | - Mahrukh Saif
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
| | - Asiya Batool
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
| | - Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Science, Faculty of Computing & Artificial Intelligence, Air University, Islamabad, Pakistan
| | - Muhammad Waleed Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Mechanical and Aerospace Engineering, Columbus, OH, USA
| |
|
50
|
Topff L, Sánchez-García J, López-González R, Pastor AJ, Visser JJ, Huisman M, Guiot J, Beets-Tan RGH, Alberich-Bayarri A, Fuster-Matanzo A, Ranschaert ER. A deep learning-based application for COVID-19 diagnosis on CT: The Imaging COVID-19 AI initiative. PLoS One 2023; 18:e0285121. [PMID: 37130128 PMCID: PMC10153726 DOI: 10.1371/journal.pone.0285121] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 04/15/2023] [Indexed: 05/03/2023] Open
Abstract
BACKGROUND Recently, artificial intelligence (AI)-based applications for chest imaging have emerged as potential tools to assist clinicians in the diagnosis and management of patients with coronavirus disease 2019 (COVID-19). OBJECTIVES To develop a deep learning-based clinical decision support system for automatic diagnosis of COVID-19 on chest CT scans. Secondarily, to develop a complementary segmentation tool to assess the extent of lung involvement and measure disease severity. METHODS The Imaging COVID-19 AI initiative was formed to conduct a retrospective multicentre cohort study including 20 institutions from seven different European countries. Patients with suspected or known COVID-19 who underwent a chest CT were included. The dataset was split at the institution level to allow external evaluation. Data annotation was performed by 34 radiologists/radiology residents and included quality control measures. A multi-class classification model was created using a custom 3D convolutional neural network. For the segmentation task, a UNet-like architecture with a Residual Network (ResNet-34) backbone was selected. RESULTS A total of 2,802 CT scans were included (2,667 unique patients, mean [standard deviation] age = 64.6 [16.2] years, male/female ratio 1.3:1). The distribution of classes (COVID-19/Other type of pulmonary infection/No imaging signs of infection) was 1,490 (53.2%), 402 (14.3%), and 910 (32.5%), respectively. On the external test dataset, the diagnostic multiclassification model yielded high micro-average and macro-average AUC values (0.93 and 0.91, respectively). The model provided the likelihood of COVID-19 vs other cases with a sensitivity of 87% and a specificity of 94%. The segmentation performance was moderate, with a Dice similarity coefficient (DSC) of 0.59. An imaging analysis pipeline was developed that returned a quantitative report to the user. CONCLUSION We developed a deep learning-based clinical decision support system that could become an efficient concurrent reading tool to assist clinicians, utilising a newly created European dataset including more than 2,800 CT scans.
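A short sketch of how the reported micro- and macro-average AUCs for a three-class problem (COVID-19 / other pulmonary infection / no signs of infection) can be computed with scikit-learn; the label and probability arrays below are randomly generated for illustration, not the study's predictions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)

# Illustrative ground-truth labels (0/1/2) and predicted class probabilities
y_true = rng.integers(0, 3, size=200)
y_prob = rng.dirichlet(np.ones(3), size=200)     # each row sums to 1

y_bin = label_binarize(y_true, classes=[0, 1, 2])            # one-hot ground truth
macro_auc = roc_auc_score(y_bin, y_prob, average="macro")    # mean of per-class AUCs
micro_auc = roc_auc_score(y_bin, y_prob, average="micro")    # pool all class/sample decisions
print(f"macro AUC={macro_auc:.2f}, micro AUC={micro_auc:.2f}")
```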
Affiliation(s)
- Laurens Topff
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
| | - Jacob J Visser
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
| | - Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Julien Guiot
- Department of Pneumology, University Hospital of Liège (CHU Liège), Liège, Belgium
| | - Regina G H Beets-Tan
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
| | - Erik R Ranschaert
- Department of Radiology, St. Nikolaus Hospital, Eupen, Belgium
- Ghent University, Ghent, Belgium
| |
|