1
Meijerink LM, Dunias ZS, Leeuwenberg AM, de Hond AAH, Jenkins DA, Martin GP, Sperrin M, Peek N, Spijker R, Hooft L, Moons KGM, van Smeden M, Schuit E. Updating methods for artificial intelligence-based clinical prediction models: a scoping review. J Clin Epidemiol 2025; 178:111636. [PMID: 39662644 DOI: 10.1016/j.jclinepi.2024.111636]
Abstract
OBJECTIVES To give an overview of methods for updating artificial intelligence (AI)-based clinical prediction models based on new data. STUDY DESIGN AND SETTING We comprehensively searched Scopus and Embase up to August 2022 for articles that addressed developments, descriptions, or evaluations of prediction model updating methods. We specifically focused on articles in the medical domain involving AI-based prediction models that were updated based on new data, excluding regression-based updating methods as these have been extensively discussed elsewhere. We categorized and described the identified methods used to update the AI-based prediction model as well as the use cases in which they were used. RESULTS We included 78 articles. The majority of the included articles discussed updating of neural network methods (93.6%) with medical images as input data (65.4%). In many articles (51.3%), existing pretrained models for broad tasks were updated to perform specialized clinical tasks. Other common reasons for model updating were to address changes in the data over time and cross-center differences; however, more unique use cases were also identified, such as updating a model from a broad population to a specific individual. We categorized the identified model updating methods into four categories: neural network-specific methods (described in 92.3% of the articles), ensemble-specific methods (2.5%), model-agnostic methods (9.0%), and other (1.3%). Variations of neural network-specific methods are further categorized based on the following: (1) the part of the original neural network that is kept, (2) whether and how the original neural network is extended with new parameters, and (3) to what extent the original neural network parameters are adjusted to the new data.
The most frequently occurring method (n = 30) involved selecting the first layer(s) of an existing neural network, appending new, randomly initialized layers, and then optimizing the entire neural network. CONCLUSION We identified many ways to adjust or update AI-based prediction models based on new data, within a large variety of use cases. Updating methods for AI-based prediction models other than neural networks (eg, random forest) appear to be underexplored in clinical prediction research. PLAIN LANGUAGE SUMMARY AI-based prediction models are increasingly used in health care, helping clinicians with diagnosing diseases, guiding treatment decisions, and informing patients. However, these prediction models do not always work well when applied to hospitals, patient populations, or times different from those used to develop the models. Developing new models for every situation is neither practical nor desired, as it wastes resources, time, and existing knowledge. A more efficient approach is to adjust existing models to new contexts ('updating'), but there is limited guidance on how to do this for AI-based clinical prediction models. To address this, we reviewed 78 studies in detail to understand how researchers are currently updating AI-based clinical prediction models, and the types of situations in which these updating methods are used. Our findings provide a comprehensive overview of the available methods to update existing models. This is intended to serve as guidance and inspiration for researchers. Ultimately, this can lead to better reuse of existing models and improve the quality and efficiency of AI-based prediction models in health care.
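The most frequent updating pattern described above (keep the first layer(s) of an existing network, append new randomly initialized layers, then optimize the entire network) can be sketched with a toy numerical example. The tiny two-layer network, synthetic data, and hyperparameters below are illustrative, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" first layer, as if taken from an existing model.
W1_pretrained = rng.normal(size=(4, 8)) * 0.5

def build_updated_model(W1_old, n_out):
    """Keep the first layer; append a new, randomly initialized output layer."""
    W2_new = rng.normal(size=(W1_old.shape[1], n_out)) * 0.1
    return [W1_old.copy(), W2_new]

def forward(params, X):
    W1, W2 = params
    h = np.maximum(X @ W1, 0.0)   # ReLU hidden layer kept from the old model
    return h @ W2                 # new task-specific head

def finetune(params, X, y, lr=0.05, steps=200):
    """Optimize the ENTIRE network (old + new parameters) on the new data."""
    W1, W2 = params
    for _ in range(steps):
        h_pre = X @ W1
        h = np.maximum(h_pre, 0.0)
        err = h @ W2 - y                      # d(MSE)/d(pred), up to a constant
        gW2 = h.T @ err / len(X)
        dh = (err @ W2.T) * (h_pre > 0)
        gW1 = X.T @ dh / len(X)
        W1 -= lr * gW1                        # the kept layer is also adjusted
        W2 -= lr * gW2
    return [W1, W2]

# Toy "new data" standing in for the specialized clinical task.
X = rng.normal(size=(64, 4))
y = (X[:, :1] > 0).astype(float)

params = build_updated_model(W1_pretrained, n_out=1)
loss_before = np.mean((forward(params, X) - y) ** 2)
params = finetune(params, X, y)
loss_after = np.mean((forward(params, X) - y) ** 2)
```

The same three design choices the review uses to categorize these methods appear directly in the sketch: which parameters are kept (`W1_pretrained`), what is appended (`W2_new`), and how much of the original network is re-optimized (here, all of it).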
Affiliation(s)
- Lotta M Meijerink
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
- Zoë S Dunias
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Artuur M Leeuwenberg
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Anne A H de Hond
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- David A Jenkins
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Glen P Martin
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Matthew Sperrin
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Niels Peek
- Department of Public Health and Primary Care, The Healthcare Improvement Studies Institute, University of Cambridge, Cambridge, United Kingdom
- René Spijker
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Lotty Hooft
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Ewoud Schuit
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
2
Momeni Pour Z, Beheshti Shirazi AA. Identifying COVID-19-Infected Segments in Lung CT Scan Through Two Innovative Artificial Intelligence-Based Transformer Models. Arch Acad Emerg Med 2024; 13:e21. [PMID: 39958958 PMCID: PMC11829223 DOI: 10.22037/aaemj.v13i1.2515]
Abstract
Introduction Automatic systems based on artificial intelligence (AI) algorithms have made significant advancements across various domains, most notably in the field of medicine. This study introduces a novel approach for identifying COVID-19-infected regions in lung computed tomography (CT) scans through the development of two innovative models. Methods In this study, we used the Squeeze and Excitation based UNet TRansformers (SE-UNETR) and the Squeeze and Excitation based High-Quality Resolution Swin Transformer Network (SE-HQRSTNet) to develop two three-dimensional segmentation networks for identifying COVID-19-infected regions in lung CT scans. The SE-UNETR model is structured as a 3D UNet architecture with an encoder component built on Vision Transformers (ViTs). This model processes 3D patches directly as input and learns sequential representations of the volumetric data. The encoder connects to the decoder using skip connections, ultimately producing the final semantic segmentation output. Conversely, the SE-HQRSTNet model incorporates High-Resolution Networks (HRNet), Swin Transformer modules, and Squeeze and Excitation (SE) blocks. This architecture is designed to generate features at multiple resolutions, utilizing Multi-Resolution Feature Fusion (MRFF) blocks to effectively integrate semantic features across various scales. The proposed networks were evaluated using 5-fold cross-validation, along with data augmentation techniques, on the COVID-19-CT-Seg and MosMed datasets. Results Experimental results demonstrate that the Dice value for the infection masks within the COVID-19-CT-Seg dataset improved by 3.81% and 4.84% with the SE-UNETR and SE-HQRSTNet models, respectively, compared to previously reported work. Furthermore, the Dice value for the MosMed dataset increased from 66.8% to 69.35% and 70.89% for the SE-UNETR and SE-HQRSTNet models, respectively.
Conclusion These improvements indicate that the proposed models exhibit superior efficiency and performance relative to existing methodologies.
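The Squeeze-and-Excitation blocks that both proposed networks build on can be sketched in a few lines. This is a generic SE block (squeeze by global average pooling, excite by a small bottleneck MLP with sigmoid gating), not the authors' exact implementation; channel count, reduction ratio, and weights below are illustrative:

```python
import numpy as np

def squeeze_excite(feature_map, W1, W2):
    """Squeeze-and-Excitation channel recalibration for a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excite:  bottleneck FC + ReLU, then FC + sigmoid, yielding one
             scale factor per channel that rescales the input.
    """
    z = feature_map.mean(axis=(1, 2))                 # squeeze: (C,)
    s = np.maximum(z @ W1, 0.0)                       # bottleneck FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ W2)))               # FC + sigmoid: (C,)
    return feature_map * s[:, None, None]             # channel-wise rescale

rng = np.random.default_rng(1)
C, r = 8, 2                                           # channels, reduction ratio
W1 = rng.normal(size=(C, C // r)) * 0.1
W2 = rng.normal(size=(C // r, C)) * 0.1
fmap = rng.normal(size=(C, 5, 5))
out = squeeze_excite(fmap, W1, W2)
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1): informative channels are preserved and less useful ones are suppressed.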
Affiliation(s)
- Zeinab Momeni Pour
- Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
3
Liu K, Zhang J. Cost-efficient and glaucoma-specifical model by exploiting normal OCT images with knowledge transfer learning. Biomed Opt Express 2023; 14:6151-6171. [PMID: 38420316 PMCID: PMC10898582 DOI: 10.1364/boe.500917]
Abstract
Monitoring the progression of glaucoma is crucial for preventing further vision loss. However, deep learning-based models emphasize early glaucoma detection, resulting in a significant performance gap for glaucoma-confirmed subjects. Moreover, developing a fully-supervised model suffers from insufficient annotated glaucoma datasets. Currently, abundant and low-cost normal OCT images with pixel-level annotations can serve as valuable resources, but effectively transferring shared knowledge from normal datasets is a challenge. To alleviate this issue, we propose a knowledge transfer learning model that exploits shared knowledge from low-cost and abundant annotated normal OCT images by explicitly establishing the relationship between the normal domain and the glaucoma domain. Specifically, we directly introduce glaucoma domain information into the training stage through a three-step adversarial-based strategy. Additionally, our proposed model exploits shared features at different levels in both the output space and the encoding space, with a suitable output size, via a multi-level strategy. We have collected and collated a dataset called the TongRen OCT glaucoma dataset, including pixel-level annotated glaucoma OCT images and diagnostic information. The results on this dataset demonstrate that our proposed model outperforms the unsupervised model and the mixed training strategy, achieving increases of 5.28% and 5.77% in mIoU, respectively. Moreover, our proposed model narrows the performance gap to the fully-supervised model to only 1.01% in mIoU. Therefore, our proposed model can serve as a valuable tool for extracting glaucoma-related features, facilitating the tracking of glaucoma progression.
Affiliation(s)
- Kai Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Department of Computer Science, City University of Hong Kong, Hong Kong, 98121, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Hefei Innovation Research Institute, Beihang University, Hefei, 230012, China
4
Lyu F, Ye M, Yip TCF, Wong GLH, Yuen PC. Local Style Transfer via Latent Space Manipulation for Cross-Disease Lesion Segmentation. IEEE J Biomed Health Inform 2023; PP:273-284. [PMID: 37883256 DOI: 10.1109/jbhi.2023.3327726]
Abstract
Automatic lesion segmentation is important for assisting doctors in the diagnostic process. Recent deep learning approaches rely heavily on large-scale datasets, which are difficult to obtain in many clinical applications. Leveraging external labelled datasets is an effective solution to the problem of insufficient training data. In this paper, we propose a new framework, LatenTrans, to utilize existing datasets to boost the performance of lesion segmentation in extremely low data regimes. LatenTrans translates non-target lesions into target-like lesions and expands the training dataset with target-like data for better performance. Images are first projected to the latent space via aligned style-based generative models, and rich lesion semantics are encoded using the latent codes. A novel consistency-aware latent code manipulation module is proposed to enable high-quality local style transfer from non-target lesions to target-like lesions while preserving other parts. Moreover, we propose a new metric, Normalized Latent Distance, to address the question of how to select an adequate source dataset from various existing datasets for knowledge transfer. Extensive experiments are conducted on segmenting lung and brain lesions, and the results demonstrate that our proposed LatenTrans is superior to existing methods for cross-disease lesion segmentation.
5
Saha S, Dutta S, Goswami B, Nandi D. ADU-Net: An Attention Dense U-Net based deep supervised DNN for automated lesion segmentation of COVID-19 from chest CT images. Biomed Signal Process Control 2023; 85:104974. [PMID: 37122956 PMCID: PMC10121143 DOI: 10.1016/j.bspc.2023.104974]
Abstract
An automatic method for qualitative and quantitative evaluation of chest Computed Tomography (CT) images is essential for diagnosing COVID-19 patients. We aim to develop an automated COVID-19 prediction framework using deep learning. We put forth a novel Deep Neural Network (DNN) composed of an attention-based dense U-Net with deep supervision for COVID-19 lung lesion segmentation from chest CT images. We incorporate dense U-Net where convolution kernel size 5×5 is used instead of 3×3. The dense and transition blocks are introduced to implement a densely connected network on each encoder level. Also, the attention mechanism is applied between the encoder, skip connection, and decoder. These are used to keep both the high and low-level features efficiently. The deep supervision mechanism creates secondary segmentation maps from the features. Deep supervision combines secondary supervision maps from various resolution levels and produces a better final segmentation map. The trained artificial DNN model takes the test data at its input and generates a prediction output for COVID-19 lesion segmentation. The proposed model has been applied to the MedSeg COVID-19 chest CT segmentation dataset. Data pre-processing methods help the training process and improve performance. We compare the performance of the proposed DNN model with state-of-the-art models by computing the well-known metrics: dice coefficient, Jaccard coefficient, accuracy, specificity, sensitivity, and precision. As a result, the proposed model outperforms the state-of-the-art models. This new model may be considered an efficient automated screening system for COVID-19 diagnosis and can potentially improve patient health care and management system.
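The deep supervision step in the abstract (secondary segmentation maps from several resolution levels combined into a better final map) can be illustrated with a minimal fusion rule. Averaging upsampled side outputs is one common choice; the paper's exact combination may differ, and the maps below are synthetic:

```python
import numpy as np

def upsample_nearest(m, factor):
    """Nearest-neighbour upsampling of a 2D map by an integer factor."""
    return np.kron(m, np.ones((factor, factor)))

def combine_deep_supervision(maps, target_size):
    """Fuse secondary segmentation maps from several resolution levels.

    Each side output is upsampled to the output resolution, then the
    results are averaged into one final probability map.
    """
    resized = [upsample_nearest(m, target_size // m.shape[0]) for m in maps]
    return np.mean(resized, axis=0)

rng = np.random.default_rng(2)
# Hypothetical side outputs at 1/4, 1/2 and full resolution (values in [0, 1]).
side_outputs = [rng.random((4, 4)), rng.random((8, 8)), rng.random((16, 16))]
final_map = combine_deep_supervision(side_outputs, target_size=16)
```

During training, each side output would also receive its own loss term, which is what makes the supervision "deep": gradients reach the intermediate decoder levels directly.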
Affiliation(s)
- Sanjib Saha
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Subhadeep Dutta
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Biswarup Goswami
- Department of Respiratory Medicine, Health and Family Welfare, Government of West Bengal, Kolkata, 700091, West Bengal, India
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
6
Ma Y, Zhang Y, Chen L, Jiang Q, Wei B. Dual attention fusion UNet for COVID-19 lesion segmentation from CT images. J Xray Sci Technol 2023:XST230001. [PMID: 37092210 DOI: 10.3233/xst-230001]
Abstract
BACKGROUND Chest CT scan is an effective way to detect and diagnose COVID-19 infection. However, features of COVID-19 infection in chest CT images are very complex and heterogeneous, which makes segmentation of COVID-19 lesions from CT images quite challenging. OBJECTIVE To overcome this challenge, this study proposes and tests an end-to-end deep learning method called dual attention fusion UNet (DAF-UNet). METHODS The proposed DAF-UNet improves the typical UNet into an advanced architecture. Dense-connected convolution is adopted to replace the plain convolution operation. A mixture of average-pooling and max-pooling acts as the down-sampling in the encoder. Bridge-connected layers, including convolution, batch normalization, and leaky rectified linear unit (leaky ReLU) activation, serve as the skip connections between the encoder and decoder to bridge the semantic gap. A multiscale pyramid pooling module acts as the bottleneck to fit the complex features of COVID-19 lesions. Furthermore, dual attention feature (DAF) fusion, containing channel and position attention, follows the improved UNet to learn the long-range contextual features of COVID-19 and further enhance the capacity of the proposed DAF-UNet. The proposed model is first pre-trained on a pseudo-label dataset (generated by Inf-Net) containing many samples, then fine-tuned on a standard annotation dataset (provided by the Italian Society of Medical and Interventional Radiology) with high-quality but limited samples to improve the performance of COVID-19 lesion segmentation on chest CT images. RESULTS The Dice coefficient and sensitivity are 0.778 and 0.798, respectively. The proposed DAF-UNet achieves higher scores than popular models (Att-UNet, Dense-UNet, Inf-Net, COPLE-Net) tested on the same dataset.
CONCLUSION The study demonstrates that the proposed DAF-UNet achieves superior performance for precisely segmenting COVID-19 lesions from chest CT scans compared with the state-of-the-art approaches. Thus, the DAF-UNet has promising potential for assisting COVID-19 disease screening and detection.
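The channel- and position-attention pair in the DAF fusion step follows a well-known pattern: position attention re-expresses each spatial location as a weighted sum of all locations, and channel attention does the same across channels. The sketch below is a generic self-attention version of that idea, not the authors' DAF module; shapes and weights are illustrative:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(X):
    """Spatial self-attention over a (C, N) feature map with N = H*W
    flattened positions: captures long-range spatial dependencies."""
    attn = softmax(X.T @ X, axis=-1)        # (N, N) position affinities
    return X @ attn.T

def channel_attention(X):
    """Channel self-attention: each channel becomes a weighted sum of
    all channels, modelling inter-channel dependencies."""
    attn = softmax(X @ X.T, axis=-1)        # (C, C) channel affinities
    return attn @ X

rng = np.random.default_rng(4)
C, H, W = 4, 3, 3
feat = rng.normal(size=(C, H * W))          # toy feature map, flattened
fused = position_attention(feat) + channel_attention(feat)  # dual-attention fusion
```

Summing the two attention outputs is one simple fusion rule; learned projections before the affinity products are usually added in practice.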
Affiliation(s)
- Yinjin Ma
- School of Data Science, Tongren University, Tongren, China
- Lin Chen
- School of Data Science, Tongren University, Tongren, China
- Qiang Jiang
- Tongren City People's Hospital, Tongren, China
- Biao Wei
- Key Laboratory of Optoelectronic Technology and Systems, Ministry of Education, Chongqing University, Chongqing, China
7
Hasan MM, Islam MU, Sadeq MJ, Fung WK, Uddin J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors (Basel) 2023; 23:527. [PMID: 36617124 PMCID: PMC9824505 DOI: 10.3390/s23010527]
Abstract
Artificial intelligence has significantly enhanced the research paradigm and spectrum with a substantiated promise of continuous applicability in the real world domain. Artificial intelligence, the driving force of the current technological revolution, has been used in many frontiers, including education, security, gaming, finance, robotics, autonomous systems, entertainment, and most importantly the healthcare sector. With the rise of the COVID-19 pandemic, several prediction and detection methods using artificial intelligence have been employed to understand, forecast, handle, and curtail the ensuing threats. In this study, the most recent related publications, methodologies and medical reports were investigated with the purpose of studying artificial intelligence's role in the pandemic. This study presents a comprehensive review of artificial intelligence with specific attention to machine learning, deep learning, image processing, object detection, image segmentation, and few-shot learning studies that were utilized in several tasks related to COVID-19. In particular, genetic analysis, medical image analysis, clinical data analysis, sound analysis, biomedical data classification, socio-demographic data analysis, anomaly detection, health monitoring, personal protective equipment (PPE) observation, social control, and COVID-19 patients' mortality risk approaches were used in this study to forecast the threatening factors of COVID-19. This study demonstrates that artificial-intelligence-based algorithms integrated into Internet of Things wearable devices were quite effective and efficient in COVID-19 detection and forecasting insights which were actionable through wide usage. The results produced by the study prove that artificial intelligence is a promising arena of research that can be applied for disease prognosis, disease forecasting, drug discovery, and to the development of the healthcare sector on a global scale. 
We show that artificial intelligence played a significant role in helping to fight against COVID-19, and the insightful knowledge provided here could be extremely beneficial for practitioners and research experts in the healthcare domain seeking to implement artificial-intelligence-based systems to curb the next pandemic or healthcare disaster.
Affiliation(s)
- Md. Mahadi Hasan
- Department of Computer Science and Engineering, Asian University of Bangladesh, Ashulia 1349, Bangladesh
- Muhammad Usama Islam
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
- Muhammad Jafar Sadeq
- Department of Computer Science and Engineering, Asian University of Bangladesh, Ashulia 1349, Bangladesh
- Wai-Keung Fung
- Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
- Jasim Uddin
- Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
8
Han K, Wang J, Zou Y, Zhang Y, Zhou L, Yin Y. Association between emphysema and other pulmonary computed tomography patterns in COVID-19 pneumonia. J Med Virol 2023; 95:e28293. [PMID: 36358023 PMCID: PMC9828029 DOI: 10.1002/jmv.28293]
Abstract
To evaluate the chest computed tomography (CT) findings of patients with Corona Virus Disease 2019 (COVID-19) on admission to hospital, and to correlate CT pulmonary infiltrate involvement with findings of emphysema. We analyzed the different infiltrates of COVID-19 pneumonia using emphysema as the grade of pneumonia. We applied open-source software (3D Slicer) to model the lungs and lesions of 66 retrospectively included patients with COVID-19. We divided the 66 COVID-19 patients into two groups: (A) 12 patients with less than 10% emphysema in the low-attenuation area below -950 Hounsfield units (%LAA-950), and (B) 54 patients with greater than or equal to 10% emphysema in %LAA-950. Imaging findings were assessed retrospectively by two authors, and pulmonary infiltrate and emphysema volumes were then measured on CT using 3D Slicer. Differences in pulmonary infiltrates, emphysema, collapsed lung, and affected lung between patients were assessed by the Kruskal-Wallis and Wilcoxon tests, respectively. Statistical significance was set at p < 0.05. The left lung was the most frequently involved region in COVID-19 (Group A: affected left lung 20.00 vs. affected right lung 18.50; Group B: affected left lung 13.00 vs. affected right lung 11.50). In addition, collapse was also more severe in the left lung than in the right (Group A: collapsed left lung 4.95 vs. collapsed right lung 4.65; Group B: collapsed left lung 3.65 vs. collapsed right lung 3.15). There were significant differences between Group A and Group B in terms of the percentage of CT involvement in each lung region (p < 0.05), except for the inflated affected total lung (p = 0.152). The median percentage of collapsed left lung in Group A was 20.00 (14.00-30.00), the right lung 18.50 (13.00-30.25), and the total 19.00 (13.00-30.00), while in Group B the median percentage of collapsed left lung was 13.00 (10.00-14.75), the right lung 11.50 (10.00-15.00), and the total 12.50 (10.00-15.00).
The percentage of affected left lung is an independent predictor of emphysema in COVID-19 patients. Attention should be focused on the left lung, as it is more affected. Patients with lower levels of emphysema may have more collapsed segments, and more collapsed segments may lead to more severe clinical features.
Affiliation(s)
- Ke Han
- Department of Cardiothoracic Vascular Surgery, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Jing Wang
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yulin Zou
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Department of Dermatology, Jinzhou Medical University Graduate Training Base, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yuxin Zhang
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Lin Zhou
- Department of Medical Imaging Center, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yiping Yin
- Department of Pulmonary & Critical Care Medicine, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
9
Chen H, Jiang Y, Ko H, Loew M. A teacher-student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023; 79:104250. [PMID: 36188130 PMCID: PMC9510070 DOI: 10.1016/j.bspc.2022.104250]
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer from a reduction in performance when applied to unseen COVID-19 images during the testing phase, caused by the difference in image intensity and object region distribution between the training set and the test set. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images to improve the generalization ability of the segmentation network for use with COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we designed a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrated that, even without access to the annotations of the COVID-19 CT images during the training phase, the proposed network can achieve state-of-the-art segmentation performance on COVID-19 infection.
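The Fourier-based style transfer underlying such augmentation can be sketched compactly: keep the source image's phase (which carries structure) and replace the low-frequency band of its amplitude spectrum (which carries global intensity style) with the target's. This is a minimal sketch in the spirit of the module described above; the authors' exact formulation may differ, and the images and band width `beta` below are synthetic stand-ins:

```python
import numpy as np

def fourier_style_transfer(source, target, beta=0.1):
    """Transfer the low-frequency amplitude of `target` onto `source`.

    The source phase is preserved; only the central (low-frequency)
    band of the amplitude spectrum is swapped, moving global intensity
    style without destroying anatomical structure.
    """
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    h, w = source.shape
    b = int(min(h, w) * beta)                 # half-width of the swapped band
    cy, cx = h // 2, w // 2
    amp_s[cy - b:cy + b, cx - b:cx + b] = amp_t[cy - b:cy + b, cx - b:cx + b]

    mixed = amp_s * np.exp(1j * pha_s)        # recombine amplitude and phase
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(3)
lung_cancer_slice = rng.random((32, 32))      # stand-in for an annotated source slice
covid_slice = rng.random((32, 32)) + 2.0      # stand-in with a different intensity style
augmented = fourier_style_transfer(lung_cancer_slice, covid_slice)
```

Because the swapped band includes the DC component, the augmented image inherits the target's mean intensity while keeping the source's spatial layout.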
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew
- Biomedical Engineering, George Washington University, Washington D.C., USA
10
Liu S, Tang X, Cai T, Zhang Y, Wang C. COVID-19 CT image segmentation based on improved Res2Net. Med Phys 2022; 49:7583-7595. [PMID: 35916116 PMCID: PMC9538682 DOI: 10.1002/mp.15882]
Abstract
PURPOSE Corona virus disease 2019 (COVID-19) is threatening the health of people globally and bringing great losses to our economy and society. However, computed tomography (CT) image segmentation can help clinicians quickly identify COVID-19-infected regions. Accurate segmentation of COVID-19 infection areas can contribute to screening confirmed cases. METHODS We designed a segmentation network for COVID-19-infected regions in CT images. To begin with, multilayered features were extracted by the backbone network of Res2Net. Subsequently, edge features of the infected regions in the low-level feature f2 were extracted by the edge attention module. Second, we carefully designed the structure of the attention position module (APM) to extract the high-level feature f5 and detect infected regions. Finally, we proposed a context exploration module consisting of two parallel explore blocks, which can remove some false positives and false negatives to reach more accurate segmentation results. RESULTS Experimental results show that, on the public COVID-19 dataset, the Dice, sensitivity, specificity, $S_\alpha$, $E_\emptyset^{mean}$, and mean absolute error (MAE) of our method are 0.755, 0.751, 0.959, 0.795, 0.919, and 0.060, respectively. Compared with the latest COVID-19 segmentation model Inf-Net, the Dice similarity coefficient of our model has increased by 7.3% and the sensitivity (Sen) by 5.9%, while the MAE has dropped by 2.2%. CONCLUSIONS Our method performs well on COVID-19 CT image segmentation. We also find that our method is portable enough to be suitable for various current popular networks. In short, our method can help screen people infected with COVID-19 effectively and save the labor of clinicians and radiologists.
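Several of the scalar metrics reported above (Dice, sensitivity, specificity, MAE) reduce to simple confusion-matrix counts on binary masks. A standard formulation, not code from the paper, with a hand-checkable toy example:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice, sensitivity, specificity and MAE for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)          # lesion pixels correctly found
    fp = np.sum(pred & ~gt)         # background marked as lesion
    fn = np.sum(~pred & gt)         # lesion pixels missed
    tn = np.sum(~pred & ~gt)        # background correctly left alone
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # For binary masks, MAE is simply the fraction of mismatched pixels.
        "mae": np.mean(pred != gt),
    }

# Toy 4x4 masks: the prediction misses one lesion pixel and adds one false positive.
gt = np.array([[0, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
m = segmentation_metrics(pred, gt)
```

Here tp = 3, fp = 1, fn = 1, tn = 11, so Dice = 6/8 = 0.75 and MAE = 2/16 = 0.125. The structure measure $S_\alpha$ and enhanced-alignment measure $E_\emptyset^{mean}$ are region/alignment-aware metrics that do not reduce to these counts and are omitted here.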
Collapse
Affiliation(s)
- Shangwang Liu
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Engineering Lab of Intelligence Business & Internet of Things, Xinxiang, Henan, China
- Xiufang Tang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Tongbo Cai
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Yangyang Zhang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Changgeng Wang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
11
Polat H. A modified DeepLabV3+ based semantic segmentation of chest computed tomography images for COVID-19 lung infections. Int J Imaging Syst Technol 2022; 32:1481-1495. [PMID: 35941930] [PMCID: PMC9349869] [DOI: 10.1002/ima.22772] [Received: 02/05/2022; Revised: 04/19/2022; Accepted: 05/23/2022]
Abstract
Coronavirus disease (COVID-19) affects the lives of billions of people worldwide and has destructive impacts on daily life routines, the global economy, and public health. Early diagnosis and quantification of COVID-19 infection have a vital role in improving treatment outcomes and interrupting transmission. For this purpose, advances in medical imaging techniques like computed tomography (CT) scans offer great potential as an alternative to RT-PCR assay. CT scans enable a better understanding of infection morphology and tracking of lesion boundaries. Since manual analysis of CT can be extremely tedious and time-consuming, robust automated image segmentation is necessary for clinical diagnosis and decision support. This paper proposes an efficient segmentation framework based on the modified DeepLabV3+ using lower atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module. The lower atrous rates make the receptive fields smaller, allowing the network to capture intricate morphological details. The encoder part of the framework utilizes a pre-trained residual network based on dilated convolutions for optimum resolution of feature maps. In order to evaluate the robustness of the modified model, a comprehensive comparison with other state-of-the-art segmentation methods was also performed. The experiments were carried out using a fivefold cross-validation technique on a publicly available database containing 100 single-slice CT scans from >40 patients with COVID-19. The modified DeepLabV3+ achieved good segmentation performance using around 43.9 M parameters. The lower atrous rates in the ASPP module improved segmentation performance. After fivefold cross-validation, the framework achieved an overall Dice similarity coefficient score of 0.881. The results demonstrate that several minor modifications to the DeepLabV3+ pipeline can provide robust solutions for improving segmentation performance and hardware implementation.
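The effect of lowering the atrous rates in the ASPP module follows from the standard effective-kernel-size formula for dilated convolutions, k_eff = k + (k-1)(r-1); a sketch with DeepLabV3+'s commonly used default rates (6, 12, 18) against a lowered set, which is illustrative only since the abstract does not list the exact modified rates:

```python
def effective_kernel(k, rate):
    """Effective spatial span of a k x k convolution kernel dilated by `rate`."""
    return k + (k - 1) * (rate - 1)

# Default ASPP rates (output stride 16) vs. an illustrative lowered set:
for rate in (6, 12, 18):
    print("rate", rate, "-> effective 3x3 kernel spans", effective_kernel(3, rate))
for rate in (2, 4, 6):
    print("rate", rate, "-> effective 3x3 kernel spans", effective_kernel(3, rate))
```

Smaller spans mean the branch aggregates context from a tighter neighbourhood, which is the stated motivation for capturing fine lesion morphology.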
Affiliation(s)
- Hasan Polat
- Department of Electrical and Energy, Bingol University, Bingöl, Turkey
12
Khalifa NEM, Manogaran G, Taha MHN, Loey M. A deep learning semantic segmentation architecture for COVID-19 lesions discovery in limited chest CT datasets. Expert Syst 2022; 39:e12742. [PMID: 34177038] [PMCID: PMC8209878] [DOI: 10.1111/exsy.12742] [Received: 03/04/2021; Revised: 04/28/2021; Accepted: 04/30/2021]
Abstract
During the COVID-19 epidemic, Computed Tomography (CT) is used to help in the diagnosis of patients. Most current studies on this subject rely on large, privately annotated datasets that are impractical to obtain from an organization, particularly while radiologists are fighting the coronavirus disease. It is challenging to compare these techniques since they were built on separate datasets, trained on different training sets, and tested using different metrics. In this research, a deep learning semantic segmentation architecture for COVID-19 lesion detection in limited chest CT datasets is presented. The proposed model architecture consists of encoder and decoder components. The encoder contains three convolution-and-pooling layers, while the decoder contains three deconvolution-and-upsampling layers. The dataset consists of 20 lung CT scans belonging to 20 patients from two data sources, totalling 3520 CT images with their labelled images. The dataset is split into 70% for the training phase and 30% for the testing phase. Images are pre-processed by resizing and normalization. Five experimental trials were conducted with different images selected for the training and testing phases in every trial. The proposed model achieves 0.993 global accuracy, and 0.987, 0.799, and 0.874 for weighted IoU, mean IoU, and mean BF score, respectively. Performance metrics such as precision, sensitivity, specificity, and F1 score strengthen the obtained results. The proposed model outperforms related works that use the same dataset in terms of performance and IoU metrics.
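The IoU-family metrics reported in this entry can be sketched as follows; `iou` and `mean_iou` are hypothetical helper names, and the flat toy label lists are illustrative:

```python
def iou(pred, gt, cls):
    """Intersection over union for one class label over flat label lists."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0  # empty class counts as perfect

def mean_iou(pred, gt, classes):
    """Unweighted average of per-class IoU (the 'mean IoU' above)."""
    return sum(iou(pred, gt, c) for c in classes) / len(classes)

pred = [0, 0, 1, 1, 1, 0]
gt   = [0, 0, 1, 1, 0, 0]
# class 1: inter 2, union 3 -> 2/3; class 0: inter 3, union 4 -> 3/4
```

Weighted IoU additionally weights each class by its pixel frequency rather than averaging uniformly.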
Affiliation(s)
- Nour Eldeen M. Khalifa
- Department of Information Technology, Faculty of Computers & Artificial Intelligence, Cairo University, Cairo, Egypt
- Gunasekaran Manogaran
- University of California, Davis, California, USA
- College of Information and Electrical Engineering, Asia University, Taichung, Taiwan
- Mohamed Hamed N. Taha
- Department of Information Technology, Faculty of Computers & Artificial Intelligence, Cairo University, Cairo, Egypt
- Mohamed Loey
- Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha, Egypt
13
Pandey SK, Bhandari AK, Singh H. A transfer learning based deep learning model to diagnose covid-19 CT scan images. Health Technol 2022; 12:845-866. [PMID: 35698586] [PMCID: PMC9177227] [DOI: 10.1007/s12553-022-00677-4] [Received: 03/22/2022; Accepted: 05/20/2022]
Abstract
To save human lives during pandemic conditions, we need an effective automated method to deal with the situation. When the available resources become insufficient to handle the patient load, a fast and reliable method is needed to analyse patient medical data with high efficiency and accuracy within time limitations. In this manuscript, an effective and efficient deep learning method is proposed for exact diagnosis of whether a patient is coronavirus disease-2019 (covid-19) positive or negative. To reach a correct diagnosis with high accuracy, we use pre-processed segmented images for deep learning analysis. In the first step, the X-ray or computed tomography (CT) image of a covid-19-infected person is analysed with various image segmentation schemes: simple thresholding at 0.3, simple thresholding at 0.6, multiple thresholding (between 26 and 230), and Otsu's algorithm. Comparative analysis of all these methods shows that Otsu's algorithm is a simple and optimal scheme for improving the segmented binary image from a diagnostic point of view. Otsu's segmentation scheme gives more precise values than the other methods on various image quality parameters such as accuracy, sensitivity, f-measure, precision, and specificity. For image classification we use the ResNet-50, MobileNet, and VGG-16 deep learning models, which give accuracies of 70.24%, 72.95% and 83.18%, respectively, with non-segmented CT scan images, and 75.08%, 80.12% and 99.28%, respectively, with Otsu-segmented CT scan images. In a comparative study we find that the VGG-16 model with CT scan images segmented by Otsu's method gives a very high accuracy of 99.28%.
On the basis of this diagnosis, an arterial blood gas (ABG) analysis is performed; from the diagnosis and the ABG report, the severity level of the patient can be decided, and according to this severity level proper treatment protocols can be followed immediately to save the patient's life. Compared with existing works, our deep learning based method reduces complexity, takes much less time, and has greater accuracy for exact diagnosis of coronavirus disease-2019 (covid-19).
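Otsu's algorithm, the segmentation scheme this study found optimal, picks the grey-level threshold that maximizes between-class variance of the intensity histogram; a minimal sketch on a toy bimodal pixel list (the values are illustrative, not the paper's data):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))  # sum of all intensities
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]            # background pixel count (intensity <= t)
        if w0 == 0:
            continue
        w1 = total - w0          # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                    # background mean
        mu1 = (sum_all - sum0) / w1        # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A clearly bimodal "image": dark background around 20-25, bright region around 200-210.
pixels = [20] * 50 + [25] * 30 + [200] * 15 + [210] * 5
t = otsu_threshold(pixels)
binary = [1 if v > t else 0 for v in pixels]  # binarized image
```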
Affiliation(s)
- Sanat Kumar Pandey
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Bihar, India
- Ashish Kumar Bhandari
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Bihar, India
- Himanshu Singh
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tiruchirappalli, India
14
Avola D, Bacciu A, Cinque L, Fagioli A, Marini MR, Taiello R. Study on transfer learning capabilities for pneumonia classification in chest-x-rays images. Comput Methods Programs Biomed 2022; 221:106833. [PMID: 35537296] [PMCID: PMC9033299] [DOI: 10.1016/j.cmpb.2022.106833] [Received: 10/15/2021; Revised: 04/12/2022; Accepted: 04/21/2022]
Abstract
BACKGROUND Over the last year, the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) and its variants have highlighted the importance of screening tools with high diagnostic accuracy for new illnesses such as COVID-19. In that regard, deep learning approaches have proven effective for pneumonia classification, especially when considering chest-x-rays images. However, this lung infection can also be caused by other viral, bacterial or fungal pathogens. Consequently, efforts are being poured toward distinguishing the infection source to help clinicians diagnose the correct disease origin. Following this tendency, this study further explores the effectiveness of established neural network architectures on the pneumonia classification task through the transfer learning paradigm. METHODOLOGY To present a comprehensive comparison, 12 well-known ImageNet pre-trained models were fine-tuned and used to discriminate among chest-x-rays of healthy people and those showing pneumonia symptoms derived from either a viral (i.e., generic or SARS-CoV-2) or bacterial source. Furthermore, since a common public collection distinguishing between such categories is currently not available, two distinct datasets of chest-x-rays images describing the aforementioned sources were combined and employed to evaluate the various architectures. RESULTS The experiments were performed using a total of 6330 images split between train, validation, and test sets. For all models, standard classification metrics were computed (e.g., precision, f1-score), and most architectures obtained significant performances, reaching, among the others, up to 84.46% average f1-score when discriminating the four identified classes.
Moreover, execution times, areas under the receiver operating characteristic (AUROC), confusion matrices, activation maps computed via the Grad-CAM algorithm, and additional experiments assessing the robustness of each model using only 50%, 20%, and 10% of the training set were also reported to present an informed discussion on the networks' classifications. CONCLUSION This paper examines the effectiveness of well-known architectures on a joint collection of chest-x-rays presenting pneumonia cases derived from either viral or bacterial sources, with particular attention to SARS-CoV-2 contagions for viral pathogens, demonstrating that existing architectures can effectively diagnose pneumonia sources and suggesting that the transfer learning paradigm could be a crucial asset in diagnosing future unknown illnesses.
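The "average f1-score" over the four classes corresponds to a macro-averaged F1; a minimal sketch with hypothetical class names and predictions (the labels below are illustrative, not the study's data):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

labels = ["healthy", "viral", "covid", "bacterial"]
y_true = ["healthy", "viral", "covid", "covid", "bacterial", "healthy"]
y_pred = ["healthy", "covid", "covid", "covid", "bacterial", "viral"]
score = macro_f1(y_true, y_pred, labels)
```

Macro averaging treats the four classes equally regardless of how many images each contributes, which matters when class sizes are imbalanced.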
Affiliation(s)
- Danilo Avola
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
- Andrea Bacciu
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
- Luigi Cinque
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
- Alessio Fagioli
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
- Marco Raoul Marini
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
- Riccardo Taiello
- Department of Computer Science, Sapienza University, Via Salaria 113, Rome 00185, Italy
15
Wang Y, Yang Q, Tian L, Zhou X, Rekik I, Huang H. HFCF-Net: A hybrid-feature cross fusion network for COVID-19 lesion segmentation from CT volumetric images. Med Phys 2022; 49:3797-3815. [PMID: 35301729] [PMCID: PMC9088496] [DOI: 10.1002/mp.15600] [Received: 07/19/2021; Revised: 02/16/2022; Accepted: 02/21/2022]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) spreads rapidly across the globe, seriously threatening the health of people all over the world. To reduce the diagnostic pressure of front-line doctors, an accurate and automatic lesion segmentation method is highly desirable in clinical practice. PURPOSE Many proposed two-dimensional (2D) methods for slice-based lesion segmentation cannot take full advantage of spatial information in the three-dimensional (3D) volume data, resulting in limited segmentation performance. Three-dimensional methods can utilize the spatial information but suffer from long training time and slow convergence speed. To solve these problems, we propose an end-to-end hybrid-feature cross fusion network (HFCF-Net) to fuse the 2D and 3D features at three scales for the accurate segmentation of COVID-19 lesions. METHODS The proposed HFCF-Net incorporates 2D and 3D subnets to extract features within and between slices effectively. Then the cross fusion module is designed to bridge 2D and 3D decoders at the same scale to fuse both types of features. The module consists of three cross fusion blocks, each of which contains a prior fusion path and a context fusion path to jointly learn better lesion representations. The former aims to explicitly provide the 3D subnet with lesion-related prior knowledge, and the latter utilizes the 3D context information as the attention guidance of the 2D subnet, which promotes the precise segmentation of the lesion regions. Furthermore, we explore an imbalance-robust adaptive learning loss function that includes image-level loss and pixel-level loss to tackle the problems caused by the apparent imbalance between the proportions of the lesion and non-lesion voxels, providing a learning strategy to dynamically adjust the learning focus between 2D and 3D branches during the training process for effective supervision.
RESULTS Extensive experiments conducted on a publicly available dataset demonstrate that the proposed segmentation network significantly outperforms some state-of-the-art methods for COVID-19 lesion segmentation, yielding a Dice similarity coefficient of 74.85%. The visual comparison of segmentation performance also proves the superiority of the proposed network in segmenting different-sized lesions. CONCLUSIONS In this paper, we propose a novel HFCF-Net for rapid and accurate COVID-19 lesion segmentation from chest computed tomography volume data. It innovatively fuses hybrid features in a cross manner for lesion segmentation, aiming to utilize the advantages of the 2D and 3D subnets to complement each other for enhancing the segmentation performance. Benefiting from the cross fusion mechanism, the proposed HFCF-Net can segment the lesions more accurately with the knowledge acquired from both subnets.
Affiliation(s)
- Yanting Wang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Qingyu Yang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Lixia Tian
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Xuezhong Zhou
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Islem Rekik
- BASIRA Laboratory, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey
- School of Science and Engineering, Computing, University of Dundee, Dundee, UK
- Huifang Huang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
16
Muhammad U, Hoque MZ, Oussalah M, Keskinarkaus A, Seppänen T, Sarder P. SAM: Self-augmentation mechanism for COVID-19 detection using chest X-ray images. Knowl Based Syst 2022; 241:108207. [PMID: 35068707] [PMCID: PMC8762871] [DOI: 10.1016/j.knosys.2022.108207] [Received: 08/12/2021; Revised: 01/07/2022; Accepted: 01/08/2022]
Abstract
COVID-19 is a rapidly spreading viral disease and has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated particularly in countries with weakened healthcare systems. Reverse transcription-polymerase chain reaction (RT-PCR) is currently the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19 mostly manifests as a lung infection after contact with the virus. Therefore, chest X-ray (i.e., radiography) and chest CT can be a surrogate in some countries where PCR is not readily available. This has prompted the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging for improving the accuracy of diagnosis. However, the performance remains limited due to the lack of representative X-ray images available in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism for data augmentation in the feature space rather than in the data space using reconstruction independent component analysis (RICA). Specifically, a unified architecture is proposed which contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, where the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM is used to classify the processed sequential information. We conducted experiments on three publicly available databases to show that the proposed approach achieves state-of-the-art results, with accuracies of 97%, 84% and 98%.
Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
Affiliation(s)
- Usman Muhammad
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Mourad Oussalah
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Medical Imaging, Physics, and Technology (MIPT), Faculty of Medicine, University of Oulu, Finland
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pinaki Sarder
- Department of Pathology and Anatomical Sciences, University at Buffalo, USA
17
Bartoli A, Fournel J, Maurin A, Marchi B, Habert P, Castelli M, Gaubert JY, Cortaredona S, Lagier JC, Million M, Raoult D, Ghattas B, Jacquier A. Value and prognostic impact of a deep learning segmentation model of COVID-19 lung lesions on low-dose chest CT. Res Diagn Interv Imaging 2022; 1:100003. [PMID: 37520010] [PMCID: PMC8939894] [DOI: 10.1016/j.redii.2022.100003] [Received: 11/04/2021; Revised: 03/02/2022; Accepted: 03/09/2022]
Abstract
Objectives 1) To develop a deep learning (DL) pipeline allowing quantification of COVID-19 pulmonary lesions on low-dose computed tomography (LDCT). 2) To assess the prognostic value of DL-driven lesion quantification. Methods This monocentric retrospective study included training and test datasets taken from 144 and 30 patients, respectively. The reference was the manual segmentation of 3 labels: normal lung, ground-glass opacity (GGO) and consolidation (Cons). Model performance was evaluated with technical metrics, disease volume and extent. Intra- and interobserver agreement were recorded. The prognostic value of DL-driven disease extent was assessed in 1621 distinct patients using C-statistics. The end point was a combined outcome defined as death, hospitalization >10 days, intensive care unit hospitalization or oxygen therapy. Results The Dice coefficients for lesion (GGO+Cons) segmentations were 0.75±0.08, exceeding the values for human interobserver (0.70±0.08; 0.70±0.10) and intraobserver measures (0.72±0.09). DL-driven lesion quantification had a stronger correlation with the reference than inter- or intraobserver measures. After stepwise selection and adjustment for clinical characteristics, quantification significantly increased the prognostic accuracy of the model (0.82 vs. 0.90; p<0.0001). Conclusions A DL-driven model can provide reproducible and accurate segmentation of COVID-19 lesions on LDCT. Automatic lesion quantification has independent prognostic value for the identification of high-risk patients.
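The prognostic accuracy in this entry is assessed with C-statistics; for a binary outcome the C-statistic reduces to the probability that an event case receives a higher predicted risk than a non-event case (ties count one half). A minimal sketch with illustrative risks and outcomes (not the study's data):

```python
def c_statistic(risks, outcomes):
    """C-statistic for a binary outcome: fraction of event/non-event pairs
    in which the event case got the higher predicted risk (ties count 0.5)."""
    events = [r for r, o in zip(risks, outcomes) if o == 1]
    non_events = [r for r, o in zip(risks, outcomes) if o == 0]
    pairs = len(events) * len(non_events)
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in non_events)
    return concordant / pairs

risks    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # model-predicted risks
outcomes = [1,   1,   0,   1,   0,   0]     # 1 = combined outcome occurred
cstat = c_statistic(risks, outcomes)
# events {0.9, 0.8, 0.4} vs non-events {0.7, 0.3, 0.2}: 8 of 9 pairs concordant
```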
Key Words
- ACE, angiotensin-converting enzyme
- Artificial intelligence
- BMI, body mass index
- CNN, convolutional neural network
- COVID-19
- COVID-19, coronavirus disease 2019
- CT-SS, chest tomography severity score
- Cons, consolidation
- DL, deep learning
- DSC, Dice similarity coefficient
- Deep learning
- Diagnostic imaging
- GGO, ground-glass opacity
- ICU, intensive care unit
- LDCT, low-dose computed tomography
- MAE, mean absolute error
- MVSF, mean volume similarity fraction
- Multidetector computed tomography
- ROC, receiver operating characteristic
Affiliation(s)
- Axel Bartoli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Joris Fournel
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Arnaud Maurin
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Baptiste Marchi
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Paul Habert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Maxime Castelli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Jean-Yves Gaubert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Sebastien Cortaredona
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, VITROME, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Jean-Christophe Lagier
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Matthieu Million
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Didier Raoult
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Badih Ghattas
- I2M - UMR CNRS 7373, Aix-Marseille University. CNRS, Centrale Marseille, 13453 Marseille, France
- Alexis Jacquier
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
18
Peng Y, Zhang Z, Tu H, Li X. Automatic Segmentation of Novel Coronavirus Pneumonia Lesions in CT Images Utilizing Deep-Supervised Ensemble Learning Network. Front Med (Lausanne) 2022; 8:755309. [PMID: 35047520] [PMCID: PMC8761973] [DOI: 10.3389/fmed.2021.755309] [Received: 08/08/2021; Accepted: 11/29/2021]
Abstract
Background: The novel coronavirus disease 2019 (COVID-19) has spread widely across the world, posing a huge threat to people's living environment. Objective: Under CT imaging, the structural features of COVID-19 lesions are complicated and vary greatly between cases. To accurately locate COVID-19 lesions and assist doctors in making the best diagnosis and treatment plan, a deep-supervised ensemble learning network is presented for COVID-19 lesion segmentation in CT images. Methods: Since a large number of COVID-19 CT images and their corresponding lesion annotations are difficult to obtain, a transfer learning strategy is employed to compensate for the shortcoming and alleviate the overfitting problem. Because a traditional single deep learning framework struggles to extract complicated and varied COVID-19 lesion features effectively, some lesions may go undetected. To overcome this problem, a deep-supervised ensemble learning network is presented that combines local and global features for COVID-19 lesion segmentation. Results: The performance of the proposed method was validated in experiments with a publicly available dataset. Compared with manual annotations, the proposed method acquired a high intersection over union (IoU) of 0.7279 and a low Hausdorff distance (H) of 92.4604. Conclusion: A deep-supervised ensemble learning network was presented for coronavirus pneumonia lesion segmentation in CT images. The effectiveness of the proposed method was verified by visual inspection and quantitative evaluation. Experimental results indicated that the proposed method performs well in COVID-19 lesion segmentation.
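The Hausdorff distance (H) reported in this entry measures the worst-case boundary disagreement between two point sets; a minimal sketch over toy 2D contours (the coordinates are illustrative):

```python
import math

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D point sets (e.g., lesion contours)."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

contour_pred = [(0, 0), (1, 0), (2, 0)]   # predicted lesion boundary points
contour_gt   = [(0, 1), (1, 1), (2, 4)]   # ground-truth boundary points
h = hausdorff(contour_pred, contour_gt)
```

Unlike the IoU, which averages over all pixels, the Hausdorff distance is driven entirely by the single worst-matched boundary point, so it penalizes isolated segmentation outliers.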
Affiliation(s)
- Yuanyuan Peng
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Zixu Zhang
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China
- Hongbin Tu
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China
- Technique Center, Hunan Great Wall Technology Information Co. Ltd., Changsha, China
- Xiong Li
- School of Software, East China Jiaotong University, Nanchang, China
19
Shiri I, Arabi H, Salimi Y, Sanaat A, Akhavanallaf A, Hajianfar G, Askari D, Moradi S, Mansouri Z, Pakbin M, Sandoughdaran S, Abdollahi H, Radmard AR, Rezaei‐Kalantari K, Ghelich Oghli M, Zaidi H. COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images. Int J Imaging Syst Technol 2022; 32:12-25. [PMID: 34898850] [PMCID: PMC8652855] [DOI: 10.1002/ima.22672] [Received: 06/10/2021; Revised: 09/18/2021; Accepted: 10/17/2021]
Abstract
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lung and lesions, respectively. The relative volume differences for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative error achieved for the lung for the range first-order feature (-6.95%) and the least axis length shape feature (8.68%) for lesions. We developed an automated DL-guided three-dimensional whole lung and infected region segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
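Dice-based training losses like the one used above are commonly written in a "soft" form over predicted probabilities; a generic sketch in plain Python (this is not the paper's exact "non-square" TensorFlow implementation, whose details the abstract does not give):

```python
def soft_dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.G| / (|P| + |G|), on flat predicted probabilities.
    `eps` guards against division by zero when both masks are empty."""
    inter = sum(p * t for p, t in zip(probs, targets))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)

# One correct pixel, one false positive out of four -> loss close to 1/3.
loss = soft_dice_loss([1.0, 1.0, 0.0, 0.0], [1, 0, 0, 0])
```

Because the loss is differentiable in the probabilities, it can be minimized directly by gradient descent, unlike the thresholded Dice metric itself.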
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Shakiba Moradi
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Masoumeh Pakbin
- Clinical Research Development Center, Qom University of Medical Sciences, Qom, Iran
- Saleh Sandoughdaran
- Men's Health and Reproductive Health Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamid Abdollahi
- Department of Radiologic Technology, Faculty of Allied Medicine, Kerman University of Medical Sciences, Kerman, Iran
- Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kiara Rezaei‐Kalantari
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
20
Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. PATTERN RECOGNITION AND IMAGE ANALYSIS 2021. [PMCID: PMC8711684 DOI: 10.1134/s1054661821040027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network (Faster R-CNN) detector with GoogLeNet as its backbone. GoogLeNet is simplified by removing a few inception modules before being used as the backbone of the detector network. The proposed framework detects several interstitial lung disease patterns without requiring lung field segmentation. It identifies the five most prevalent patterns: fibrosis, emphysema, consolidation, micronodules, and ground-glass opacity, as well as normal tissue. Five-fold cross-validation has been used to avoid bias and reduce overfitting. The framework's performance is measured in terms of F-score on the publicly available MedGIFT database, where it outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns using high-resolution computed tomography images.
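Five-fold cross-validation, as used here, partitions the data so that every sample serves exactly once for validation and four times for training. A minimal sketch of the index bookkeeping (a hypothetical helper, not the authors' code):

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Shuffle sample indices and yield five (train, validation) index pairs,
    each validation fold disjoint from its training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    splits = []
    for i, val in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        splits.append((train, val))
    return splits
```

Averaging the F-score across the five held-out folds gives a less biased performance estimate than a single train/test split on a dataset of this size.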
Affiliation(s)
- Abhishek Kumar
- School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara
- Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur
- Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu
- EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi
- Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India
21
Zhang Y, Liao Q, Yuan L, Zhu H, Xing J, Zhang J. Exploiting Shared Knowledge From Non-COVID Lesions for Annotation-Efficient COVID-19 CT Lung Infection Segmentation. IEEE J Biomed Health Inform 2021; 25:4152-4162. [PMID: 34415840 PMCID: PMC8843066 DOI: 10.1109/jbhi.2021.3106341] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
The novel coronavirus disease (COVID-19) is highly contagious and has spread all over the world, posing an extremely serious threat to all countries. Automatic lung infection segmentation from computed tomography (CT) plays an important role in the quantitative analysis of COVID-19. However, the major challenge lies in the inadequacy of annotated COVID-19 datasets. Currently, there are several public non-COVID lung lesion segmentation datasets, providing the potential for generalizing useful information to the related COVID-19 segmentation task. In this paper, we propose a novel relation-driven collaborative learning model that exploits shared knowledge from non-COVID lesions for annotation-efficient COVID-19 CT lung infection segmentation. The model consists of a general encoder, which captures general lung lesion features from multiple non-COVID lesion datasets, and a target encoder, which focuses on task-specific features of COVID-19 infections. We develop a collaborative learning scheme to regularize feature-level relation consistency of a given input and encourage the model to learn a more general and discriminative representation of COVID-19 infections. Extensive experiments demonstrate that, when trained with limited COVID-19 data, exploiting shared knowledge from non-COVID lesions improves state-of-the-art performance by up to 3.0% in Dice similarity coefficient and 4.2% in normalized surface Dice. In addition, experimental results on a large-scale 2D dataset of CT slices show that our method significantly outperforms cutting-edge segmentation methods. Our method offers new insights into annotation-efficient deep learning and shows strong potential for real-world application in the global fight against COVID-19 in the absence of sufficient high-quality annotations.
22
Zhang F. Application of machine learning in CT images and X-rays of COVID-19 pneumonia. Medicine (Baltimore) 2021; 100:e26855. [PMID: 34516488 PMCID: PMC8428739 DOI: 10.1097/md.0000000000026855] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 07/18/2021] [Accepted: 07/20/2021] [Indexed: 01/05/2023] Open
Abstract
Coronavirus disease (COVID-19) has spread worldwide. X-ray and computed tomography (CT) are two technologies widely used in image acquisition, segmentation, diagnosis, and evaluation. Artificial intelligence can accurately segment infected regions in X-ray and CT images, assist doctors in improving diagnostic efficiency, and facilitate the subsequent assessment of the severity of a patient's infection. A medical assistant platform based on machine learning can help radiologists make clinical decisions and can help in screening, diagnosis, and treatment. By providing scientific methods for image recognition, segmentation, and evaluation, we summarize the latest developments in the application of artificial intelligence to COVID-19 lung imaging and provide guidance and inspiration to researchers and doctors fighting the COVID-19 virus.
23
Zhao X, Zhang P, Song F, Fan G, Sun Y, Wang Y, Tian Z, Zhang L, Zhang G. D2A U-Net: Automatic segmentation of COVID-19 CT slices based on dual attention and hybrid dilated convolution. Comput Biol Med 2021; 135:104526. [PMID: 34146799 PMCID: PMC8169238 DOI: 10.1016/j.compbiomed.2021.104526] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Revised: 05/24/2021] [Accepted: 05/24/2021] [Indexed: 11/19/2022]
Abstract
Coronavirus Disease 2019 (COVID-19) has become one of the most urgent public health events worldwide due to its high infectivity and mortality. Computed tomography (CT) is a significant screening tool for COVID-19 infection, and automatic segmentation of lung infection in COVID-19 CT images can assist the diagnosis and care of patients. However, accurate and automatic segmentation of COVID-19 lung infections faces a few challenges, including blurred infection edges and relatively low sensitivity. To address these issues, a novel dilated dual attention U-Net (D2A U-Net), based on a dual attention strategy and hybrid dilated convolutions, is proposed for COVID-19 lesion segmentation in CT slices. In our D2A U-Net, the dual attention strategy, composed of two attention modules, is used to refine feature maps and reduce the semantic gap between different levels of feature maps. Moreover, hybrid dilated convolutions are introduced into the model decoder to achieve larger receptive fields, which refines the decoding process. The proposed method is evaluated on an open-source dataset and achieves a Dice score of 0.7298 and a recall of 0.7071, outperforming popular cutting-edge semantic segmentation methods. The proposed network is expected to be a potential AI-based approach for the diagnosis and prognosis of COVID-19 patients.
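The motivation for hybrid dilated convolutions is that dilation enlarges a layer's receptive field without adding parameters. A small sketch of how the receptive field grows for a stack of stride-1 convolutions (the kernel sizes and dilation rates below are illustrative assumptions, not taken from the paper):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions:
    each layer with kernel size k and dilation d adds (k - 1) * d."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers: plain convolutions vs. a hybrid dilation scheme (rates 1, 2, 5).
plain = receptive_field([3, 3, 3], [1, 1, 1])   # plain stack
hybrid = receptive_field([3, 3, 3], [1, 2, 5])  # hybrid dilated stack
```

With the same parameter count, the hybrid dilated stack covers a 17-pixel field versus 7 for plain convolutions; mixing rates (rather than repeating one large rate) also avoids the gridding artifact in which dilated kernels sample a sparse lattice of pixels.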
Affiliation(s)
- Xiangyu Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Peng Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Fan Song
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Guangda Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Yangyang Sun
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Yujia Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Zheyuan Tian
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Luqi Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Guanglei Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
24
Müller D, Soto-Rey I, Kramer F. Robust chest CT image segmentation of COVID-19 lung infection based on limited data. INFORMATICS IN MEDICINE UNLOCKED 2021; 25:100681. [PMID: 34337140 PMCID: PMC8313817 DOI: 10.1016/j.imu.2021.100681] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 07/12/2021] [Accepted: 07/25/2021] [Indexed: 12/17/2022] Open
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare. For quantitative assessment and disease monitoring, medical imaging such as computed tomography offers great potential as an alternative to RT-PCR methods. For this reason, automated image segmentation is highly desired for clinical decision support. However, publicly available COVID-19 imaging data are limited, which leads to overfitting of traditional approaches. METHODS To address this problem, we propose an innovative automated segmentation pipeline for COVID-19 infected regions that is able to handle small datasets by utilizing them as variant databases. Our method focuses on on-the-fly generation of unique, random image patches for training by applying several preprocessing methods and exploiting extensive data augmentation. To further reduce the risk of overfitting, we implemented a standard 3D U-Net architecture instead of new or computationally complex neural network architectures. RESULTS Through k-fold cross-validation on 20 COVID-19 CT scans used for training and validation, we developed a highly accurate and robust segmentation model for lungs and COVID-19 infected regions without overfitting on the limited data. We performed a detailed analysis and discussion of the robustness of our pipeline through a sensitivity analysis based on the cross-validation, and of the impact of the applied preprocessing techniques on model generalizability. Our method achieved Dice similarity coefficients for COVID-19 infection, between predicted segmentations and radiologists' annotations, of 0.804 on validation and 0.661 on a separate testing set of 100 patients. CONCLUSIONS We demonstrated that the proposed method outperforms related approaches, advances the state of the art for COVID-19 segmentation, and improves robust medical image analysis based on limited data.
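On-the-fly random patch generation of the kind described above crops a different sub-volume from each scan at every training step, so the network rarely sees the same input twice. A minimal sketch of such patch sampling (toy shapes and a hypothetical helper, not the published pipeline):

```python
import numpy as np

def random_patch(volume, mask, patch_shape, rng):
    """Crop one random patch (and the matching annotation patch)
    from a 3D volume, keeping the patch fully inside the volume."""
    starts = [int(rng.integers(0, v - p + 1))
              for v, p in zip(volume.shape, patch_shape)]
    sl = tuple(slice(s, s + p) for s, p in zip(starts, patch_shape))
    return volume[sl], mask[sl]

rng = np.random.default_rng(seed=42)
ct = np.zeros((40, 128, 128), dtype=np.float32)  # toy CT volume
gt = np.zeros((40, 128, 128), dtype=np.uint8)    # toy annotation mask
patch, label = random_patch(ct, gt, (16, 64, 64), rng)
```

In a full pipeline each sampled patch would additionally pass through random augmentations (flips, rotations, elastic deformations) before being fed to the 3D U-Net, multiplying the effective size of a small dataset.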
Affiliation(s)
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
- Iñaki Soto-Rey
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
25
Laino ME, Ammirabile A, Posa A, Cancian P, Shalaby S, Savevski V, Neri E. The Applications of Artificial Intelligence in Chest Imaging of COVID-19 Patients: A Literature Review. Diagnostics (Basel) 2021; 11:1317. [PMID: 34441252 PMCID: PMC8394327 DOI: 10.3390/diagnostics11081317] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 07/02/2021] [Accepted: 07/09/2021] [Indexed: 12/23/2022] Open
Abstract
Diagnostic imaging is regarded as fundamental in the clinical work-up of patients with suspected or confirmed COVID-19 infection. Recent progress has been made in diagnostic imaging with the integration of artificial intelligence (AI) and machine learning (ML) algorithms, leading to increased accuracy of exam interpretation and to the extraction of prognostic information useful in the decision-making process. Considering the ever-expanding imaging data generated amid this pandemic, COVID-19 has catalyzed a rapid expansion in the application of AI to combat the disease. In this context, many recent studies have explored the role of AI in each of the presumed applications of chest imaging in COVID-19 infection, suggesting that implementing AI applications for chest imaging can be a great asset for fast and precise disease screening, identification, and characterization. However, various biases must be overcome in the development of further ML-based algorithms to give them sufficient robustness and reproducibility for integration into clinical practice. Accordingly, in this literature review, we focus on the application of AI in chest imaging, in particular deep learning, radiomics, and advanced imaging such as quantitative CT.
Affiliation(s)
- Maria Elena Laino
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Angela Ammirabile
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Radiology, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Alessandro Posa
- Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Fondazione Policlinico Universitario Agostino Gemelli—IRCCS, 00168 Rome, Italy
- Pierandrea Cancian
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Sherif Shalaby
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Via Roma 67, 56126 Pisa, Italy
- Victor Savevski
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Emanuele Neri
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Via Roma 67, 56126 Pisa, Italy
- Italian Society of Medical and Interventional Radiology, SIRM Foundation, Via della Signora 2, 20122 Milano, Italy