151. Rajamani KT, Rani P, Siebert H, ElagiriRamalingam R, Heinrich MP. Attention-augmented U-Net (AA-U-Net) for semantic segmentation. Signal, Image and Video Processing 2023; 17:981-989. PMID: 35910403; PMCID: PMC9311338; DOI: 10.1007/s11760-022-02302-3.
Abstract
Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and to benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves performance on challenging COVID-19 lesion segmentation. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.2 percentage points over a baseline U-Net and by 3.09 percentage points over a baseline U-Net with matched parameters. Supplementary information: the online version contains supplementary material available at 10.1007/s11760-022-02302-3.
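The core mechanism this entry describes — concatenating self-attention feature maps with convolutional ones — can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: random projections stand in for learned weight matrices, and batching, multiple heads, and relative position encodings are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_augmented_features(conv_feat, d_k=8, d_v=8, seed=0):
    """conv_feat: (H, W, C) feature map from the convolution branch.
    Returns (H, W, C + d_v): conv features concatenated with
    self-attention features computed over all H*W spatial positions."""
    H, W, C = conv_feat.shape
    x = conv_feat.reshape(H * W, C)        # flatten spatial positions
    rng = np.random.default_rng(seed)      # random projections stand in for learned weights
    Wq = rng.standard_normal((C, d_k))
    Wk = rng.standard_normal((C, d_k))
    Wv = rng.standard_normal((C, d_v))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # (HW, HW): every position attends to every other -> long-range interactions
    attn = softmax(q @ k.T / np.sqrt(d_k), axis=-1)
    attn_feat = (attn @ v).reshape(H, W, d_v)
    return np.concatenate([conv_feat, attn_feat], axis=-1)

feat = np.random.default_rng(1).standard_normal((8, 8, 16))
out = attention_augmented_features(feat)
print(out.shape)  # (8, 8, 24)
```

In the paper this block sits in the U-Net bottleneck, where the H×W grid is smallest and the quadratic (HW)² attention cost is affordable.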
Affiliation(s)
- Priya Rani
- Applied Artificial Intelligence Institute, Deakin University, Burwood, VIC 3125, Australia
- Hanna Siebert
- Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
152. Chen S, Zhong L, Qiu C, Zhang Z, Zhang X. Transformer-based multilevel region and edge aggregation network for magnetic resonance image segmentation. Comput Biol Med 2023; 152:106427. PMID: 36543009; DOI: 10.1016/j.compbiomed.2022.106427.
Abstract
To improve the quality of magnetic resonance (MR) image edge segmentation, some researchers have applied additional edge labels to train networks to extract edge information and aggregate it with region information, with significant progress. However, due to the intrinsic locality of convolution operations, convolutional neural network-based region and edge aggregation is limited in modeling long-range information. To solve this problem, we propose a novel transformer-based multilevel region and edge aggregation network for MR image segmentation. To the best of our knowledge, this is the first work on transformer-based region and edge aggregation. We first extract multilevel region and edge features using a dual-branch module. Then, the region and edge features at different levels are inferred and aggregated through multiple transformer-based inference modules to form multilevel complementary features. Finally, an attention feature selection module aggregates these complementary features with the corresponding-level region and edge features to decode the region and edge features. We evaluated our method on a public MR dataset, the Medical Image Computing and Computer-Assisted Intervention atrial segmentation challenge (ASC), and on a private MR dataset of the infrapatellar fat pad (IPFP). Our method achieved a Dice score of 93.2% for ASC and 91.9% for IPFP. Compared with other 2D segmentation methods, our method improved the Dice score by 0.6% for ASC and 3.0% for IPFP.
Affiliation(s)
- Shaolong Chen
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Lijie Zhong
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics·Guangdong Province), Guangzhou, 510630, China
- Changzhen Qiu
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Zhiyong Zhang
- School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China
- Xiaodong Zhang
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics·Guangdong Province), Guangzhou, 510630, China
153. Li Z, Li X, Jin Z, Shen L. Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis. Neural Comput Appl 2023; 35:10717-10731. PMID: 37155461; PMCID: PMC10038387; DOI: 10.1007/s00521-023-08259-9.
Abstract
Coronavirus disease 2019 (COVID-19) has spread rapidly all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks; however, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in COVID-19 patients' CT scans, we propose a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We used Perlin noise, a gradient-noise-based mathematical model, to generate lesion-like patterns, which were then randomly pasted onto the lung regions of normal CT images to produce pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images were used to train an encoder-decoder U-Net for image restoration, which requires no labeled data. The pretrained encoder was then fine-tuned on labeled data for the COVID-19 diagnosis task. Two public COVID-19 CT diagnosis datasets were used for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, outperforming a supervised model pretrained on large-scale images by 6.57% and 3.03% in accuracy on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
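The pseudo-lesion generation step can be illustrated with a small NumPy sketch. Everything here is a hedged toy: the noise is simple bilinear value noise standing in for the Perlin gradient noise the paper uses, and the threshold and intensity values are made-up illustration parameters, not the authors'.

```python
import numpy as np

def value_noise(shape, grid=8, seed=0):
    """Smooth random field in [0, 1] via bilinear interpolation of a coarse
    grid; a simple stand-in for the Perlin gradient noise used in the paper."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((grid + 1, grid + 1))
    h, w = shape
    ys, xs = np.linspace(0, grid, h), np.linspace(0, grid, w)
    y0 = np.clip(ys.astype(int), 0, grid - 1)
    x0 = np.clip(xs.astype(int), 0, grid - 1)
    ty, tx = (ys - y0)[:, None], (xs - x0)[None, :]
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    return (c00 * (1 - tx) + c01 * tx) * (1 - ty) + (c10 * (1 - tx) + c11 * tx) * ty

def add_pseudo_lesion(ct, lung_mask, intensity=0.4, threshold=0.6, seed=0):
    """Paste lesion-like bright blobs onto the lung region of a normal slice
    (intensities assumed normalized to [0, 1])."""
    noise = value_noise(ct.shape, seed=seed)
    pattern = np.where(noise > threshold, noise, 0.0)  # keep only the brightest blobs
    return np.clip(ct + intensity * pattern * lung_mask, 0.0, 1.0)

ct = np.full((64, 64), 0.2)                    # synthetic "normal" slice
lung = np.zeros((64, 64)); lung[16:48, 8:56] = 1.0
pseudo = add_pseudo_lesion(ct, lung)           # restoration training pair: (pseudo, ct)
```

The restoration U-Net is then trained to map `pseudo` back to `ct`, so the pretext task needs no human labels.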
Affiliation(s)
- Zhongliang Li
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
- Xuechen Li
- National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, Shenzhen, 518060, Guangdong, China
- Zhihao Jin
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
- Linlin Shen
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
154. Dritsas E, Trigka M. Supervised Machine Learning Models to Identify Early-Stage Symptoms of SARS-CoV-2. Sensors (Basel) 2022; 23:40. PMID: 36616638; PMCID: PMC9824026; DOI: 10.3390/s23010040.
Abstract
The coronavirus disease (COVID-19) pandemic was caused by the SARS-CoV-2 virus and began in December 2019; the virus was first reported in the Wuhan region of China. It is a new strain of coronavirus that had not previously been isolated in humans. In severe cases, pneumonia, acute respiratory distress syndrome, multiple organ failure, or even death may occur. Today, vaccines, antiviral drugs, and appropriate treatment are allies in confronting the disease. In the present work, we used supervised Machine Learning (ML) models to identify early-stage symptoms of SARS-CoV-2 infection. We experimented with several ML models, and the results showed that the ensemble (Stacking) model outperformed the others, achieving Accuracy, Precision, Recall, and F-Measure all equal to 90.9% and an Area Under the Curve (AUC) of 96.4%.
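The four reported metrics are standard and follow directly from the confusion-matrix counts; a minimal sketch with made-up labels (not the paper's data):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F-measure for a binary screen."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Illustrative labels: 1 = symptomatic SARS-CoV-2 case, 0 = not.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```

That all four metrics coincide at 90.9% in the paper simply means the stacking model's false positives and false negatives balanced out on a class-balanced test set.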
155. Fully automatic identification of post-treatment infarct lesions after endovascular therapy based on non-contrast computed tomography. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08094-4.
156. Galzin E, Roche L, Vlachomitrou A, Nempont O, Carolus H, Schmidt-Richberg A, Jin P, Rodrigues P, Klinder T, Richard JC, Tazarourte K, Douplat M, Sigal A, Bouscambert-Duchamp M, Si-Mohamed SA, Gouttard S, Mansuy A, Talbot F, Pialat JB, Rouvière O, Milot L, Cotton F, Douek P, Duclos A, Rabilloud M, Boussel L. Additional value of chest CT AI-based quantification of lung involvement in predicting death and ICU admission for COVID-19 patients. Research in Diagnostic and Interventional Imaging 2022; 4:100018. PMID: 37284031; PMCID: PMC9716289; DOI: 10.1016/j.redii.2022.100018.
Abstract
Objectives: We evaluated the contribution of lung lesion quantification on chest CT, using clinical Artificial Intelligence (AI) software, to predicting death and intensive care unit (ICU) admission for COVID-19 patients. Methods: For 349 patients with a positive COVID-19 PCR test who underwent a chest CT scan on admission or during hospitalization, we applied the AI software for lung and lung lesion segmentation to obtain the lesion volume (LV) and the LV/total lung volume (TLV) ratio. ROC analysis was used to extract the best CT criterion for predicting death and ICU admission. Two prognostic models using multivariate logistic regression were constructed for each outcome and compared using AUC values. The first model ("Clinical") was based on patients' characteristics and clinical symptoms only; the second model ("Clinical+LV/TLV") also included the best CT criterion. Results: The LV/TLV ratio demonstrated the best performance for both outcomes, with AUCs of 67.8% (95% CI: 59.5-76.1) and 81.1% (95% CI: 75.7-86.5), respectively. For death prediction, AUC values were 76.2% (95% CI: 69.9-82.6) for the "Clinical" model and 79.9% (95% CI: 74.4-85.5) for the "Clinical+LV/TLV" model, a significant performance increase (+3.7%; p < 0.001) when adding the LV/TLV ratio. Similarly, for ICU admission prediction, AUC values were 74.9% (95% CI: 69.2-80.6) and 84.8% (95% CI: 80.4-89.2), respectively, a significant performance increase (+10%; p < 0.001). Conclusions: Using clinical AI software to quantify COVID-19 lung involvement on chest CT, combined with clinical variables, allows better prediction of death and ICU admission.
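The AUC values compared throughout this study can be computed directly as the Mann-Whitney rank statistic; a minimal sketch with illustrative numbers (not the study's data):

```python
def auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive case is scored above a random negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative outcome vs. LV/TLV ratio pairs.
died   = [0, 0, 1, 0, 1, 1]
lv_tlv = [0.05, 0.10, 0.30, 0.20, 0.15, 0.40]
print(round(auc(died, lv_tlv), 3))  # 0.889
```

Comparing two such AUCs (the "Clinical" vs. "Clinical+LV/TLV" models) additionally requires a paired test such as DeLong's, which the sketch does not cover.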
Affiliation(s)
- Eloise Galzin
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Laurent Roche
- Department of Biostatistics, Hospices Civils de Lyon, Lyon F-69003, France
- Université de Lyon, Lyon F-69000, France
- Laboratoire de Biométrie et Biologie Evolutive, Université Lyon 1, CNRS, UMR5558, Equipe Biostatistique-Santé, Villeurbanne F-69622, France
- Anna Vlachomitrou
- Philips France, 33 rue de Verdun, CS 60 055, Suresnes Cedex 92156, France
- Olivier Nempont
- Philips France, 33 rue de Verdun, CS 60 055, Suresnes Cedex 92156, France
- Heike Carolus
- Philips Research, Röntgenstrasse 24-26, Hamburg D-22335, Germany
- Peng Jin
- Philips Medical Systems Nederland BV (Philips Healthcare), the Netherlands
- Pedro Rodrigues
- Philips Medical Systems Nederland BV (Philips Healthcare), the Netherlands
- Tobias Klinder
- Philips Research, Röntgenstrasse 24-26, Hamburg D-22335, Germany
- Jean-Christophe Richard
- Department of Critical Care Medicine, Hôpital De La Croix Rousse, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
- Karim Tazarourte
- Research on Healthcare Performance (RESHAPE), INSERM U1290, Université Claude Bernard Lyon 1, Lyon, France
- Emergency department and SAMU 69, Hospices Civils de Lyon, France
- Marion Douplat
- Research on Healthcare Performance (RESHAPE), INSERM U1290, Université Claude Bernard Lyon 1, Lyon, France
- Emergency department and SAMU 69, Hospices Civils de Lyon, France
- Alain Sigal
- Emergency department and SAMU 69, Hospices Civils de Lyon, France
- Maude Bouscambert-Duchamp
- Laboratoire de Virologie, Institut des Agents Infectieux de Lyon, Centre National de Référence des virus respiratoires France Sud, Centre de Biologie et de Pathologie Nord, Hospices Civils de Lyon, Lyon F-69317, France
- Université de Lyon, Virpath, CIRI, INSERM U1111, CNRS UMR5308, ENS Lyon, Université Claude Bernard Lyon 1, Lyon F-69372, France
- Salim Aymeric Si-Mohamed
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
- Adeline Mansuy
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- François Talbot
- Department of Information Technology, Hospices Civils de Lyon, Lyon, France
- Jean-Baptiste Pialat
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
- Olivier Rouvière
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- LabTAU INSERM U1032, Lyon, France
- Laurent Milot
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- LabTAU INSERM U1032, Lyon, France
- François Cotton
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
- Philippe Douek
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
- Antoine Duclos
- Research on Healthcare Performance (RESHAPE), INSERM U1290, Université Claude Bernard Lyon 1, Lyon, France
- Muriel Rabilloud
- Department of Biostatistics, Hospices Civils de Lyon, Lyon F-69003, France
- Université de Lyon, Lyon F-69000, France
- Laboratoire de Biométrie et Biologie Evolutive, Université Lyon 1, CNRS, UMR5558, Equipe Biostatistique-Santé, Villeurbanne F-69622, France
- Loic Boussel
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, Lyon U1294, France
157. Huang X, Chen J, Chen M, Chen L, Wan Y. TDD-UNet: Transformer with double decoder UNet for COVID-19 lesions segmentation. Comput Biol Med 2022; 151:106306. PMID: 36403357; PMCID: PMC9664702; DOI: 10.1016/j.compbiomed.2022.106306.
Abstract
The outbreak of novel coronavirus pneumonia has brought severe health risks to the world. Detection of COVID-19 based on the UNet network has attracted widespread attention in medical image segmentation. However, the traditional UNet model struggles to capture long-range dependencies in an image because of the fixed receptive field of its convolution kernels. The Transformer encoder overcomes the long-range dependence problem, but Transformer-based segmentation approaches cannot effectively capture fine-grained details. To address this challenge, we propose TDD-UNet, a Transformer with a double-decoder UNet for COVID-19 lesion segmentation. We introduce the multi-head self-attention of the Transformer into the UNet encoding layers to extract global context information. The dual-decoder structure improves foreground segmentation by predicting the background and applying deep supervision. We performed quantitative analysis and comparison on four public datasets with different modalities, including CT and CXR, to demonstrate the method's effectiveness and generality in segmenting COVID-19 lesions, and ablation studies on the COVID-19-CT-505 dataset to verify the effectiveness of the key components of the model. The proposed TDD-UNet achieves higher mean Dice and Jaccard scores and the lowest standard deviation compared with competitors, yielding better segmentation results than other state-of-the-art methods.
Affiliation(s)
- Xuping Huang
- Computer School, University of South China, Hengyang 421001, China
- Junxi Chen
- Affiliated Nanhua Hospital, University of South China, Hengyang 421001, China
- Mingzhi Chen
- College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Lingna Chen
- Computer School, University of South China, Hengyang 421001, China
- Yaping Wan
- Computer School, University of South China, Hengyang 421001, China
158. Liu S, Tang X, Cai T, Zhang Y, Wang C. COVID-19 CT image segmentation based on improved Res2Net. Med Phys 2022; 49:7583-7595. PMID: 35916116; PMCID: PMC9538682; DOI: 10.1002/mp.15882.
Abstract
Purpose: Coronavirus disease 2019 (COVID-19) is threatening the health of people worldwide and causing great economic and social losses. Computed tomography (CT) image segmentation can help clinicians quickly identify COVID-19-infected regions, and accurate segmentation of the infected areas can contribute to screening confirmed cases. Methods: We designed a segmentation network for COVID-19-infected regions in CT images. First, multilayered features were extracted by the Res2Net backbone. Edge features of the infected regions in the low-level feature f2 were then extracted by an edge attention module. Second, we carefully designed an attention position module (APM) to process the high-level feature f5 and detect infected regions. Finally, we proposed a context exploration module consisting of two parallel explore blocks, which removes some false positives and false negatives to reach more accurate segmentation results. Results: On the public COVID-19 dataset, the Dice, sensitivity, specificity, $S_\alpha$, $E_\emptyset^{mean}$, and mean absolute error (MAE) of our method are 0.755, 0.751, 0.959, 0.795, 0.919, and 0.060, respectively. Compared with the recent COVID-19 segmentation model Inf-Net, the Dice similarity coefficient of our model increased by 7.3% and the sensitivity by 5.9%, while the MAE dropped by 2.2%. Conclusions: Our method performs well on COVID-19 CT image segmentation and is portable enough to be incorporated into various popular networks. It can help screen people infected with COVID-19 effectively and save the labor of clinicians and radiologists.
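Two of the reported metrics, the Dice coefficient and MAE, are straightforward to compute from segmentation masks; a minimal NumPy sketch with toy masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def mae(prob_map, gt):
    """Mean absolute error between a predicted probability map and the mask."""
    return np.abs(prob_map.astype(float) - gt.astype(float)).mean()

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # 16-pixel square
pred = np.zeros((8, 8), bool); pred[3:7, 2:6] = True  # shifted by one row: 12 pixels overlap
print(round(dice(pred, gt), 3))  # 0.75
```

The structure measure $S_\alpha$ and enhanced-alignment measure $E_\emptyset^{mean}$ also cited in the results are more involved region/structure-aware metrics and are not sketched here.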
Affiliation(s)
- Shangwang Liu
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Engineering Lab of Intelligence Business & Internet of Things, Xinxiang, Henan, China
- Xiufang Tang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Tongbo Cai
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Yangyang Zhang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Changgeng Wang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
159. Fan C, Zeng Z, Xiao L, Qu X. GFNet: Automatic segmentation of COVID-19 lung infection regions using CT images based on boundary features. Pattern Recognition 2022; 132:108963. PMID: 35966970; PMCID: PMC9359771; DOI: 10.1016/j.patcog.2022.108963.
Abstract
In early 2020, the global spread of COVID-19 presented the world with a serious health crisis. Given the large number of infected patients, automatic segmentation of lung infections from computed tomography (CT) images has great potential to enhance traditional medical strategies. However, segmenting infected regions in CT slices still faces many challenges. The core problem is the high variability of infection characteristics and the low contrast between infected and normal regions, which leads to fuzzy regions in lung CT segmentation. To address this problem, we designed a novel global feature network (GFNet) for COVID-19 lung infections. With VGG16 as the backbone, we design an edge-guidance module (Eg) that fuses the features of each layer: features are first extracted by a reverse attention module and then combined with Eg. This series of steps enables each layer to fully extract the boundary details that previous models tend to miss, thus resolving the fuzziness of infected regions. The multi-layer output features are fused into the final output to achieve automatic and accurate segmentation of infected areas. We compared GFNet with traditional medical segmentation networks (UNet, UNet++), the recent model Inf-Net, and few-shot learning methods. Experiments show that our model is superior to these models in Dice, sensitivity, specificity, and other evaluation metrics, and its segmentation results are visually clear and accurate, which proves the effectiveness of GFNet. In addition, we verified the generalization ability of GFNet on another "never seen" dataset, where it again generalized better than the models above. Our code has been shared at https://github.com/zengzhenhuan/GFNet.
Affiliation(s)
- Chaodong Fan
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
- School of Computer Science, Xiangtan University, Xiangtan 411100, China
- Foshan Green Intelligent Manufacturing Research Institute of Xiangtan University, Foshan 528000, China
- School of Information Technology and Management, Hunan University of Finance and Economics, Changsha 410205, China
- Zhenhuan Zeng
- School of Computer Science, Xiangtan University, Xiangtan 411100, China
- Leyi Xiao
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
- School of Information Technology and Management, Hunan University of Finance and Economics, Changsha 410205, China
- AnHui Key Laboratory of Detection Technology and Energy Saving Devices, AnHui Polytechnic University, Wuhu 241000, China
- Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China
- Vehicle Measurement, Control and Safety Key Laboratory of Sichuan Province, Xihua University, Chengdu 610039, China
- Xilong Qu
- School of Information Technology and Management, Hunan University of Finance and Economics, Changsha 410205, China
160. Hussain MA, Mirikharaji Z, Momeny M, Marhamati M, Neshat AA, Garbi R, Hamarneh G. Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT. Comput Med Imaging Graph 2022; 102:102127. PMID: 36257092; PMCID: PMC9540707; DOI: 10.1016/j.compmedimag.2022.102127.
Abstract
Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
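The gradient-direction re-weighting idea (criterion (i) above) can be sketched as cosine similarity between each machine-annotated sample's loss gradient and the gradient computed on expert-annotated data. This is a schematic of the idea under simplifying assumptions (flattened gradient vectors, a simple clamp-and-normalize rule), not the paper's exact query function:

```python
import numpy as np

def reweight_by_gradient_similarity(machine_grads, expert_grad):
    """Weight machine-annotated samples by how well their gradient descent
    direction agrees with the expert-annotated gradient; samples whose
    gradients oppose it (likely noisy teacher labels) get weight 0."""
    e = expert_grad / (np.linalg.norm(expert_grad) + 1e-12)
    weights = []
    for g in machine_grads:
        cos = g @ e / (np.linalg.norm(g) + 1e-12)  # cosine similarity of directions
        weights.append(max(cos, 0.0))              # clamp: opposing gradients contribute nothing
    w = np.array(weights)
    return w / (w.sum() + 1e-12)                   # normalize into a weighting over the batch

# Toy 2-D "gradients": one aligned, one opposed, one partially aligned.
expert = np.array([1.0, 0.0])
batch = [np.array([0.9, 0.1]), np.array([-1.0, 0.0]), np.array([0.5, 0.5])]
w = reweight_by_gradient_similarity(batch, expert)
print(w.round(3))
```

In the paper this similarity is combined with criterion (ii), the gradient magnitude of the model's last layer, to rank and select informative machine-annotated samples.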
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
- Rafeef Garbi
- BiSICL, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
161. Tang T, Li F, Jiang M, Xia X, Zhang R, Lin K. Improved Complementary Pulmonary Nodule Segmentation Model Based on Multi-Feature Fusion. Entropy (Basel) 2022; 24:1755. PMID: 36554161; PMCID: PMC9778431; DOI: 10.3390/e24121755.
Abstract
Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in automatic lung nodule segmentation but are still challenged by the large diversity of segmentation targets and the small inter-class variance between a nodule and its surrounding tissues. To tackle this issue, we propose a feature-complementary network, modeled on the process of clinical diagnosis, that makes full use of the complementarity among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of a nodule's global features in segmentation and propose a cross-scale weighted high-level feature decoder module. We then develop a low-level feature decoder module for edge feature refinement, and construct a complementary module so the two kinds of information complement and promote each other. Furthermore, we up-weight pixels located at the nodule edge in the loss function and add edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. Experimental results demonstrate that our model achieves robust pulmonary nodule segmentation with more accurate edge segmentation.
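The edge-weighted loss idea can be sketched in NumPy: build a per-pixel weight map that up-weights boundary pixels, then apply it to a pixelwise loss. The 4-neighbour edge test and the weight value are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def edge_weight_map(mask, w_edge=5.0):
    """Per-pixel weights that up-weight nodule boundary pixels: a pixel is
    treated as edge if any 4-neighbour differs from it in the binary mask."""
    m = mask.astype(int)
    edge = np.zeros_like(m, bool)
    edge[1:, :] |= m[1:, :] != m[:-1, :]   # differs from pixel above
    edge[:-1, :] |= m[:-1, :] != m[1:, :]  # differs from pixel below
    edge[:, 1:] |= m[:, 1:] != m[:, :-1]   # differs from pixel to the left
    edge[:, :-1] |= m[:, :-1] != m[:, 1:]  # differs from pixel to the right
    return np.where(edge, w_edge, 1.0)

def weighted_bce(prob, mask, weights, eps=1e-7):
    """Binary cross-entropy with per-pixel weights."""
    p = np.clip(prob, eps, 1 - eps)
    loss = -(mask * np.log(p) + (1 - mask) * np.log(1 - p))
    return (weights * loss).mean()

mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0  # toy nodule mask
w = edge_weight_map(mask)                      # boundary band weighted 5x
```

A prediction that is wrong near the boundary then incurs roughly `w_edge` times the penalty of an equally wrong interior pixel, steering training toward sharper edges.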
Affiliation(s)
- Tiequn Tang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Xunpeng Xia
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Rongfu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kailin Lin
- Fudan University Shanghai Cancer Center, Shanghai 200032, China
162. Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN Computer Science 2022; 4:65. PMID: 36467853; PMCID: PMC9702883; DOI: 10.1007/s42979-022-01464-8.
Abstract
The lung, one of the most important organs in the human body, is often affected by various severe acute respiratory syndrome (SARS)-related diseases, among which COVID-19 has proved the most fatal in recent times. SARS-CoV-2 caused a pandemic that spread rapidly through communities, causing respiratory problems. In this situation, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been performed for rapid screening of the disease, as it is a non-invasive approach. Owing to the scarcity of physicians, chest specialists, and expert doctors, several researchers have developed technology-enabled disease screening techniques with the help of artificial intelligence and machine learning (AI/ML). Researchers have introduced numerous AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review has been conducted to summarize work on the application of AI/ML/DL for diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected from 1,715 articles published up to the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, datasets, and their results using X-ray and CT imaging. A detailed discussion is provided on the novelty of the published works, along with their advantages and limitations.
Affiliation(s)
- Asifuzzaman Lasker
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Sk Md Obaidullah
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Chandan Chakraborty
- Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
- Kaushik Roy
- Department of Computer Science, West Bengal State University, Barasat, India
163
Dubey AK, Mohbey KK. Combined Cloud-Based Inference System for the Classification of COVID-19 in CT-Scan and X-Ray Images. NEW GENERATION COMPUTING 2022; 41:61-84. [PMID: 36439302 PMCID: PMC9676871 DOI: 10.1007/s00354-022-00195-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Accepted: 11/09/2022] [Indexed: 06/16/2023]
Abstract
In the past few years, most work on COVID-19 classification has used different image types, such as CT scans, X-rays, and ultrasound. However, none of it handles each of these image types on a single common platform that can determine whether a person is suffering from COVID-19. We therefore propose a platform to identify COVID-19 in CT-scan and X-ray images on the fly: an AI model that first distinguishes CT-scan from X-ray images and then uses this inference to classify them as COVID-positive or negative. The proposed model uses the Inception architecture under the hood and is trained on the open-source extended COVID-19 dataset, which contains plenty of images of both types and is 4 GB in size. We achieved an accuracy of 100%, an average macro-precision of 100%, an average macro-recall of 100%, an average macro F1-score of 100%, and an AUC score of 99.6%. Furthermore, a cloud-based architecture is proposed to scale massively and load-balance as the number of user requests rises, delivering a service with minimal latency to all users.
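The two-stage inference described above — first deciding whether an input is a CT scan or an X-ray, then routing it to a modality-specific COVID classifier — can be sketched as follows. The model objects and their `predict` method are hypothetical stand-ins, not the paper's actual API:

```python
def classify(image, modality_model, covid_models):
    """Two-stage inference: identify the imaging modality first,
    then apply the matching COVID-19 classifier."""
    modality = modality_model.predict(image)          # e.g. "ct" or "xray"
    verdict = covid_models[modality].predict(image)   # "positive" or "negative"
    return modality, verdict

# Minimal stand-in models for illustration only.
class Stub:
    def __init__(self, label):
        self.label = label
    def predict(self, image):
        return self.label

modality_model = Stub("ct")
covid_models = {"ct": Stub("positive"), "xray": Stub("negative")}
print(classify(None, modality_model, covid_models))  # → ('ct', 'positive')
```

In a real deployment each `Stub` would be replaced by the trained Inception-based network for that stage.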
Affiliation(s)
- Ankit Kumar Dubey
- Department of Computer Science, Central University of Rajasthan, Ajmer, India
164
A Multi-centric Evaluation of Deep Learning Models for Segmentation of COVID-19 Lung Lesions on Chest CT Scans. IRANIAN JOURNAL OF RADIOLOGY 2022. [DOI: 10.5812/iranjradiol-117992] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Background: Chest computed tomography (CT) scan is one of the most common tools used for the diagnosis of patients with coronavirus disease 2019 (COVID-19). While segmentation of COVID-19 lung lesions by radiologists can be time-consuming, the application of advanced deep learning techniques for automated segmentation can be a promising step toward the management of this infection and similar diseases in the future. Objectives: This study aimed to evaluate the performance and generalizability of deep learning-based models for the automated segmentation of COVID-19 lung lesions. Patients and Methods: Four datasets (2 private and 2 public) were used in this study. The first and second private datasets included 297 (147 healthy and 150 COVID-19 cases) and 82 COVID-19 subjects. The public datasets included the COVID19-P20 (20 COVID-19 cases from 2 centers) and the MosMedData datasets (50 COVID-19 patients from a single center). Model comparisons were made based on the Dice similarity coefficient (DSC), receiver operating characteristic (ROC) curve, and area under the curve (AUC). The predicted CT severity scores by the model were compared with those of radiologists by measuring the Pearson’s correlation coefficients (PCC). Also, DSC was used to compare the inter-rater agreement of the model and expert against that of 2 experts on an unseen dataset. Finally, the generalizability of the model was evaluated, and a simple calibration strategy was proposed. Results: The VGG16-UNet model showed the best performance across both private datasets, with a DSC of 84.23% ± 1.73% on the first private dataset and 56.61% ± 1.48% on the second private dataset. Similar results were obtained on public datasets, with a DSC of 60.10% ± 2.34% on the COVID19-P20 dataset and 66.28% ± 2.80% on a combined dataset of COVID19-P20 and MosMedData. 
The Pearson correlation between the model's predicted CT severity scores and those of radiologists was 0.89 and 0.85 on the first private dataset and 0.77 and 0.74 on the second private dataset for the right and left lungs, respectively. Moreover, the model trained on the first private dataset was examined on the second private dataset and compared against the radiologist, revealing a performance gap of 5.74% in DSC. A calibration strategy reduced this gap to 0.53%. Conclusion: The results demonstrated the potential of the proposed model for localizing COVID-19 lesions on CT scans across multiple datasets; its accuracy was competitive with that of the radiologists and could assist them in diagnostic and treatment procedures. The effect of model calibration on performance on an unseen dataset was also reported, increasing the DSC by more than 5%.
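The Dice similarity coefficient (DSC) used for the model comparisons above is, for two binary masks, twice the overlap divided by the sum of the mask sizes. A minimal NumPy sketch (not the study's own code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Two overlapping 4x4 masks: 8 pixels each, 4 pixels in common.
a = np.zeros((4, 4), dtype=int)
b = np.zeros((4, 4), dtype=int)
a[:2, :] = 1
b[1:3, :] = 1
print(round(dice_coefficient(a, b), 3))  # → 0.5
```

The small `eps` term keeps the score defined when both masks are empty, a common convention in segmentation code.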
165
Zhou T, Zhou Y, Gong C, Yang J, Zhang Y. Feature Aggregation and Propagation Network for Camouflaged Object Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:7036-7047. [PMID: 36331642 DOI: 10.1109/tip.2022.3217695] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment, which has attracted increasing attention over the past decades. Although several COD methods have been developed, they still suffer from unsatisfactory performance due to the intrinsic similarities between the foreground objects and background surroundings. In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection. Specifically, we propose a Boundary Guidance Module (BGM) to explicitly model the boundary characteristic, which can provide boundary-enhanced features to boost the COD performance. To capture the scale variations of the camouflaged objects, we propose a Multi-scale Feature Aggregation Module (MFAM) to characterize the multi-scale information from each layer and obtain the aggregated feature representations. Furthermore, we propose a Cross-level Fusion and Propagation Module (CFPM). In the CFPM, the feature fusion part can effectively integrate the features from adjacent layers to exploit the cross-level correlations, and the feature propagation part can transmit valuable context information from the encoder to the decoder network via a gate unit. Finally, we formulate a unified and end-to-end trainable framework where cross-level features can be effectively fused and propagated for capturing rich context information. Extensive experiments on three benchmark camouflaged datasets demonstrate that our FAP-Net outperforms other state-of-the-art COD models. Moreover, our model can be extended to the polyp segmentation task, and the comparison results further validate the effectiveness of the proposed model in segmenting polyps. The source code and results will be released at https://github.com/taozh2017/FAPNet.
166
Yu Y, Lee HJ, Lee H, Ro YM. Defending Person Detection Against Adversarial Patch Attack by Using Universal Defensive Frame. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:6976-6990. [PMID: 36318546 DOI: 10.1109/tip.2022.3217375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Person detection has attracted great attention in the computer vision area and is an imperative element in human-centric computer vision. Although the predictive performance of person detection networks has improved dramatically, they remain vulnerable to adversarial patch attacks: changing the pixels in a restricted region can easily fool a person detection network in safety-critical applications such as autonomous driving and security systems. Despite the necessity of countering adversarial patch attacks, very few efforts have been dedicated to defending person detection against them. In this paper, we propose a novel defense strategy that counters an adversarial patch attack by optimizing a defensive frame for person detection. The defensive frame alleviates the effect of the adversarial patch while maintaining detection performance on clean (unattacked) person images. The defensive frame is generated with a competitive learning algorithm that sets up an iterative competition between a detection threatening module and a detection shielding module. Comprehensive experimental results demonstrate that the proposed method effectively defends person detection against adversarial patch attacks.
167
Dornaika F, Hoang VT. Deep data representation with feature propagation for semi-supervised learning. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01701-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
168
Luo J, Sun Y, Chi J, Liao X, Xu C. A novel deep learning-based method for COVID-19 pneumonia detection from CT images. BMC Med Inform Decis Mak 2022; 22:284. [PMID: 36324135 PMCID: PMC9629767 DOI: 10.1186/s12911-022-02022-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 10/17/2022] [Indexed: 11/07/2022] Open
Abstract
Background The sensitivity of RT-PCR in diagnosing COVID-19 is only 60–70%, and chest CT plays an indispensable role in the auxiliary diagnosis of COVID-19 pneumonia, but the results of CT imaging are highly dependent on professional radiologists. Aims This study aimed to develop a deep learning model to assist radiologists in detecting COVID-19 pneumonia.
Methods The total study population was 437. The training dataset contained 26,477, 2,468, and 8,104 CT images of normal, community-acquired pneumonia (CAP), and COVID-19 patients, respectively. The validation dataset contained 14,076, 1,028, and 3,376 CT images of normal, CAP, and COVID-19 patients, respectively. The test set included 51 normal cases, 28 CAP patients, and 51 COVID-19 patients. We designed and trained a deep learning model based on U-Net and ResNet-50 to recognize normal, CAP, and COVID-19 patients. Moreover, the diagnoses of the deep learning model were compared with those of radiologists at different levels of experience. Results In the test set, the sensitivity of the deep learning model in diagnosing normal cases, CAP, and COVID-19 patients was 98.03%, 89.28%, and 92.15%, respectively. The diagnostic accuracy of the deep learning model was 93.84%. In the validation set, the accuracy was 92.86%, which was better than that of two novice doctors (86.73% and 87.75%) and almost equal to that of two experts (94.90% and 93.88%). The AI model was also significantly faster than all four radiologists (35 min vs. 75 min, 93 min, 79 min, and 82 min). Conclusion The AI model we obtained had strong decision-making ability and could potentially assist doctors in detecting COVID-19 pneumonia.
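Per-class sensitivity (recall) and overall accuracy, as reported above, follow directly from predicted and true labels. A small NumPy sketch with toy labels (not the study's data):

```python
import numpy as np

def per_class_sensitivity(y_true, y_pred, classes):
    """Sensitivity (recall) per class: TP / (TP + FN)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in classes:
        mask = y_true == c
        out[c] = float((y_pred[mask] == c).mean()) if mask.any() else float("nan")
    return out

def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return float((np.asarray(y_true) == np.asarray(y_pred)).mean())

# Toy three-class example (4 normal, 3 CAP, 3 COVID cases).
y_true = ["normal"] * 4 + ["CAP"] * 3 + ["COVID"] * 3
y_pred = ["normal", "normal", "normal", "CAP",
          "CAP", "CAP", "COVID",
          "COVID", "COVID", "COVID"]
sens = per_class_sensitivity(y_true, y_pred, ["normal", "CAP", "COVID"])
print({c: round(v, 3) for c, v in sens.items()})  # → {'normal': 0.75, 'CAP': 0.667, 'COVID': 1.0}
print(accuracy(y_true, y_pred))                   # → 0.8
```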
Affiliation(s)
- Ju Luo
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China
- Yuhao Sun
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Jingshu Chi
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China
- Xin Liao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Canxia Xu
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China
169
Roth HR, Xu Z, Tor-Díez C, Sanchez Jacob R, Zember J, Molto J, Li W, Xu S, Turkbey B, Turkbey E, Yang D, Harouni A, Rieke N, Hu S, Isensee F, Tang C, Yu Q, Sölter J, Zheng T, Liauchuk V, Zhou Z, Moltz JH, Oliveira B, Xia Y, Maier-Hein KH, Li Q, Husch A, Zhang L, Kovalev V, Kang L, Hering A, Vilaça JL, Flores M, Xu D, Wood B, Linguraru MG. Rapid artificial intelligence solutions in a pandemic-The COVID-19-20 Lung CT Lesion Segmentation Challenge. Med Image Anal 2022; 82:102605. [PMID: 36156419 PMCID: PMC9444848 DOI: 10.1016/j.media.2022.102605] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 07/01/2022] [Accepted: 08/25/2022] [Indexed: 11/30/2022]
Abstract
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams and have the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.
Affiliation(s)
- Holger R Roth
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Ziyue Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Carlos Tor-Díez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, WA, DC, USA
- Ramon Sanchez Jacob
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA, DC, USA
- Jonathan Zember
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA, DC, USA
- Jose Molto
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA, DC, USA
- Wenqi Li
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Sheng Xu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Baris Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Evrim Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Dong Yang
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Ahmed Harouni
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Nicola Rieke
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Shishuai Hu
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
- Fabian Isensee
- Applied Computer Vision Lab, Helmholtz Imaging, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Qinji Yu
- Shanghai Jiao Tong University, China
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Luxembourg
- Tong Zheng
- School of Informatics, Nagoya University, Japan
- Vitali Liauchuk
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
- Ziqi Zhou
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
- Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Yong Xia
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
- Klaus H Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Qikai Li
- Shanghai Jiao Tong University, China
- Andreas Husch
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Vassili Kovalev
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
- Li Kang
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
- Alessa Hering
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Mona Flores
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Daguang Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Bradford Wood
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, WA, DC, USA; School of Medicine and Health Sciences, George Washington University, WA, DC, USA
170
Gao L, Liu B, Fu P, Xu M. Depth-aware Inverted Refinement Network for RGB-D Salient Object Detection. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
171
Zhou S, Xu X, Bai J, Bragin M. Combining multi-view ensemble and surrogate lagrangian relaxation for real-time 3D biomedical image segmentation on the edge. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
172
Jalali Moghaddam M, Ghavipour M. Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging. IPEM-TRANSLATION 2022; 3:100008. [PMID: 36312890 PMCID: PMC9597575 DOI: 10.1016/j.ipemt.2022.100008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 11/08/2022]
Abstract
The infectious disease known as COVID-19 has spread dramatically all over the world since December 2019. The fast diagnosis and isolation of infected patients are key factors in slowing down the spread of this virus and better management of the pandemic. Although the CT and X-ray modalities are commonly used for the diagnosis of COVID-19, identifying COVID-19 patients from medical images is a time-consuming and error-prone task. Artificial intelligence has shown to have great potential to speed up and optimize the prognosis and diagnosis process of COVID-19. Herein, we review publications on the application of deep learning (DL) techniques for diagnostics of patients with COVID-19 using CT and X-ray chest images for a period from January 2020 to October 2021. Our review focuses solely on peer-reviewed, well-documented articles. It provides a comprehensive summary of the technical details of models developed in these articles and discusses the challenges in the smart diagnosis of COVID-19 using DL techniques. Based on these challenges, it seems that the effectiveness of the developed models in clinical use needs to be further investigated. This review provides some recommendations to help researchers develop more accurate prediction models.
Affiliation(s)
- Marjan Jalali Moghaddam
- Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran
- Mina Ghavipour
- Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran
173
Peng Y, Zhang T, Guo Y. Cov-TransNet: Dual branch fusion network with transformer for COVID-19 infection segmentation. Biomed Signal Process Control 2022; 80:104366. [PMCID: PMC9671472 DOI: 10.1016/j.bspc.2022.104366] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/06/2022] [Accepted: 10/30/2022] [Indexed: 11/09/2022]
Abstract
Segmentation of COVID-19 infection is a challenging task due to the blurred boundaries and low contrast between infected and non-infected areas in COVID-19 CT images, especially for small infection regions. In this paper, COV-TransNet is presented to achieve high-precision segmentation of COVID-19 infection regions. The proposed segmentation network is composed of an auxiliary branch and a backbone branch. The auxiliary branch adopts a transformer to provide global information, helping the convolution layers in the backbone branch learn specific local features better. A multi-scale feature attention module is introduced to capture contextual information and adaptively enhance feature representations. Notably, a high internal resolution is maintained during the attention calculation. Moreover, a feature activation module effectively reduces the loss of valid information during sampling. The proposed network can take full advantage of different depths and multi-scale features to achieve high sensitivity for identifying lesions of varied sizes and locations. We experiment on several datasets of the COVID-19 lesion segmentation task, including COVID-19-CT-Seg, UESTC-COVID-19, MosMedData and COVID-19-MedSeg. Comprehensive results demonstrate that COV-TransNet outperforms existing state-of-the-art segmentation methods and achieves better segmentation performance for multi-scale lesions.
174
Chen S, Duan J, Wang H, Wang R, Li J, Qi M, Duan Y, Qi S. Automatic detection of stroke lesion from diffusion-weighted imaging via the improved YOLOv5. Comput Biol Med 2022; 150:106120. [PMID: 36179511 DOI: 10.1016/j.compbiomed.2022.106120] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 08/31/2022] [Accepted: 09/17/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND AND OBJECTIVE Stroke is the second most deadly disease globally and seriously endangers people's lives and health. Automatic detection of stroke lesions from diffusion-weighted imaging (DWI) can improve diagnosis. Recently, automatic detection methods based on YOLOv5 have been applied to medical images; however, most of them barely capture stroke lesions because of their small size and fuzzy boundaries. METHODS To address this problem, a novel method for tracing the edge of the stroke lesion based on YOLOv5 (TE-YOLOv5) is proposed. Specifically, we constantly update the high-level features of the lesion using an aggregate pool (AP) module, and we feed the extracted features into a reverse attention (RA) module to trace the edge relationship promptly. Overall, 1,681 DWI images of 319 stroke patients were collected, and experienced radiologists marked the lesions. The DWI images were randomly split into training and test sets at a ratio of 8:2. TE-YOLOv5 was compared with related models, and a detailed ablation analysis was conducted to clarify the roles of the RA and AP modules. RESULTS TE-YOLOv5 outperforms its counterparts and achieves competitive performance with a precision of 81.5%, a recall of 75.8%, and an mAP@0.5 of 80.7% (mean average precision at an intersection-over-union threshold of 0.5) under the same backbone. At the patient level, the positive finding rate reaches 98.51% with the confidence threshold set at 80.0%. After ablating RA, the mAP@0.5 decreases to 79.6%; after ablating both RA and AP, it decreases to 78.1%. CONCLUSIONS The proposed TE-YOLOv5 can automatically and effectively detect stroke lesions from DWI images, especially those with extremely small size and blurred boundaries. The AP and RA modules aggregate multi-layer high-level features and concurrently track the edge relationship of stroke lesions. These detection methods might help radiologists improve stroke diagnosis and have great application potential in clinical practice.
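The mAP@0.5 metric quoted above counts a predicted box as a true positive only when its intersection over union (IoU) with a ground-truth box reaches 0.5. A minimal IoU sketch, with boxes as (x1, y1, x2, y2) corners (not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (2, 2, 8, 8)
gt = (4, 4, 10, 10)
score = iou(pred, gt)
print(round(score, 3))  # → 0.286, below the 0.5 threshold used by mAP@0.5
```

Averaging precision over recall levels for detections matched at this threshold, then over classes, yields mAP@0.5.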
Affiliation(s)
- Shannan Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Lab of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian, China
- Jinfeng Duan
- Department of General Surgery, General Hospital of Northern Theater Command, Shenyang, China
- Hong Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Rongqiang Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Jinze Li
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Miao Qi
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
175
Li Y, Liu Z, Lai Q, Li S, Guo Y, Wang Y, Dai Z, Huang J. ESA-UNet for assisted diagnosis of cardiac magnetic resonance image based on the semantic segmentation of the heart. Front Cardiovasc Med 2022; 9:1012450. [PMID: 36386384 PMCID: PMC9645148 DOI: 10.3389/fcvm.2022.1012450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
Background Cardiovascular diseases have become the leading diseases affecting human health in today's society. Magnetic resonance imaging (MRI) is the most widely used technology in the diagnosis of cardiac diseases. However, in clinical practice the analysis of MRI relies on manual work, which is laborious and time-consuming and easily influenced by the subjective experience of doctors. Methods In this article, we propose an artificial intelligence-aided diagnosis system for cardiac MRI, with image segmentation as the main component, to assist in the diagnosis of cardiovascular diseases. We first perform adequate pre-processing of the MRI data, including detection of regions of interest (ROIs), data normalization, and data augmentation. The pre-processed images are then fed into the proposed ESA-UNet deep learning module to identify the aorta and obtain preliminary segmentation results, whose boundaries are further optimized using conditional random fields. For ROI detection, we first apply standard-deviation filtering to find regions in the cardiac cycle image sequence where pixel intensity varies strongly with time, and then use Canny edge detection and Hough transform techniques to find the region of interest containing the heart. ESA-UNet itself combines a self-attention mechanism with multi-scale skip connections on a convolutional backbone. Results The experimental dataset used in this article is from the Department of CT/MRI at the Second Affiliated Hospital of Fujian Medical University. Experiments comparing other convolution-based methods, such as UNet, FCN, FPN, and PSPNet, show that our model achieves the best results on the Acc, Pr, Recall, DSC, and IoU metrics. The comparative analysis shows that the ESA-UNet segmentation model designed in this article offers higher accuracy and greater practical value than traditional image segmentation algorithms. Conclusion With the continuing application of magnetic resonance technology in clinical diagnosis, the method in this article is expected to become a tool that can effectively improve the efficiency of doctors' diagnoses.
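The ROI-detection idea above — a standard-deviation filter over the cardiac cycle highlights the beating heart — can be sketched with NumPy alone. This is a simplified illustration: it returns the bounding box of the high-variation pixels, whereas the paper refines the region with Canny edge detection and a Hough transform (e.g. `cv2.Canny` and `cv2.HoughCircles` in OpenCV):

```python
import numpy as np

def motion_roi(frames: np.ndarray):
    """Coarse ROI from a cine sequence (frames: T x H x W): the per-pixel
    standard deviation over time highlights regions whose intensity varies
    strongly between frames; return the bounding box of those pixels."""
    std_map = frames.std(axis=0)
    mask = std_map > 0.5 * std_map.max()
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic sequence: static background, a patch at rows 20:28, cols 30:38
# that alternates between 0 and 10 across frames (temporal std = 5 there, 0 elsewhere).
frames = np.zeros((10, 64, 64))
frames[::2, 20:28, 30:38] = 10.0
print(motion_roi(frames))  # → (30, 20, 37, 27)
```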
Affiliation(s)
- Yuanzhe Li
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Zhiqiang Liu
- Medical Imaging Department, Guangzhou Twelfth People's Hospital, Guangzhou, China
- Qingquan Lai
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Shuting Li
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Yifan Guo
- Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
- Yi Wang
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Zhangsheng Dai
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Jing Huang
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
176
Oztel I, Yolcu Oztel G, Akgun D. A hybrid LBP-DCNN based feature extraction method in YOLO: An application for masked face and social distance detection. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:1565-1583. [PMID: 36313483 PMCID: PMC9589619 DOI: 10.1007/s11042-022-14073-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 10/06/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
COVID-19 is an ongoing pandemic, and the WHO recommends at least one meter of social distance and the use of medical face masks to slow the disease's transmission. This paper proposes an automated approach for detecting social distancing and face masks, and thus aims to help reduce the transmission of diseases spread by respiratory droplets, such as COVID-19. The system uses a two-cascaded YOLO: the first cascade detects humans in the environment and computes the social distance between them, and the second cascade detects human faces with or without a mask. Finally, red bounding boxes are drawn around people who did not follow the rules. This paper also proposes a two-part feature extraction approach used with YOLO. The first part extracts general features using transfer learning; the second part extracts features better suited to the current task using an LBP layer and classification layers. The best average precision for the human detection task was 66%, obtained using ResNet50 in YOLO. The best average precision for mask detection was 95%, obtained using Darknet19+LBP with YOLO. Another popular object detection network, Faster R-CNN, was also used for comparison purposes. The proposed system performed better than the literature in both the human and mask detection tasks.
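The LBP layer mentioned above builds on the classic Local Binary Pattern operator. The following standalone sketch computes a plain 8-neighbour LBP code map; it is an illustrative reconstruction of the operator itself, not the paper's network layer, and the clockwise bit ordering is an arbitrary choice.

```python
import numpy as np

def lbp_8neighbor(img):
    """Plain 8-neighbour Local Binary Pattern map (illustrative sketch).

    Each interior pixel is encoded as an 8-bit code: one bit per
    neighbour, set when that neighbour is >= the centre pixel.
    Border pixels are dropped because they lack a full neighbourhood.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
print(lbp_8neighbor(img))   # neighbours 6, 9, 8, 7 >= 5 set bits 3-6 -> 120
```

A histogram of these codes over an image patch is the usual hand-crafted LBP texture feature that the paper fuses with CNN features.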
Affiliation(s)
- Ismail Oztel
- Computer Engineering Department, Sakarya University, Sakarya, 54050 Turkey
- Gozde Yolcu Oztel
- Software Engineering Department, Sakarya University, Sakarya, 54050 Turkey
- Devrim Akgun
- Software Engineering Department, Sakarya University, Sakarya, 54050 Turkey

177
Faragallah OS, El-Hoseny HM, El-Sayed HS. Efficient COVID-19 super pixel segmentation algorithm using MCFO-based SLIC. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 14:9217-9232. [PMID: 36310644 PMCID: PMC9589839 DOI: 10.1007/s12652-022-04425-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Accepted: 09/14/2022] [Indexed: 06/08/2023]
Abstract
In the field of computer vision segmentation, superpixels have become an important element of recent segmentation algorithms, especially for medical images. The Simple Linear Iterative Clustering (SLIC) algorithm is one of the most popular superpixel methods, as it is robust, less sensitive to image type, and beneficial to boundary recall in many kinds of image processing. Recently, the severity of COVID-19 has increased with the lack of an effective treatment or vaccine. As the coronavirus spreads in an unpredictable manner, there is a strong need to segment infected lung regions, no matter how small, for fast tracking and early detection. This can be difficult to achieve with traditional segmentation techniques. From this perspective, this paper presents an efficient modified central force optimization (MCFO)-based SLIC segmentation algorithm for analyzing chest CT images to detect positive COVID-19 cases. The performance of the proposed MCFO-based SLIC segmentation algorithm is evaluated and compared with a thresholding segmentation algorithm using different evaluation metrics, such as accuracy, boundary recall, F-measure, similarity index, MCC, Dice, and Jaccard. The outcomes demonstrate that the proposed MCFO-based SLIC segmentation algorithm achieves better detection of small infected regions in CT lung scans than thresholding segmentation.
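Two of the overlap metrics listed above, Dice and Jaccard, can be computed directly from binary masks. The sketch below shows the standard definitions; the masks are made-up examples, not the paper's data.

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice and Jaccard overlap scores for two binary masks.

    Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|.
    Both range from 0 (no overlap) to 1 (identical masks).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard

pred  = np.array([[1, 1, 0, 0]])   # predicted segmentation
truth = np.array([[1, 0, 1, 0]])   # ground-truth mask
dice, jacc = dice_jaccard(pred, truth)
print(round(dice, 3), round(jacc, 3))   # prints: 0.5 0.333
```

The other listed metrics (accuracy, F-measure, MCC) are likewise simple functions of the per-pixel confusion counts.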
Affiliation(s)
- Osama S. Faragallah
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif, 21944 Saudi Arabia
- Heba M. El-Hoseny
- Department of Computer Science, The Higher Future Institute for Specialized Technological Studies, El Shorouk, Egypt
- Hala S. El-Sayed
- Department of Electrical Engineering, Faculty of Engineering, Menoufia University, Shebin El-Kom, 32511 Egypt

178
Batra S, Sharma H, Boulila W, Arya V, Srivastava P, Khan MZ, Krichen M. An Intelligent Sensor Based Decision Support System for Diagnosing Pulmonary Ailment through Standardized Chest X-ray Scans. SENSORS (BASEL, SWITZERLAND) 2022; 22:7474. [PMID: 36236573 PMCID: PMC9571822 DOI: 10.3390/s22197474] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Revised: 09/28/2022] [Accepted: 09/29/2022] [Indexed: 06/16/2023]
Abstract
Academics and the health community are paying much attention to developing smart remote patient monitoring, sensors, and healthcare technology, and various studies integrate sophisticated deep learning strategies for the analysis of medical scans. A smart monitoring system is needed as a proactive diagnostic solution that may be employed in an epidemiological scenario such as COVID-19. Consequently, this work offers an intelligent medical-care system: an IoT-empowered, deep learning-based decision support system (DSS) for the automated detection and categorization of infectious diseases (COVID-19 and pneumothorax). The proposed DSS was evaluated using three independent standard-based chest X-ray scans. The suggested DSS predictor has been used to identify and classify areas on whole X-ray scans with abnormalities thought to be attributable to COVID-19, reaching an identification and classification accuracy of 89.58% for normal images and 89.13% for COVID-19 and pneumothorax. With the suggested DSS, a judgment on an individual chest X-ray scan can be made in approximately 0.01 s. As a result, the DSS described in this study can predict at a rate of 95 frames per second (FPS) for both models, which is close to real time.
Affiliation(s)
- Shivani Batra
- Department of Computer Science and Engineering, KIET Group of Institutions, Ghaziabad 201206, India
- Harsh Sharma
- Department of Computer Science and Engineering, KIET Group of Institutions, Ghaziabad 201206, India
- Wadii Boulila
- Robotics and Internet-of-Things Laboratory, Prince Sultan University, Riyadh 12435, Saudi Arabia
- RIADI Laboratory, National School of Computer Sciences, University of Manouba, Manouba 2010, Tunisia
- Vaishali Arya
- School of Engineering, GD Goenka University, Gurugram 122103, India
- Prakash Srivastava
- Department of Computer Science and Engineering, Graphic Era (Deemed to Be University), Dehradun 248002, India
- Mohammad Zubair Khan
- Department of Computer Science and Information, Taibah University, Medina 42353, Saudi Arabia
- Moez Krichen
- Faculty of Computer Science & IT, Al Baha University, Al Baha 65779, Saudi Arabia

179
|
Chi J, Zhang S, Han X, Wang H, Wu C, Yu X. MID-UNet: Multi-input directional UNet for COVID-19 lung infection segmentation from CT images. SIGNAL PROCESSING. IMAGE COMMUNICATION 2022; 108:116835. [PMID: 35935468 PMCID: PMC9344813 DOI: 10.1016/j.image.2022.116835] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 05/30/2022] [Accepted: 07/23/2022] [Indexed: 05/05/2023]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a worldwide existential health crisis with over 90 million confirmed cases. Segmenting lung infection from computed tomography (CT) scans via deep learning has great potential to assist the diagnosis and healthcare of COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; and (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. The original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter, and the blurry feature map are then adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the shortcut connections to refine the extracted features before they are transferred to the deconvolution parts. Furthermore, we propose a contour loss based on a local curvature histogram and combine it with the binary cross-entropy (BCE) loss and the intersection-over-union (IoU) loss for a better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over state-of-the-art methods on segmenting COVID-19 infections from CT images.
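The combined boundary-aware loss described above pairs BCE with an IoU term and a curvature-based contour term. A minimal NumPy sketch of the BCE + soft-IoU part might look like the following; the weights are illustrative, the contour term is omitted, and this is not the paper's implementation.

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    # Per-pixel binary cross entropy, averaged over the mask.
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def soft_iou_loss(p, y, eps=1e-7):
    # Differentiable IoU surrogate: 1 - soft intersection / soft union.
    inter = (p * y).sum()
    union = (p + y - p * y).sum()
    return float(1.0 - (inter + eps) / (union + eps))

def combined_loss(p, y, w_bce=1.0, w_iou=1.0):
    # Weighted sum of the two terms.  The paper additionally adds a
    # curvature-histogram contour term, omitted in this sketch, and
    # these weights are illustrative assumptions.
    return w_bce * bce_loss(p, y) + w_iou * soft_iou_loss(p, y)

y = np.array([[1.0, 0.0], [1.0, 0.0]])   # ground-truth mask
p = np.array([[0.9, 0.1], [0.8, 0.2]])   # predicted probabilities
print(combined_loss(p, y))
```

The BCE term scores each pixel independently, while the IoU term scores region overlap as a whole, which is why such combinations tend to give tighter boundaries than BCE alone.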
Affiliation(s)
- Jianning Chi
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Shuang Zhang
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaoying Han
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Huan Wang
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Chengdong Wu
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaosheng Yu
- Northeastern University, NO. 195, Chuangxin Road, Hunnan District, Shenyang, China

180
|
Chen Y, Zhou T, Chen Y, Feng L, Zheng C, Liu L, Hu L, Pan B. HADCNet: Automatic segmentation of COVID-19 infection based on a hybrid attention dense connected network with dilated convolution. Comput Biol Med 2022; 149:105981. [PMID: 36029749 PMCID: PMC9391231 DOI: 10.1016/j.compbiomed.2022.105981] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Revised: 08/03/2022] [Accepted: 08/14/2022] [Indexed: 12/01/2022]
Abstract
The automatic segmentation of lung infections in CT slices provides a rapid and effective strategy for diagnosing, treating, and assessing COVID-19 cases. However, segmentation of the infected areas presents several difficulties, including high intraclass variability and interclass similarity among infected areas, as well as blurred edges and low contrast. We therefore propose HADCNet, a deep learning framework that segments lung infections based on a dual hybrid attention strategy. HADCNet uses an encoder hybrid attention module to integrate feature information at different scales across the peer hierarchy to refine the feature map. A decoder hybrid attention module then uses an improved skip connection to embed the semantic information of higher-level features into lower-level features by integrating multi-scale contextual structures and assigning the spatial information of lower-level features to higher-level features. This captures the contextual dependencies of lesion features across levels and refines the semantic structure, which reduces the semantic gap between feature maps at different levels and improves the model's segmentation performance. We conducted fivefold cross-validations of our model on four publicly available datasets, with final mean Dice scores of 0.792, 0.796, 0.785, and 0.723. These results show that the proposed model outperforms popular state-of-the-art semantic segmentation methods and indicate its potential use in the diagnosis and treatment of COVID-19.
Affiliation(s)
- Ying Chen
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
- Taohui Zhou
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
- Yi Chen
- Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, PR China.
- Longfeng Feng
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
- Cheng Zheng
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
- Lan Liu
- Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China.
- Liping Hu
- Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China.
- Bujian Pan
- Department of Hepatobiliary Surgery, Wenzhou Central Hospital, The Dingli Clinical Institute of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, PR China.

181
|
Jin G, Liu C, Chen X. An efficient deep neural network framework for COVID-19 lung infection segmentation. Inf Sci (N Y) 2022; 612:745-758. [PMID: 36068814 PMCID: PMC9436790 DOI: 10.1016/j.ins.2022.08.059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 08/11/2022] [Accepted: 08/13/2022] [Indexed: 02/04/2023]
Abstract
Since the outbreak of Coronavirus Disease 2019 (COVID-19) in 2020, it has significantly affected the global health system. Using deep learning to automatically segment pneumonia lesions from computed tomography (CT) images can greatly reduce the workload of physicians and extend traditional diagnostic methods. However, some challenges remain, including obtaining high-quality annotations and the subtle differences between classes. In the present study, a novel deep neural network based on the ResNet architecture is proposed to automatically segment infected areas from CT images. To reduce the annotation cost, a Vector Quantized Variational AutoEncoder (VQ-VAE) branch is added to reconstruct the input images in order to regularize the shared decoder, and the latent maps of the VQ-VAE are utilized to further improve the feature representation. Moreover, a novel proportions loss is presented to mitigate class imbalance and enhance the generalization ability of the model. In addition, a semi-supervised mechanism based on adversarial learning is added to the network, which can exploit the information of trusted regions in unlabeled images to further regularize the network. Extensive experiments on COVID-SemiSeg are performed to verify the superiority of the proposed method, and the results are in line with expectations.
Affiliation(s)
- Ge Jin
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Chuancai Liu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Collaborative Innovation Center of IoT Technology and Intelligent Systems, Minjiang University, Fuzhou 350108, China
- Xu Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China

182
|
Karthik R, Menaka R, Hariharan M, Kathiresan GS. AI for COVID-19 Detection from Radiographs: Incisive Analysis of State of the Art Techniques, Key Challenges and Future Directions. Ing Rech Biomed 2022; 43:486-510. [PMID: 34336141 PMCID: PMC8312058 DOI: 10.1016/j.irbm.2021.07.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 06/14/2021] [Accepted: 07/19/2021] [Indexed: 12/24/2022]
Abstract
Background and objective In recent years, Artificial Intelligence has had an evident impact on the way research addresses challenges in different domains. It has proven to be a huge asset, especially in the medical field, allowing for time-efficient and reliable solutions. This research aims to spotlight the impact of deep learning and machine learning models in the detection of COVID-19 from medical images. This is achieved by conducting a review of the state-of-the-art approaches proposed by the recent works in this field. Methods The main focus of this study is the recent developments of classification and segmentation approaches to image-based COVID-19 detection. The study reviews 140 research papers published in different academic research databases. These papers have been screened and filtered based on specified criteria, to acquire insights prudent to image-based COVID-19 detection. Results The methods discussed in this review include different types of imaging modality, predominantly X-rays and CT scans. These modalities are used for classification and segmentation tasks as well. This review seeks to categorize and discuss the different deep learning and machine learning architectures employed for these tasks, based on the imaging modality utilized. It also hints at other possible deep learning and machine learning architectures that can be proposed for better results towards COVID-19 detection. Along with that, a detailed overview of the emerging trends and breakthroughs in Artificial Intelligence-based COVID-19 detection has been discussed as well. Conclusion This work concludes by stipulating the technical and non-technical challenges faced by researchers and illustrates the advantages of image-based COVID-19 detection with Artificial Intelligence techniques.
Affiliation(s)
- R Karthik
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
- R Menaka
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
- M Hariharan
- School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
- G S Kathiresan
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India

183
|
Fan DP, Ji GP, Cheng MM, Shao L. Concealed Object Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:6024-6042. [PMID: 34061739 DOI: 10.1109/tpami.2021.3085766] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We present the first systematic study on concealed object detection (COD), which aims to identify objects that are visually embedded in their background. The high intrinsic similarities between concealed objects and their background make COD far more challenging than traditional object detection/segmentation. To better understand this task, we collect a large-scale dataset, called COD10K, which consists of 10,000 images covering concealed objects in diverse real-world scenarios from 78 object categories. Further, we provide rich annotations including object categories, object boundaries, challenging attributes, object-level labels, and instance-level annotations. Our COD10K is the largest COD dataset to date, with the richest annotations, which enables comprehensive concealed object understanding and can even be used to help progress several other vision tasks, such as detection, segmentation, and classification. Motivated by how animals hunt in the wild, we also design a simple but strong baseline for COD, termed the Search Identification Network (SINet). Without any bells and whistles, SINet outperforms twelve cutting-edge baselines on all datasets tested, making it a robust, general architecture that could serve as a catalyst for future research in COD. Finally, we provide some interesting findings and highlight several potential applications and future directions. To spark research in this new field, our code, dataset, and online demo are available at our project page: http://mmcheng.net/cod.
184
Alsaaidah B, Al-Hadidi MR, Al-Nsour H, Masadeh R, AlZubi N. Comprehensive Survey of Machine Learning Systems for COVID-19 Detection. J Imaging 2022; 8:267. [PMID: 36286361 PMCID: PMC9604704 DOI: 10.3390/jimaging8100267] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 09/11/2022] [Accepted: 09/20/2022] [Indexed: 01/14/2023] Open
Abstract
The last two years have been the most crucial and critical period of the COVID-19 pandemic, affecting most aspects of life worldwide. The virus spreads quickly within a short period, increasing its associated fatality rate. From a clinical perspective, several diagnostic methods are used for early detection to avoid virus propagation. However, the capabilities of these methods are limited and carry various associated challenges. Consequently, many studies have pursued automated COVID-19 detection without manual intervention, allowing accurate and fast decisions. As with other diseases and medical issues, Artificial Intelligence (AI) provides the medical community with potential technical solutions that help doctors and radiologists diagnose based on chest images. In this paper, a comprehensive review of these AI-based detection proposals is conducted. More than 200 papers are reviewed and analyzed, and 145 articles have been extensively examined to specify the proposed AI mechanisms applied to chest medical images. A comprehensive examination of the associated advantages and shortcomings is illustrated and summarized. Several findings are concluded as a result of a deep analysis of all the previous works using machine learning for COVID-19 detection, segmentation, and classification.
Affiliation(s)
- Bayan Alsaaidah
- Department of Computer Science, Prince Abdullah bin Ghazi Faculty of Information Technology and Communications, Al-Balqa Applied University, Salt 19117, Jordan
- Moh’d Rasoul Al-Hadidi
- Department of Electrical Engineering, Electrical Power Engineering and Computer Engineering, Faculty of Engineering, Al-Balqa Applied University, Salt 19117, Jordan
- Heba Al-Nsour
- Department of Computer Science, Prince Abdullah bin Ghazi Faculty of Information Technology and Communications, Al-Balqa Applied University, Salt 19117, Jordan
- Raja Masadeh
- Computer Science Department, The World Islamic Sciences and Education University, Amman 11947, Jordan
- Nael AlZubi
- Department of Electrical Engineering, Electrical Power Engineering and Computer Engineering, Faculty of Engineering, Al-Balqa Applied University, Salt 19117, Jordan

185
Murillo-González A, González D, Jaramillo L, Galeano C, Tavera F, Mejía M, Hernández A, Rivera DR, Paniagua JG, Ariza-Jiménez L, Garcés Echeverri JJ, Diaz León CA, Serna-Higuita DL, Barrios W, Arrázola W, Mejía MÁ, Arango S, Marín Ramírez D, Salinas-Miranda E, Quintero OL. Medical decision support system using weakly-labeled lung CT scans. FRONTIERS IN MEDICAL TECHNOLOGY 2022; 4:980735. [PMID: 36248019 PMCID: PMC9554434 DOI: 10.3389/fmedt.2022.980735] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 09/12/2022] [Indexed: 11/25/2022] Open
Abstract
Purpose Determination and development of an effective set of models leveraging Artificial Intelligence techniques to generate a system able to support clinical practitioners working with COVID-19 patients. It involves a pipeline including classification, lung and lesion segmentation, and lesion quantification of axial lung CT studies. Approach A deep neural network architecture based on DenseNet is introduced for the classification of weakly-labeled, variable-sized (and possibly sparse) axial lung CT scans. The models are trained and tested on aggregated, publicly available data sets with over 10 categories. To further assess the models, a data set was collected from multiple medical institutions in Colombia, including healthy patients, COVID-19 patients, and patients with other diseases. It comprises 1,322 CT studies from a diverse set of CT machines and institutions, amounting to over 550,000 slices. Each CT study was labeled based on a clinical test, and no per-slice annotation took place. This enabled a classification into Normal vs. Abnormal patients and, for those considered abnormal, an extra classification step into Abnormal (other diseases) vs. COVID-19. Additionally, the pipeline features a methodology to segment and quantify lesions of COVID-19 patients on the complete CT study, enabling easier localization and progress tracking. Moreover, multiple ablation studies were performed to appropriately assess the elements composing the classification pipeline. Results The best performing lung CT study classification models achieved 0.83 accuracy, 0.79 sensitivity, 0.87 specificity, 0.82 F1 score, and 0.85 precision for the Normal vs. Abnormal task. For the Abnormal vs. COVID-19 task, the model obtained 0.86 accuracy, 0.81 sensitivity, 0.91 specificity, 0.84 F1 score, and 0.88 precision.
The ablation studies showed that using the complete CT study in the pipeline resulted in greater classification performance, restating that relevant COVID-19 patterns cannot be ignored towards the top and bottom of the lung volume. Discussion The lung CT classification architecture introduced here can handle weakly-labeled, variable-sized, and possibly sparse axial lung studies, reducing the need for expert annotations at a per-slice level. Conclusions This work presents a working methodology that can guide the development of decision support systems for clinical reasoning in future interventionist or prospective studies.
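The classification metrics reported above (accuracy, sensitivity, specificity, precision, F1) all derive from a binary confusion matrix. A minimal sketch with made-up counts, not the study's data:

```python
def binary_classification_metrics(tp, fp, tn, fn):
    """Derive standard metrics from binary confusion-matrix counts.

    tp/fp/tn/fn are true/false positives and negatives; every metric
    below is a simple ratio of these four counts.
    """
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)        # recall on the positive class
    specificity = tn / (tn + fp)        # recall on the negative class
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts only.
acc, sens, spec, prec, f1 = binary_classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(acc, sens, spec, prec, f1)   # 0.85, 0.8, 0.9, ~0.889, ~0.842
```

Reporting sensitivity and specificity alongside accuracy matters here because the Normal/Abnormal classes are unlikely to be balanced in a clinical cohort.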
Affiliation(s)
- David González
- Radiology Department, Universidad CES, Medellín, Colombia
- Carlos Galeano
- Radiology Department, Universidad CES, Medellín, Colombia
- Fabby Tavera
- Radiology Department, Universidad de Antioquia, Medellín, Colombia
- Marcia Mejía
- Radiology Department, Universidad de Antioquia, Medellín, Colombia
- Alejandro Hernández
- Institución Prestadora de Servicios de Salud IPS Universitaria, Medellín, Colombia
- Wiston Arrázola
- Department of Mathematical Sciences, Universidad EAFIT, Medellín, Colombia
- Miguel Ángel Mejía
- Department of Mathematical Sciences, Universidad EAFIT, Medellín, Colombia
- Sebastián Arango
- Department of Mathematical Sciences, Universidad EAFIT, Medellín, Colombia
- O. L. Quintero
- Department of Mathematical Sciences, Universidad EAFIT, Medellín, Colombia

186
Li M, Li X, Jiang Y, Zhang J, Luo H, Yin S. Explainable multi-instance and multi-task learning for COVID-19 diagnosis and lesion segmentation in CT images. Knowl Based Syst 2022; 252:109278. [PMID: 35783000 PMCID: PMC9235304 DOI: 10.1016/j.knosys.2022.109278] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 06/12/2022] [Accepted: 06/13/2022] [Indexed: 11/16/2022]
Abstract
Coronavirus Disease 2019 (COVID-19) still presents a pandemic trend globally. Detecting infected individuals and analyzing their status can provide patients with proper healthcare while protecting the normal population. Chest CT (computed tomography) is an effective tool for screening of COVID-19. It displays detailed pathology-related information. To achieve automated COVID-19 diagnosis and lung CT image segmentation, convolutional neural networks (CNNs) have become mainstream methods. However, most of the previous works consider automated diagnosis and image segmentation as two independent tasks, in which some focus on lung fields segmentation and the others focus on single-lesion segmentation. Moreover, lack of clinical explainability is a common problem for CNN-based methods. In such context, we develop a multi-task learning framework in which the diagnosis of COVID-19 and multi-lesion recognition (segmentation of CT images) are achieved simultaneously. The core of the proposed framework is an explainable multi-instance multi-task network. The network learns task-related features adaptively with learnable weights, and gives explicable diagnosis results by suggesting local CT images with lesions as additional evidence. Then, severity assessment of COVID-19 and lesion quantification are performed to analyze patient status. Extensive experimental results on real-world datasets show that the proposed framework outperforms all the compared approaches for COVID-19 diagnosis and multi-lesion segmentation.
Affiliation(s)
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway

187
Ahmed N, Tan X, Ma L. LW-CovidNet: Automatic covid-19 lung infection detection from chest X-ray images. IET IMAGE PROCESSING 2022; 17:IPR212637. [PMID: 36246853 PMCID: PMC9538131 DOI: 10.1049/ipr2.12637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 05/06/2022] [Accepted: 05/10/2022] [Indexed: 06/16/2023]
Abstract
Coronavirus Disease 2019 (Covid-19) swept the world in early 2020, placing global health under threat. Automated lung infection detection from chest X-ray images has great potential for enhancing the traditional covid-19 treatment strategy. However, there are several challenges in detecting infected regions from chest X-ray images, including significant variance among infected features with similar spatial characteristics and multi-scale variations in the texture, shape, and size of infected regions. Moreover, the high parameter counts of transfer-learning models are also a constraint on deploying deep convolutional neural network (CNN) models in real-time environments. A novel lightweight covid-19 CNN (LW-CovidNet) method is proposed to automatically detect covid-19 infected regions from chest X-ray images and address these challenges. In the proposed hybrid method, standard and depth-wise separable convolutions are integrated to aggregate high-level features and to compensate for information loss by increasing the receptive field of the model. The boundaries of detected disease regions are then enhanced via an edge-attention method that applies heatmaps for accurate detection of disease regions. Extensive experiments indicate that the proposed LW-CovidNet surpasses most cutting-edge detection methods and also contributes to the advancement of state-of-the-art performance. It is envisaged that, with reliable accuracy, this method can be introduced for clinical practice in the future.
Collapse
Affiliation(s)
- Noor Ahmed
- School of Electronic Information and Electrical Engineering, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Xin Tan
- School of Electronic Information and Electrical Engineering, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Lizhuang Ma
- School of Electronic Information and Electrical Engineering, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
188
|
Lu X, Xu Y, Yuan W. DBF-Net: a semi-supervised dual-task balanced fusion network for segmenting infected regions from lung CT images. EVOLVING SYSTEMS 2022; 14:519-532. [PMID: 37193370 PMCID: PMC9483907 DOI: 10.1007/s12530-022-09466-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 09/11/2022] [Indexed: 11/25/2022]
Abstract
Accurate segmentation of infected regions in lung computed tomography (CT) images is essential to improve the timeliness and effectiveness of treatment for coronavirus disease 2019 (COVID-19). However, the main difficulties in developing lung lesion segmentation for COVID-19 remain the fuzzy boundary of the lung-infected region, the low contrast between the infected region and the surrounding normal region, and the difficulty of obtaining labeled data. To this end, we propose a novel dual-task consistent network framework that uses multiple inputs to continuously learn and extract lung infection region features, and which is used to generate reliable label images (pseudo-labels) and expand the dataset. Specifically, we periodically feed multiple sets of raw and data-augmented images into the two trunk branches of the network; the characteristics of the lung infection region are extracted by a lightweight double convolution (LDC) module and fusiform equilibrium fusion pyramid (FEFP) convolution in the backbone. According to the learned features, the infected regions are segmented, and pseudo-labels are generated based on a semi-supervised learning strategy, which effectively alleviates the problem of learning from unlabeled data. Our proposed semi-supervised dual-task balanced fusion network (DBF-Net) creates pseudo-labels on the COVID-SemiSeg dataset and the COVID-19 CT segmentation dataset. Furthermore, we perform lung infection segmentation with the DBF-Net model, achieving a segmentation sensitivity of 70.6% and a specificity of 92.8%. The results of the investigation indicate that the proposed network greatly enhances the segmentation of COVID-19 infections.
Collapse
Affiliation(s)
- Xiaoyan Lu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
| | - Yang Xu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
- Guiyang Aluminum Magnesium Design and Research Institute Co., Ltd, Guiyang, Guizhou, People’s Republic of China
| | - Wenhao Yuan
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
| |
Collapse
|
189
|
Owais M, Baek NR, Park KR. DMDF-Net: Dual multiscale dilated fusion network for accurate segmentation of lesions related to COVID-19 in lung radiographic scans. EXPERT SYSTEMS WITH APPLICATIONS 2022; 202:117360. [PMID: 35529253 PMCID: PMC9057951 DOI: 10.1016/j.eswa.2022.117360] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 01/24/2022] [Accepted: 04/25/2022] [Indexed: 05/14/2023]
Abstract
The recent disaster of COVID-19 has brought the whole world to the verge of devastation because of its highly transmissible nature. In this pandemic, radiographic imaging modalities, particularly computed tomography (CT), have shown remarkable performance for the effective diagnosis of this virus. However, the diagnostic assessment of CT data is a human-dependent process that requires sufficient time from expert radiologists. Recent developments in artificial intelligence have substituted several personal diagnostic procedures with computer-aided diagnosis (CAD) methods that can make an effective diagnosis, even in real time. In response to COVID-19, various CAD methods have been developed in the literature, which can detect and localize infectious regions in chest CT images. However, most existing methods do not provide cross-data analysis, which is an essential measure for assessing the generality of a CAD method. A few studies have performed cross-data analysis in their methods; nevertheless, these methods show limited results in real-world scenarios without addressing generality issues. Therefore, in this study, we attempt to address generality issues and propose a deep learning-based CAD solution for the diagnosis of COVID-19 lesions from chest CT images. We propose a dual multiscale dilated fusion network (DMDF-Net) for the robust segmentation of small lesions in a given CT image. The proposed network mainly utilizes the strength of multiscale deep feature fusion inside the encoder and decoder modules in a mutually beneficial manner to achieve superior segmentation performance. Additional pre- and post-processing steps are introduced in the proposed method to address the generality issues and further improve the diagnostic performance. In particular, the concept of post-region-of-interest (ROI) fusion is introduced in the post-processing step, which reduces the number of false positives and provides a way to accurately quantify the infected area of the lung. Consequently, the proposed framework outperforms various state-of-the-art methods, accomplishing superior infection segmentation results with an average Dice similarity coefficient of 75.7%, Intersection over Union of 67.22%, Average Precision of 69.92%, Sensitivity of 72.78%, Specificity of 99.79%, Enhance-Alignment Measure of 91.11%, and Mean Absolute Error of 0.026.
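The overlap metrics reported above (Dice, Intersection over Union, sensitivity, specificity) follow standard confusion-matrix definitions over binary masks. A minimal, self-contained sketch of those definitions (not the authors' evaluation code; the toy masks are illustrative):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Dice, IoU, sensitivity and specificity for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # lesion pixels found
    fp = np.sum(pred & ~gt)   # background flagged as lesion
    fn = np.sum(~pred & gt)   # lesion pixels missed
    tn = np.sum(~pred & ~gt)  # background correctly ignored
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0]])
pred = np.array([[0, 1, 0, 0],
                 [0, 1, 1, 1]])
m = overlap_metrics(pred, gt)
print({k: round(v, 3) for k, v in m.items()})
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), which is why papers often report both from the same counts.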
Collapse
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, South Korea
| | - Na Rae Baek
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, South Korea
| | - Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, South Korea
| |
Collapse
|
190
|
Yu Q, Qi L, Gao Y, Wang W, Shi Y. Crosslink-Net: Double-Branch Encoder Network via Fusing Vertical and Horizontal Convolutions for Medical Image Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:5893-5908. [PMID: 36074869 DOI: 10.1109/tip.2022.3203223] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Accurate image segmentation plays a crucial role in medical image analysis, yet it faces great challenges caused by various shapes, diverse sizes, and blurry boundaries. To address these difficulties, square kernel-based encoder-decoder architectures have been proposed and widely used, but their performance remains unsatisfactory. To further address these challenges, we present a novel double-branch encoder architecture. Our architecture is inspired by two observations. (1) Since the discrimination of the features learned via square convolutional kernels needs to be further improved, we propose utilizing nonsquare vertical and horizontal convolutional kernels in a double-branch encoder so that the features learned by both branches can be expected to complement each other. (2) Considering that spatial attention can help models to better focus on the target region in a large-sized image, we develop an attention loss to further emphasize the segmentation of small-sized targets. With the above two schemes, we develop a novel double-branch encoder-based segmentation framework for medical image segmentation, namely, Crosslink-Net, and validate its effectiveness on five datasets with experiments. The code is released at https://github.com/Qianyu1226/Crosslink-Net.
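The complementary behaviour of nonsquare kernels can be shown directly: a 1 × n kernel responds to intensity changes across columns (vertical edges), while an n × 1 kernel responds across rows. A small sketch using a naive "valid" cross-correlation (all names are illustrative, not from Crosslink-Net):

```python
import numpy as np

def valid_corr2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge: left half dark, right half bright
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)

horizontal_kernel = np.array([[-1.0, 0.0, 1.0]])  # 1 x 3: fires on vertical edges
vertical_kernel = horizontal_kernel.T             # 3 x 1: fires on horizontal edges

h_resp = np.abs(valid_corr2d(img, horizontal_kernel)).max()
v_resp = np.abs(valid_corr2d(img, vertical_kernel)).max()
print(h_resp, v_resp)  # only the 1 x 3 branch responds to this edge
```

Because each branch is blind to the other's edge orientation, fusing the two feature streams covers boundaries that a single square kernel must learn less efficiently.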
Collapse
|
191
|
Wu X, Zhang Z, Guo L, Chen H, Luo Q, Jin B, Gu W, Lu F, Chen J. FAM: focal attention module for lesion segmentation of COVID-19 CT images. JOURNAL OF REAL-TIME IMAGE PROCESSING 2022; 19:1091-1104. [PMID: 36091622 PMCID: PMC9441194 DOI: 10.1007/s11554-022-01249-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 08/12/2022] [Indexed: 05/27/2023]
Abstract
The novel coronavirus pneumonia (COVID-19) is among the world's most serious public health crises. In clinical practice, automatic segmentation of lesions from computed tomography (CT) images using deep learning methods provides a promising tool for identifying and diagnosing COVID-19. To improve the accuracy of image segmentation, an attention mechanism is commonly adopted to highlight important features. However, existing attention methods perform weakly or even degrade the accuracy of convolutional neural networks (CNNs) for various reasons (e.g., the low contrast of the boundary between the lesion and its surroundings, or image noise). To address this issue, we propose a novel focal attention module (FAM) for lesion segmentation of CT images. FAM contains a channel attention module and a spatial attention module. The spatial attention module first generates rough spatial attention, a shape prior of the lesion region obtained from the CT image using median filtering and distance transformation. The rough spatial attention is then fed into two 7 × 7 convolution layers for correction, yielding refined spatial attention on the lesion region. FAM was individually integrated with six state-of-the-art segmentation networks (e.g., UNet and DeepLabV3+), and we validated these six combinations on a public dataset of COVID-19 CT images. The results show that FAM improves the Dice Similarity Coefficient (DSC) of the CNNs by 2% and reduces the numbers of false negatives (FN) and false positives (FP) by up to 17.6%, significantly outperforming other attention modules such as CBAM and SENet. Furthermore, FAM significantly speeds up the convergence of model training and achieves better real-time performance. The codes are available at GitHub (https://github.com/RobotvisionLab/FAM.git).
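The rough-spatial-attention stage (median filter, then a distance transform of the binarised result as a shape prior) can be sketched with naive implementations; this is a simplified illustration only, assuming a fixed threshold, with the learned 7 × 7 correction convolutions omitted and all names hypothetical:

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """Naive 3x3 median filter with edge replication (denoising step)."""
    p = np.pad(img, 1, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + 3, j:j + 3])
    return out

def distance_transform(mask: np.ndarray) -> np.ndarray:
    """Brute-force Euclidean distance of each foreground pixel to the
    nearest background pixel (fine for toy arrays only)."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    out = np.zeros(mask.shape, dtype=float)
    for i, j in fg:
        out[i, j] = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1)).min()
    return out

def rough_spatial_attention(ct_slice: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Shape prior of the lesion: denoise, binarise, then weight pixels
    by their depth inside the candidate region via the distance transform."""
    smooth = median_filter3(ct_slice)
    mask = smooth >= thresh
    dist = distance_transform(mask)
    return dist / dist.max() if dist.max() > 0 else dist

# Toy slice with a bright 3x3 "lesion" in the centre
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
att = rough_spatial_attention(img)
```

The attention map peaks at the lesion's interior and vanishes on the background, which is exactly the kind of prior the refinement convolutions then correct.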
Collapse
Affiliation(s)
- Xiaoxin Wu
- State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Zhihao Zhang
- College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China
| | - Lingling Guo
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Hui Chen
- State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Qiaojie Luo
- School of Stomatology, Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| | - Bei Jin
- Department of Oral and Maxillofacial Surgery, Taizhou Hospital, Wenzhou Medical University, Taizhou, Zhejiang, China
| | - Weiyan Gu
- Department of Oral and Maxillofacial Surgery, Taizhou Hospital, Wenzhou Medical University, Taizhou, Zhejiang, China
| | - Fangfang Lu
- College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China
| | - Jingjing Chen
- Zhejiang University City College, Hangzhou, Zhejiang, China
| |
Collapse
|
192
|
Wang G, Guo S, Han L, Cekderi AB. Two-dimensional reciprocal cross entropy multi-threshold combined with improved firefly algorithm for lung parenchyma segmentation of COVID-19 CT image. Biomed Signal Process Control 2022; 78:103933. [PMID: 35774106 PMCID: PMC9217142 DOI: 10.1016/j.bspc.2022.103933] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 05/28/2022] [Accepted: 06/18/2022] [Indexed: 12/01/2022]
Abstract
The lesions in COVID-19 CT images show various kinds of ground-glass opacity and consolidation, distributed in the left lung, the right lung, or both. The lung lobes are uneven and have gray values similar to those of the surrounding arteries, veins, and bronchi. The lesions of COVID-19 also have different sizes and shapes in different periods. Accurate segmentation of the lung parenchyma in CT images is a key step in COVID-19 detection and diagnosis. To address the unsatisfactory performance of traditional image segmentation methods on lung parenchyma in CT images, a lung parenchyma segmentation method based on two-dimensional reciprocal cross entropy multi-thresholding combined with an improved firefly algorithm is proposed. First, the optimal threshold method is used for an initial segmentation of the lung, so that the segmentation threshold can adapt to the detailed information of the lung lobes, trachea, bronchi, and ground-glass opacity. The lung parenchyma is then further processed to obtain a lung parenchyma template, and defects in the template are repaired using an improved Freeman chain code and Bezier curves. Finally, the lung parenchyma is extracted by multiplying the template with the lung CT image. The method improves the accuracy of lung parenchyma segmentation in terms of CT image contrast clarity and the consistency of regional lung parenchyma features, with an average segmentation accuracy of 97.4%. The experimental results show that, for both COVID-19 and suspected cases, the method achieves a satisfactory segmentation effect with good accuracy and robustness.
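The paper's objective is a two-dimensional reciprocal cross entropy optimised by an improved firefly algorithm; neither is reproduced here. As a simplified illustration of the underlying idea of choosing a threshold by a cross-entropy criterion, the sketch below does an exhaustive 1-D minimum cross-entropy search (Li's criterion) over a toy bimodal image:

```python
import numpy as np

def min_cross_entropy_threshold(img: np.ndarray) -> int:
    """Exhaustive search for the threshold minimising the cross entropy
    between the image and its binarised version. Up to an additive
    constant, the criterion is:
        eta(t) = -sum_{g<t} g*log(mu_low) - sum_{g>=t} g*log(mu_high)
    where mu_low/mu_high are the class mean intensities."""
    g = img.ravel().astype(float)
    best_t, best_eta = None, np.inf
    for t in range(int(g.min()) + 1, int(g.max()) + 1):
        lo, hi = g[g < t], g[g >= t]
        if lo.size == 0 or hi.size == 0 or lo.mean() == 0:
            continue  # degenerate split (or log(0))
        eta = -(lo.sum() * np.log(lo.mean()) + hi.sum() * np.log(hi.mean()))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

# Toy bimodal "image": dark background around 2, bright tissue around 8
img = np.array([[2, 2, 2, 3, 2],
                [8, 9, 8, 8, 7]])
t = min_cross_entropy_threshold(img)
print(t)
```

The firefly algorithm in the paper replaces this brute-force scan with a population-based search, which matters once the criterion becomes two-dimensional and exhaustive evaluation gets expensive.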
Collapse
Affiliation(s)
- Guowei Wang
- State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, China
| | - Shuli Guo
- State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, China
| | - Lina Han
- Department of Cardiology, The Second Medical Center, National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| | - Anil Baris Cekderi
- State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, China
| |
Collapse
|
193
|
Killekar A, Grodecki K, Lin A, Cadet S, McElhinney P, Razipour A, Chan C, Pressman BD, Julien P, Chen P, Simon J, Maurovich-Horvat P, Gaibazzi N, Thakur U, Mancini E, Agalbato C, Munechika J, Matsumoto H, Menè R, Parati G, Cernigliaro F, Nerlekar N, Torlasco C, Pontone G, Dey D, Slomka P. Rapid quantification of COVID-19 pneumonia burden from computed tomography with convolutional long short-term memory networks. J Med Imaging (Bellingham) 2022; 9:054001. [PMID: 36090960 PMCID: PMC9446878 DOI: 10.1117/1.jmi.9.5.054001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 08/16/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease 2019 (COVID-19) patients but are not part of clinical routine because the required manual segmentation of lung lesions is prohibitively time consuming. We aim to automatically segment ground-glass opacities and high opacities (comprising consolidation and pleural effusion). Approach: We propose a new fully automated deep-learning framework for fast multi-class segmentation of lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional long short-term memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed using five-fold cross-validation to segment COVID-19 lesions. The performance of the method was evaluated on CT datasets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2, 68 unseen test cases, and 695 independent controls. Results: Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score of 0.89 ± 0.07; excellent correlations of 0.93 and 0.98 were obtained for ground-glass opacity (GGO) and high opacity volumes, respectively. In the external testing set of 68 patients, we observed a Dice score of 0.89 ± 0.06 as well as excellent correlations of 0.99 and 0.98 for GGO and high opacity volumes, respectively. Computations for a CT scan comprising 120 slices were performed in under 3 s on a computer equipped with an NVIDIA TITAN RTX GPU. Diagnostically, automated quantification of the percentage lung burden discriminated COVID-19 patients from controls with an area under the receiver operating characteristic curve of 0.96 (0.95-0.98). Conclusions: Our method allows for rapid, fully automated quantitative measurement of pneumonia burden from CT and can be used to assess the severity of COVID-19 pneumonia on chest CT.
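The percentage lung burden used for discrimination is, in essence, the ratio of segmented lesion volume to total lung volume. A minimal sketch of that quantification (not the authors' pipeline; masks and sizes are toy values):

```python
import numpy as np

def pneumonia_burden(lesion_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Pneumonia burden in percent: lesion volume over total lung volume.
    Lesion voxels falling outside the lung mask are discarded, restricting
    the measurement to lung tissue."""
    lesion_in_lung = np.logical_and(lesion_mask, lung_mask)
    lung_voxels = lung_mask.sum()
    if lung_voxels == 0:
        return 0.0
    return 100.0 * lesion_in_lung.sum() / lung_voxels

# Toy volume: a 4 x 4 x 4 grid with a two-slab "lung" and a small "lesion"
lung = np.zeros((4, 4, 4), dtype=bool)
lung[1:3, :, :] = True          # 32 lung voxels
lesion = np.zeros_like(lung)
lesion[1, 0, 0:4] = True        # 4 lesion voxels inside the lung
print(pneumonia_burden(lesion, lung))  # → 12.5
```

Because the ratio is dimensionless, it is robust to voxel spacing as long as both masks come from the same scan.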
Collapse
Affiliation(s)
- Aditya Killekar
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | | | - Andrew Lin
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Sebastien Cadet
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Priscilla McElhinney
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Aryabod Razipour
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Cato Chan
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Barry D. Pressman
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Peter Julien
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Peter Chen
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | | | | | | | - Udit Thakur
- Monash Health, Melbourne, Victoria, Australia
| | | | - Cecilia Agalbato
- University of Milan, Centro Cardiologico Monzino IRCCS, Milan, Italy
| | | | | | - Roberto Menè
- IRCCS Istituto Auxologico Italiano, Department of Cardiovascular, Neural and Metabolic Sciences, Milan, Italy
- University of Milano-Bicocca, Department of Medicine and Surgery, Milan, Italy
| | - Gianfranco Parati
- IRCCS Istituto Auxologico Italiano, Department of Cardiovascular, Neural and Metabolic Sciences, Milan, Italy
- University of Milano-Bicocca, Department of Medicine and Surgery, Milan, Italy
| | - Franco Cernigliaro
- IRCCS Istituto Auxologico Italiano, Department of Cardiovascular, Neural and Metabolic Sciences, Milan, Italy
- University of Milano-Bicocca, Department of Medicine and Surgery, Milan, Italy
| | | | - Camilla Torlasco
- IRCCS Istituto Auxologico Italiano, Department of Cardiovascular, Neural and Metabolic Sciences, Milan, Italy
- University of Milano-Bicocca, Department of Medicine and Surgery, Milan, Italy
| | - Gianluca Pontone
- University of Milan, Centro Cardiologico Monzino IRCCS, Milan, Italy
| | - Damini Dey
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| | - Piotr Slomka
- Cedars-Sinai Medical Center, Department of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences and Imaging, Los Angeles, California, United States
| |
Collapse
|
194
|
Chen S, Qiu C, Yang W, Zhang Z. Combining edge guidance and feature pyramid for medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
195
|
Two-stage hybrid network for segmentation of COVID-19 pneumonia lesions in CT images: a multicenter study. Med Biol Eng Comput 2022; 60:2721-2736. [PMID: 35856130 PMCID: PMC9294771 DOI: 10.1007/s11517-022-02619-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 06/15/2022] [Indexed: 12/15/2022]
Abstract
COVID-19 has been spreading continuously since its outbreak, and detecting its manifestations in the lung via chest computed tomography (CT) imaging is an indispensable step in investigating the diagnosis and prognosis of the disease. Automatic and accurate segmentation of infected lesions is highly desirable for fast and accurate diagnosis and further assessment of COVID-19 pneumonia. However, two-dimensional methods generally neglect the interslice context, while three-dimensional methods usually have high GPU memory consumption and computational cost. To address these limitations, we propose a two-stage hybrid UNet to automatically segment infected regions, evaluated on multicenter data obtained from seven hospitals. Moreover, we train a 3D-ResNet for COVID-19 pneumonia screening. In the segmentation tasks, the Dice coefficient reaches 97.23% for lung segmentation and 84.58% for lesion segmentation. In the classification task, our model identifies COVID-19 pneumonia with an area under the receiver-operating characteristic curve of 0.92, an accuracy of 92.44%, a sensitivity of 93.94%, and a specificity of 92.45%. Compared with other state-of-the-art methods, the proposed approach could serve as an efficient assisting tool for radiologists in COVID-19 diagnosis from CT images.
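The reported area under the receiver-operating characteristic curve can be computed directly as the normalised Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A self-contained sketch with toy scores (not the study's data):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg) -> float:
    """AUC as the normalised Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting one half."""
    scores_pos = np.asarray(scores_pos, dtype=float)
    scores_neg = np.asarray(scores_neg, dtype=float)
    greater = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (greater + 0.5 * ties) / (scores_pos.size * scores_neg.size)

covid = [0.9, 0.8, 0.7, 0.6]     # hypothetical model scores, COVID-19 cases
controls = [0.2, 0.4, 0.7, 0.1]  # hypothetical scores, non-COVID controls
print(auc_mann_whitney(covid, controls))
```

This pairwise formulation makes clear why an AUC of 0.92 is threshold-free: it summarises ranking quality over every possible operating point.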
Collapse
|
196
|
|
197
|
Gholamiankhah F, Mostafapour S, Abdi Goushbolagh N, Shojaerazavi S, Layegh P, Tabatabaei SM, Arabi H. Automated Lung Segmentation from Computed Tomography Images of Normal and COVID-19 Pneumonia Patients. IRANIAN JOURNAL OF MEDICAL SCIENCES 2022; 47:440-449. [PMID: 36117575 PMCID: PMC9445870 DOI: 10.30476/ijms.2022.90791.2178] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Revised: 10/01/2021] [Accepted: 12/10/2021] [Indexed: 11/30/2022]
Abstract
Background Automated image segmentation is an essential step in quantitative image analysis. This study assesses the performance of a deep learning-based model for lung segmentation from computed tomography (CT) images of normal and COVID-19 patients. Methods A descriptive-analytical study was conducted from December 2020 to April 2021 on the CT images of patients from various educational hospitals affiliated with Mashhad University of Medical Sciences (Mashhad, Iran). Of the selected images and corresponding lung masks of 1,200 confirmed COVID-19 patients, 1,080 were used to train a residual neural network. The performance of the residual network (ResNet) model was evaluated on two distinct external test datasets, namely the remaining 120 COVID-19 and 120 normal patients. Different evaluation metrics such as Dice similarity coefficient (DSC), mean absolute error (MAE), relative mean Hounsfield unit (HU) difference, and relative volume difference were calculated to assess the accuracy of the predicted lung masks. The Mann-Whitney U test was used to assess the difference between the corresponding values in the normal and COVID-19 patients. P<0.05 was considered statistically significant. Results The ResNet model achieved a DSC of 0.980 and 0.971 and a relative mean HU difference of -2.679% and -4.403% for the normal and COVID-19 patients, respectively. Comparable performance in lung segmentation of normal and COVID-19 patients indicated the model's accuracy in identifying lung tissue in the presence of COVID-19-associated infections, although slightly better performance was observed in the normal patients. Conclusion The ResNet model provides accurate and reliable automated lung segmentation of COVID-19-infected lung tissue. A preprint version of this article was published on arXiv before formal peer review (https://arxiv.org/abs/2104.02042).
Collapse
Affiliation(s)
- Faeze Gholamiankhah
- Department of Medical Physics, School of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
| | - Samaneh Mostafapour
- Department of Radiology Technology, School of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Nouraddin Abdi Goushbolagh
- Department of Medical Physics, School of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
| | - Seyedjafar Shojaerazavi
- Department of Cardiology, Ghaem Hospital, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Parvaneh Layegh
- Department of Radiology, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Seyyed Mohammad Tabatabaei
- Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Clinical Research Development Unit, Imam Reza Hospital, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
| |
Collapse
|
198
|
Shah A, Shah M. Advancement of deep learning in pneumonia/Covid-19 classification and localization: A systematic review with qualitative and quantitative analysis. Chronic Dis Transl Med 2022; 8:154-171. [PMID: 35572951 PMCID: PMC9086991 DOI: 10.1002/cdt3.17] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 01/20/2022] [Indexed: 12/15/2022] Open
Abstract
Around 450 million people are affected by pneumonia every year, resulting in 2.5 million deaths. Coronavirus disease 2019 (Covid-19) has also affected 181 million people, leading to 3.92 million casualties. The chances of death from both of these diseases can be significantly reduced if they are diagnosed early. However, the current methods of diagnosing pneumonia (clinical complaints plus a chest X-ray) and Covid-19 (real-time polymerase chain reaction) require the presence of expert radiologists and time, respectively. With the help of deep learning models, pneumonia and Covid-19 can be detected instantly from chest X-rays or computerized tomography (CT) scans, making the process of diagnosing pneumonia/Covid-19 faster and more widespread. In this paper, we aimed to elicit, explain, and evaluate, qualitatively and quantitatively, all advancements in deep learning methods aimed at detecting community-acquired pneumonia, viral pneumonia, and Covid-19 from images of chest X-rays and CT scans. Being a systematic review, the focus of this paper lies in explaining the various deep learning model architectures that have either been modified or created from scratch for the task at hand. For each model, this paper answers why the model is designed the way it is, the challenges that a particular model overcomes, and the tradeoffs that come with modifying a model to the required specifications. A grouped quantitative analysis of all the models described in the paper is also provided to quantify the effectiveness of different models with a similar goal. Some tradeoffs cannot be quantified, and hence they are mentioned explicitly in the qualitative analysis, which is done throughout the paper. By compiling and analyzing a large body of research details in one place, with all the datasets, model architectures, and results, we aimed to provide a one-stop solution to beginners and current researchers interested in this field.
Collapse
Affiliation(s)
- Aakash Shah
- Department of Computer Science & Engineering, Institute of Technology, Nirma University, Ahmedabad, India
| | - Manan Shah
- Department of Chemical Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar, India
| |
Collapse
|
199
|
Self-supervised region-aware segmentation of COVID-19 CT images using 3D GAN and contrastive learning. Comput Biol Med 2022; 149:106033. [PMID: 36041270 PMCID: PMC9419627 DOI: 10.1016/j.compbiomed.2022.106033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 07/23/2022] [Accepted: 08/20/2022] [Indexed: 11/20/2022]
Abstract
Medical image segmentation is a key initial step in several therapeutic applications. While most automatic segmentation models are supervised and require a well-annotated paired dataset, we introduce a novel annotation-free pipeline to perform segmentation of COVID-19 CT images. Our pipeline consists of three main subtasks: automatically generating a 3D pseudo-mask in self-supervised mode using a generative adversarial network (GAN), leveraging the quality of the pseudo-mask, and building a multi-objective segmentation model to predict lesions. Our proposed 3D GAN architecture removes infected regions from COVID-19 images and generates synthesized healthy images while keeping the 3D structure of the lung unchanged. A 3D pseudo-mask is then generated by subtracting the synthesized healthy images from the original COVID-19 CT images. We enhance the pseudo-masks using a contrastive learning approach to build a region-aware segmentation model that focuses more on the infected area. The final segmentation model can be used to predict lesions in COVID-19 CT images without any manual annotation at the pixel level. We show that our approach outperforms the existing state-of-the-art unsupervised and weakly supervised segmentation techniques on three datasets by a reasonable margin. Specifically, our method improves the segmentation results for CT images with low infection, increasing sensitivity by 20% and the Dice score by up to 4%. The proposed pipeline overcomes some of the major limitations of existing unsupervised segmentation approaches and opens up a novel horizon for different applications of medical image segmentation.
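The pseudo-mask generation step reduces, at its core, to subtracting the GAN-synthesised healthy scan from the original and thresholding the difference. A minimal sketch of that subtraction (the GAN itself is not reproduced; the threshold and arrays are hypothetical toy values):

```python
import numpy as np

def pseudo_mask(covid_ct: np.ndarray, synth_healthy: np.ndarray,
                thresh: float = 0.2) -> np.ndarray:
    """Annotation-free lesion pseudo-mask: voxels where the COVID-19 scan
    differs markedly from the synthesised healthy version of the same lung."""
    diff = np.abs(covid_ct - synth_healthy)
    return diff > thresh

# Toy one-slice example: the "GAN output" has the bright lesion patch removed
healthy = np.full((4, 4), 0.1)
covid = healthy.copy()
covid[1:3, 1:3] = 0.8           # infected region in the original scan
mask = pseudo_mask(covid, healthy)
print(mask.sum())  # → 4 voxels flagged
```

Keeping the lung's 3D structure unchanged in the synthesis is what makes the voxel-wise difference meaningful: any residual signal can be attributed to pathology rather than anatomy.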
|
200
|
Salehi M, Ardekani MA, Taramsari AB, Ghaffari H, Haghparast M. Automated deep learning-based segmentation of COVID-19 lesions from chest computed tomography images. Pol J Radiol 2022; 87:e478-e486. [PMID: 36091652 PMCID: PMC9453472 DOI: 10.5114/pjr.2022.119027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 03/30/2022] [Indexed: 11/30/2022] Open
Abstract
Purpose The novel coronavirus COVID-19, which spread globally in late December 2019, caused a global health crisis. Chest computed tomography (CT) has played a pivotal role in providing useful information for clinicians to detect COVID-19. However, segmenting COVID-19-infected regions from chest CT images is challenging, so an efficient tool for automated segmentation of COVID-19 lesions from chest CT is desirable. Hence, we aimed to propose 2D deep-learning algorithms to automatically segment COVID-19-infected regions from chest CT slices and to evaluate their performance. Material and methods Three well-known deep-learning networks, U-Net, U-Net++, and Res-Unet, were trained from scratch for automated segmentation of COVID-19 lesions using chest CT images. The dataset consisted of 20 labelled COVID-19 chest CT volumes, comprising 2112 images in total, and was split into 80% for training and validation and 20% for testing the proposed models. Segmentation performance was assessed using the Dice similarity coefficient, average symmetric surface distance (ASSD), mean absolute error (MAE), sensitivity, specificity, and precision. Results All proposed models achieved good performance for COVID-19 lesion segmentation. The U-Net and U-Net++ models provided better results than Res-Unet, with a mean Dice value of 85.0%. Among all models, U-Net achieved the highest segmentation performance, with 86.0% sensitivity and 2.22 mm ASSD. The U-Net model obtained a 1%, 2%, and 0.66 mm improvement over the Res-Unet model in Dice, sensitivity, and ASSD, respectively. Compared with Res-Unet, U-Net++ achieved a 1%, 2%, 0.1 mm, and 0.23 mm improvement in Dice, sensitivity, ASSD, and MAE, respectively. Conclusions Our data indicate that the proposed models achieve an average Dice value greater than 84.0%. Two-dimensional deep-learning models were able to accurately segment COVID-19 lesions from chest CT images, assisting radiologists in faster screening and quantification of the lesion regions for further treatment. Nevertheless, further studies will be required to evaluate the clinical performance and robustness of the proposed models for COVID-19 semantic segmentation.
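The Dice similarity coefficient and sensitivity reported in this abstract can be computed from binary masks as in the following minimal sketch; the function names and toy masks are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient: 2|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def sensitivity(pred: np.ndarray, target: np.ndarray) -> float:
    """True-positive rate: TP / (TP + FN)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn)

# Toy flattened masks: one true positive, one false positive, one false negative.
pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(dice_score(pred, target))   # → 0.5
print(sensitivity(pred, target))  # → 0.5
```

ASSD, the surface-distance metric also used in the paper, additionally requires extracting mask boundaries and computing distance transforms, and is omitted here for brevity.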
Affiliation(s)
- Mohammad Salehi
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Mahdieh Afkhami Ardekani
- Clinical Research Development Center, Shahid Mohammadi Hospital, Hormozgan University of Medical Sciences, Bandar-Abbas, Iran
- Department of Radiology, Faculty of Paramedicine, Hormozgan University of Medical Sciences, Bandar-Abbas, Iran
- Hamed Ghaffari
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Haghparast
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Department of Radiology, Faculty of Paramedicine, Hormozgan University of Medical Sciences, Bandar-Abbas, Iran
|