201. Zammit J, Fung DLX, Liu Q, Leung CKS, Hu P. Semi-supervised COVID-19 CT image segmentation using deep generative models. BMC Bioinformatics 2022; 23:343. PMID: 35974325; PMCID: PMC9381397; DOI: 10.1186/s12859-022-04878-6.
Abstract
Background: A recurring problem in image segmentation is a lack of labelled data. This problem is especially acute in the segmentation of lung computed tomography (CT) scans of patients with Coronavirus Disease 2019 (COVID-19): the disease simply has not been prevalent long enough to generate a large number of labels. Semi-supervised learning, which promises a way to learn from unlabelled data, has seen tremendous advances in recent years. However, due to the complexity of the label space, those advances have not carried over to image segmentation. Yet it is this same complexity that makes pixel-level labels so expensive to obtain, making semi-supervised learning all the more appealing. This study seeks to bridge the gap by proposing a novel model that combines the image segmentation abilities of deep convolutional networks with the semi-supervised learning abilities of generative models, applied to chest CT images of patients with COVID-19.
Results: We propose a novel generative model called the shared variational autoencoder (SVAE). The SVAE uses a five-layer deep hierarchy of latent variables with deep convolutional mappings between them, resulting in a generative model well suited to lung CT images. We then add a novel component to the final layer of the SVAE that forces the model to reconstruct the input image using a segmentation that must match the ground-truth segmentation whenever one is present. We name this final model StitchNet.
Conclusion: We compare StitchNet to other image segmentation models on a high-quality dataset of CT images from COVID-19 patients and show that its performance is comparable. We also explore the limitations and advantages of the proposed algorithm and suggest future research directions for this challenging problem.
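The core idea of this kind of semi-supervised training, applying a segmentation loss only on images that have ground-truth masks while every image contributes a reconstruction term, can be sketched as follows. This is an illustration of the principle, not the authors' StitchNet code; all names are ours.

```python
import numpy as np

def semi_supervised_loss(recon, images, seg_pred, seg_true, has_label):
    """Reconstruction loss on all images; segmentation loss only on labelled ones.

    recon, images : (N, H, W) arrays; seg_pred : (N, H, W) probabilities;
    seg_true : (N, H, W) binary masks; has_label : (N,) boolean mask.
    """
    # Every image, labelled or not, contributes a reconstruction term.
    recon_loss = np.mean((recon - images) ** 2)
    if has_label.any():
        p = np.clip(seg_pred[has_label], 1e-7, 1 - 1e-7)
        t = seg_true[has_label]
        # Per-pixel binary cross-entropy, averaged over labelled images only.
        seg_loss = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    else:
        seg_loss = 0.0
    return recon_loss + seg_loss
```

Unlabelled batches still drive the generative (reconstruction) objective, which is what lets the model exploit scans without masks.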
Affiliation(s)
- Judah Zammit
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Daryl L X Fung
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Qian Liu
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada; Department of Biochemistry and Medical Genetics, University of Manitoba, Room 308 - Basic Medical Sciences Building, 745 Bannatyne Avenue, Winnipeg, MB, R3E 0J3, Canada
- Carson Kai-Sang Leung
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Pingzhao Hu
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada; Department of Biochemistry and Medical Genetics, University of Manitoba, Room 308 - Basic Medical Sciences Building, 745 Bannatyne Avenue, Winnipeg, MB, R3E 0J3, Canada; CancerCare Manitoba Research Institute, Winnipeg, MB, Canada
202. Qayyum A, Lalande A, Meriaudeau F. Effective multiscale deep learning model for COVID19 segmentation tasks: A further step towards helping radiologist. Neurocomputing 2022; 499:63-80. PMID: 35578654; PMCID: PMC9095500; DOI: 10.1016/j.neucom.2022.05.009.
Abstract
Infections by SARS-CoV-2, the virus that causes COVID-19, are still rising, and techniques to diagnose or evaluate the disease continue to be investigated intensively. The use of CT as a complementary tool to other biological tests remains under scrutiny, since CT scans are prone to false positives: other lung diseases display similar characteristics on CT. Nevertheless, fully investigating CT images is of tremendous interest for understanding disease progression, so thousands of scans need to be segmented by radiologists to study infected areas. Over the last year, many deep learning models for segmenting lungs on CT were developed. Unfortunately, the lack of large, shared, annotated multicentric datasets led to models that were either under-tested (small datasets) or not properly compared (custom metrics, no shared dataset), often resulting in poor generalization. To address these issues, we developed a model that uses a multiscale and multilevel feature extraction strategy for COVID-19 segmentation and extensively validated it on several datasets to assess its generalization to other segmentation tasks on similar organs. The proposed model uses a novel encoder and decoder with a kernel-based atrous spatial pyramid pooling module at the bottom of the model to extract small features, together with a multistage skip-connection concatenation approach. The results show that the proposed model can be trained on a small-scale dataset and still generalize to other segmentation tasks. It achieved a Dice score of 90% on a 100-case dataset, 95% on the NSCLC dataset, 88.49% on the COVID-19 dataset, and 97.33% on the StructSeg 2019 dataset, compared against existing state-of-the-art models. The proposed solution could be used for COVID-19 segmentation in clinical applications.
The source code is publicly available at https://github.com/RespectKnowledge/Mutiscale-based-Covid-_segmentation-usingDeep-Learning-models.
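Dice scores like those reported above measure the overlap between a predicted mask and the ground truth. A minimal sketch of the metric (our own helper, not code from the cited repository):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of any matching shape.

    Returns 2*|A & B| / (|A| + |B|); eps guards the empty-mask case.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap; disjoint masks score near 0.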
Affiliation(s)
- Abdul Qayyum
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Alain Lalande
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
203. Chen X, Zhang Y, Cao G, Zhou J, Lin Y, Chen B, Nie K, Fu G, Su MY, Wang M. Dynamic change of COVID-19 lung infection evaluated using co-registration of serial chest CT images. Front Public Health 2022; 10:915615. PMID: 36033815; PMCID: PMC9412202; DOI: 10.3389/fpubh.2022.915615.
Abstract
Purpose: To evaluate the volumetric change of COVID-19 lesions in the lungs of patients receiving serial CT imaging for monitoring the evolution of the disease and the response to treatment.
Materials and methods: A total of 48 patients (28 males, 20 females) with confirmed COVID-19 infection who received chest CT examination were identified. The age range was 21-93 years (mean 54 ± 18 years). Of them, 33 patients received a first follow-up (F/U) scan, 29 a second, and 11 a third. The lesion region of interest (ROI) was manually outlined. A two-step registration method, first Affine alignment and then the non-rigid Demons algorithm, was developed to match the lung areas on the baseline and F/U images. The baseline lesion ROI was mapped to the F/U images using the resulting geometric transformation matrix, and the radiologist outlined the lesion ROI on the F/U CT again.
Results: The median (interquartile range) lesion volume (cm3) was 30.9 (83.1) at the baseline CT exam, 18.3 (43.9) at the first F/U, 7.6 (18.9) at the second, and 0.6 (19.1) at the third, showing a significant decreasing trend over time. The two-step registration significantly decreased the mean squared error (MSE) between baseline and F/U images (p < 0.001), and the method could match the lung areas and the large vessels inside the lung. When the mapped baseline ROIs were used as references, the second-look ROI drawing showed a significantly increased volume (p < 0.05), presumably because all areas infected at baseline were then taken into account.
Conclusion: The results suggest that the registration method can assist in evaluating longitudinal changes of COVID-19 lesions on chest CT.
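The registration quality criterion here is the MSE between the fixed (baseline) and aligned follow-up images. As a toy illustration of that idea only (the paper's actual pipeline is Affine alignment followed by non-rigid Demons, which this does not reproduce), one can brute-force the integer translation minimizing MSE:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-shape images."""
    return float(np.mean((a - b) ** 2))

def best_translation(fixed, moving, max_shift=3):
    """Search integer (dy, dx) shifts of `moving`; return the shift
    minimizing MSE against `fixed`, plus that MSE (wrap-around shifts)."""
    best, best_err = (0, 0), mse(fixed, moving)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = mse(fixed, shifted)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```

Real registration toolkits (e.g. SimpleITK's Demons filters) optimize far richer deformation models, but the before/after MSE comparison reported in the paper is exactly this kind of quantity.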
Affiliation(s)
- Xiao Chen
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yang Zhang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States; Department of Radiological Sciences, University of California, Irvine, CA, United States
- Guoquan Cao
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jiahuan Zhou
- Department of Radiology, Yuyao Hospital of Traditional Chinese Medicine, Ningbo, China
- Ya Lin
- The People's Hospital of Cangnan, Wenzhou, China
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Gangze Fu
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (correspondence: Gangze Fu)
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, CA, United States; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan (correspondence: Min-Ying Su)
- Meihao Wang
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (correspondence: Meihao Wang)
204. Uçar M. Automatic segmentation of COVID-19 from computed tomography images using modified U-Net model-based majority voting approach. Neural Comput Appl 2022; 34:21927-21938. PMID: 35968248; PMCID: PMC9362439; DOI: 10.1007/s00521-022-07653-z.
Abstract
Coronavirus disease (COVID-19) is a major public health problem that has spread rapidly around the world and caused the death of millions of people. Studies to determine the factors affecting the disease, take preventive action, and find an effective treatment are therefore at the forefront. In this study, a deep learning and segmentation-based approach is proposed for detecting COVID-19 disease from computed tomography images. The proposed model was created by modifying the encoder part of the U-Net segmentation model, using the VGG16, ResNet101, DenseNet121, InceptionV3, and EfficientNetB5 deep learning models in turn as encoders. The results obtained with each modified U-Net model were then combined by majority voting to reach a final result. In experimental tests, the proposed model achieved an 85.03% Dice score, 89.13% sensitivity, and 99.38% specificity on the COVID-19 segmentation test dataset. These results indicate that the proposed model can particularly benefit clinicians in terms of time and cost.
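The majority-vote fusion of several segmentation outputs, as described above, reduces to a pixel-wise vote over binary masks. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def majority_vote(masks):
    """Pixel-wise majority vote over a list of same-shape binary masks.

    Ties (exactly half the voters) count as foreground here; an odd
    ensemble size, like the five modified U-Nets above, avoids ties.
    """
    stack = np.stack([m.astype(np.uint8) for m in masks])
    votes = stack.sum(axis=0)
    return (votes * 2 >= len(masks)).astype(np.uint8)
```

With five encoder variants, a pixel is labelled as lesion when at least three of the five models agree.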
Affiliation(s)
- Murat Uçar
- Department of Management Information Systems, Faculty of Business and Management Sciences, İskenderun Technical University, 31200 İskenderun, Hatay, Turkey
205. Bakkouri I, Afdel K. MLCA2F: Multi-Level Context Attentional Feature Fusion for COVID-19 lesion segmentation from CT scans. Signal Image Video Process 2022; 17:1181-1188. PMID: 35935538; PMCID: PMC9346062; DOI: 10.1007/s11760-022-02325-w.
Abstract
In diagnosis and treatment planning for Coronavirus disease 2019 (COVID-19), accurate segmentation of infected areas is challenging due to large variations in lesion size, shape, and position, boundary ambiguity, and complex structure. To bridge these gaps, this study presents a robust deep learning model based on a novel multi-scale contextual information fusion strategy, called Multi-Level Context Attentional Feature Fusion (MLCA2F), which consists of Multi-Scale Context-Attention Network (MSCA-Net) blocks for segmenting COVID-19 lesions from computed tomography (CT) images. Unlike previous classical deep learning models, MSCA-Net integrates Multi-Scale Contextual Feature Fusion (MC2F) to learn more lesion details and Multi-Context Attentional Feature (MCAF) to guide the model in estimating the boundary position of infected regions. Extensive experiments were performed on the Kaggle CT dataset to explore the optimal structure of MLCA2F. The experiments show that the proposed methodology compares favorably with current state-of-the-art methods. We therefore conclude that the MLCA2F framework has the potential to substantially improve conventional segmentation methods for assisting clinical decision-making.
Affiliation(s)
- Ibtissam Bakkouri
- Laboratory of Computer Systems and Vision (LabSIV), Department of Computer Science, Faculty of Science, Ibn Zohr University, BP 8106, 80000 Agadir, Morocco
- Karim Afdel
- Laboratory of Computer Systems and Vision (LabSIV), Department of Computer Science, Faculty of Science, Ibn Zohr University, BP 8106, 80000 Agadir, Morocco
206. Integrating Digital Twins and Deep Learning for Medical Image Analysis in the era of COVID-19. Virtual Reality & Intelligent Hardware 2022; 4:292-305. PMCID: PMC9458475; DOI: 10.1016/j.vrih.2022.03.002.
Abstract
A digital twin is a virtual representation of a device or process that captures the physical properties of the environment and the operational algorithms/techniques, here in the context of medical devices and technology. It can help healthcare organizations improve medical processes, enhance the patient experience, lower operating expenses, and extend the value of care. In the current COVID-19 pandemic, various medical devices and processes, e.g., X-ray and CT scan machines, are constantly being used to collect and analyze medical images. While collecting and processing such an extensive volume of image data, machines and processes sometimes suffer system failures that can create critical issues for hospitals and patients. We therefore introduce a digital-twin-based smart healthcare system integrated with medical devices, which can collect information about the current health condition, configuration, and maintenance history of a device/machine/system. Furthermore, the medical images (X-rays) are analyzed by a deep learning model to detect COVID-19 infection. The designed system is based on the Cascade R-CNN architecture, a multi-stage extension of the Region-based Convolutional Neural Network (R-CNN) in which successive detector stages are increasingly selective against close and small false positives. The stages are trained sequentially, using the output of one stage to train the next; at each stage, the bounding boxes are readjusted and the Intersection over Union (IoU) threshold defining positives is raised, which counteracts overfitting. We trained the model on X-ray images, starting from weights previously trained on another dataset.
The developed system achieves good accuracy in the COVID-19 detection phase. Experimental outcomes show the efficiency of the detection architecture, which attains a mean Average Precision (mAP) of 0.94.
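The stage-wise resampling in Cascade R-CNN, and the mAP evaluation above, are both driven by Intersection over Union between bounding boxes. The metric itself is simple (a self-contained sketch, not code from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2), x1 < x2, y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    # Union = sum of areas minus the doubly counted intersection.
    return inter / (area_a + area_b - inter) if inter else 0.0
```

Each cascade stage keeps only proposals whose IoU with a ground-truth box exceeds its (progressively higher) threshold, e.g. 0.5, 0.6, 0.7.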
207. Zou K, Tao T, Yuan X, Shen X, Lai W, Long H. An interactive dual-branch network for hard palate segmentation of the oral cavity from CBCT images. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109549.
208. Liang S, Nie R, Cao J, Wang X, Zhang G. FCF: Feature complement fusion network for detecting COVID-19 through CT scan images. Appl Soft Comput 2022; 125:109111. PMID: 35693545; PMCID: PMC9167685; DOI: 10.1016/j.asoc.2022.109111.
Abstract
COVID-19 spreads rapidly; diagnosing the disease accurately and promptly is essential for quarantine and medical treatment. RT-PCR plays a crucial role in diagnosing COVID-19, whereas computed tomography (CT) delivers a faster result when combined with artificial-intelligence assistance. Developing a deep learning classification model for detecting COVID-19 from CT images can therefore help doctors in consultation. We propose a feature complement fusion network (FCF) for detecting COVID-19 from lung CT scan images. The framework extracts local features with a CNN extractor and global features with a ViT extractor, each compensating for the limited receptive field of the other. Thanks to the attention mechanism in our feature complement Transformer (FCT), the extracted local and global feature embeddings achieve a better representation. We combined a supervised with a weakly supervised training strategy, which lets the CNN guide the ViT to converge faster. We obtained 99.34% accuracy on our test set, surpassing current state-of-the-art classification models. Moreover, the proposed structure easily extends to other classification tasks by swapping in other suitable extractors.
Affiliation(s)
- Shu Liang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
- Rencan Nie
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China; School of Automation, Southeast University, Nanjing, 210096, Jiangsu, China
- Jinde Cao
- School of Mathematics, Southeast University, Nanjing, 210096, Jiangsu, China; Yonsei Frontier Lab, Yonsei University, Seoul, 03722, South Korea
- Xue Wang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
- Gucheng Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
209. Ding W, Chakraborty S, Mali K, Chatterjee S, Nayak J, Das AK, Banerjee S. An Unsupervised Fuzzy Clustering Approach for Early Screening of COVID-19 From Radiological Images. IEEE Trans Fuzzy Syst 2022; 30:2902-2914. PMID: 36345371; PMCID: PMC9454279; DOI: 10.1109/tfuzz.2021.3097806.
Abstract
A global pandemic is being witnessed worldwide owing to the rapid outbreak of the deadly COVID-19 virus. To curb its fast spread, and in the absence of specialized drugs, early adoption of precautionary measures and supportive treatment is essential. The prime objective of this article is to use radiological images as a tool to help in early diagnosis. Interval type-2 fuzzy clustering is blended with superpixels and metaheuristics to efficiently segment the radiological images. Despite the noise sensitivity of the watershed-based approach, it is adopted for superpixel computation owing to its simplicity; the noise problem is handled by preserving the important edge information of the gradient image through morphological opening- and closing-based reconstruction operations. The traditional objective function of the fuzzy c-means clustering algorithm is modified to incorporate spatial information from the neighboring superpixel-based local window. The computational overhead of processing a huge amount of spatial information is reduced by the superpixel representation, and the optimal clusters are determined by a modified version of the flower pollination algorithm. Although the proposed approach performs well, it should not be considered an alternative to gold-standard COVID-19 detection tests. Experimental results are promising enough to deploy this approach in real-life applications.
Affiliation(s)
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Shouvik Chakraborty
- Department of Computer Science and Engineering, University of Kalyani, Kalyani 741235, India
- Kalyani Mali
- Department of Computer Science and Engineering, University of Kalyani, Kalyani 741235, India
- Sankhadeep Chatterjee
- Department of Computer Science and Engineering, University of Engineering and Management, Kolkata 700160, India
- Janmenjoy Nayak
- Department of Computer Science and Engineering, Aditya Institute of Technology and Management, Srikakulam 532201, India
- Asit Kumar Das
- Department of Computer Science and Technology, Indian Institute of Engineering Science and Technology, Howrah 711103, India
- Soumen Banerjee
- Department of Electronics and Communication Engineering, University of Engineering and Management, Kolkata 700160, India
210. Handling class imbalance in COVID-19 chest X-ray images classification: Using SMOTE and weighted loss. Appl Soft Comput 2022; 129:109588. PMID: 36061418; PMCID: PMC9422401; DOI: 10.1016/j.asoc.2022.109588.
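One common recipe for the weighted-loss side of class-imbalance handling is inverse-frequency ("balanced") class weights; whether this paper uses exactly this scheme is not stated in the entry, so the following is a generic sketch:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights w[c] = n_samples / (n_classes * count[c]).

    Rare classes (e.g. a minority COVID-19 class) receive weights > 1,
    so their misclassifications cost more in a weighted cross-entropy.
    Empty classes are clamped to a count of 1 to avoid division by zero.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1))
```

These weights are typically passed to the loss (e.g. a `weight=` argument of a cross-entropy implementation), while SMOTE instead rebalances the data itself by synthesizing minority-class samples.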
211
|
Siddiqui S, Arifeen M, Hopgood A, Good A, Gegov A, Hossain E, Rahman W, Hossain S, Al Jannat S, Ferdous R, Masum S. Deep Learning Models for the Diagnosis and Screening of COVID-19: A Systematic Review. SN COMPUTER SCIENCE 2022; 3:397. [PMID: 35911439 PMCID: PMC9312319 DOI: 10.1007/s42979-022-01326-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 04/11/2022] [Indexed: 10/29/2022]
Abstract
COVID-19, caused by SARS-CoV-2, has been declared a global pandemic by the WHO. Early diagnosis of COVID-19 patients may reduce the impact of the coronavirus using modern computational methods like deep learning. In this study, various deep learning models based on CT and chest X-ray images are reviewed and compared as an alternative to reverse transcription-polymerase chain reaction. The study consists of three stages: planning, conduction, and analysis/reporting. In the conduction stage, inclusion and exclusion criteria were applied during literature searching and identification, followed by quality assessment rules under which articles scoring above 75 were included. In the analysis/reporting stage, all the papers were reviewed and analysed; after quality assessment of the individual papers, 57 articles were adopted for the systematic literature review. For each paper, a critical analysis, including the reported evaluation metrics, existing contributions, and motivation, has been tracked with suitable illustrations, and several insights are interpreted with appropriate annotation. A set of comparisons is then enumerated and discussed. Convolutional neural networks are the most commonly used deep learning architecture for COVID-19 disease classification and identification from X-ray and CT images. Various prior studies neither included data from a hospital setting nor considered data preprocessing before training a deep learning model.
Affiliation(s)
- Shah Siddiqui
- Faculty of Technology, The University of Portsmouth (UoP), Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
- School of Computing, University of Portsmouth (UoP), Lion Terrace, Portsmouth, PO1 3HE, UK
- Murshedul Arifeen
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Adrian Hopgood
- Faculty of Technology, The University of Portsmouth (UoP), Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
- Alice Good
- Faculty of Technology, The University of Portsmouth (UoP), Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
- Alexander Gegov
- Faculty of Technology, The University of Portsmouth (UoP), Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
- Elias Hossain
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Wahidur Rahman
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Shazzad Hossain
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Sabila Al Jannat
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Rezowan Ferdous
- Time Research and Innovation (TRI), 189 Foundry Lane, Southampton, SO15 3JZ, UK
- 336/7, TV Road East Rampura, Khilgaon, Dhaka 1219, Bangladesh
- Shamsul Masum
- Faculty of Technology, The University of Portsmouth (UoP), Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
212. A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function. Electronics 2022. DOI: 10.3390/electronics11152296.
Abstract
COVID-19 has been spreading rapidly, affecting billions of people globally, with significant public health impacts. Biomedical imaging, such as computed tomography (CT), has significant potential as a substitute for the screening process. Automatic segmentation of images is therefore highly desirable as clinical decision support for extensive evaluation of disease control and monitoring: it plays a central role in precise segmentation of infected regions in CT scans, helping in screening, diagnosis, and disease monitoring. For this purpose, we introduce a deep learning framework for automated segmentation of COVID-19-infected lesions/regions in lung CT scan images. Specifically, we adopt the U-Net segmentation model and add an attention mechanism to enhance the framework's ability to segment virus-infected regions: since not all features extracted by the encoder are valuable for segmentation, the attention mechanism yields a better representation of the features. Moreover, we apply a boundary loss function to deal with small and unbalanced lesion segmentations. Using different public CT scan image datasets, we validated the framework's effectiveness against other segmentation techniques; the experimental outcomes show improved performance for automated segmentation of lungs and infected areas in CT scans. We considered both the boundary loss and a weighted binary cross-entropy Dice loss function. The overall Dice accuracies of the framework are 0.93 for the lungs and 0.76 for the COVID-19-infected regions.
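A common formulation of the weighted binary cross-entropy plus Dice loss mentioned above looks as follows. This is a generic sketch of that loss family, not the paper's exact weighting (which, along with its boundary-loss term, is not fully specified in the abstract):

```python
import numpy as np

def bce_dice_loss(pred, target, pos_weight=1.0, eps=1e-7):
    """Weighted binary cross-entropy plus soft Dice loss for one mask.

    pred: predicted probabilities in (0, 1); target: binary mask;
    pos_weight scales the foreground BCE term, a common trick when
    lesions occupy only a small fraction of the image.
    """
    p = np.clip(pred, eps, 1 - eps)
    t = target.astype(float)
    bce = -np.mean(pos_weight * t * np.log(p) + (1 - t) * np.log(1 - p))
    # Soft Dice on probabilities; (1 - dice) is the loss contribution.
    inter = np.sum(p * t)
    dice = (2 * inter + eps) / (np.sum(p) + np.sum(t) + eps)
    return bce + (1 - dice)
```

BCE penalizes each pixel independently, while the Dice term directly optimizes region overlap, which is why the combination works well for small, unbalanced lesions.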
213. Alshayeji MH, ChandraBhasi Sindhu S, Abed S. CAD systems for COVID-19 diagnosis and disease stage classification by segmentation of infected regions from CT images. BMC Bioinformatics 2022; 23:264. PMID: 35794537; PMCID: PMC9261058; DOI: 10.1186/s12859-022-04818-4.
Abstract
Background Here propose a computer-aided diagnosis (CAD) system to differentiate COVID-19 (the coronavirus disease of 2019) patients from normal cases, as well as to perform infection region segmentation along with infection severity estimation using computed tomography (CT) images. The developed system facilitates timely administration of appropriate treatment by identifying the disease stage without reliance on medical professionals. So far, this developed model gives the most accurate, fully automatic COVID-19 real-time CAD framework. Results The CT image dataset of COVID-19 and non-COVID-19 individuals were subjected to conventional ML stages to perform binary classification. In the feature extraction stage, SIFT, SURF, ORB image descriptors and bag of features technique were implemented for the appropriate differentiation of chest CT regions affected with COVID-19 from normal cases. This is the first work introducing this concept for COVID-19 diagnosis application. The preferred diverse database and selected features that are invariant to scale, rotation, distortion, noise etc. make this framework real-time applicable. Also, this fully automatic approach which is faster compared to existing models helps to incorporate it into CAD systems. The severity score was measured based on the infected regions along the lung field. Infected regions were segmented through a three-class semantic segmentation of the lung CT image. Using severity score, the disease stages were classified as mild if the lesion area covers less than 25% of the lung area; moderate if 25–50% and severe if greater than 50%. Our proposed model resulted in classification accuracy of 99.7% with a PNN classifier, along with area under the curve (AUC) of 0.9988, 99.6% sensitivity, 99.9% specificity and a misclassification rate of 0.0027. 
The developed infected region segmentation model gave 99.47% global accuracy, 94.04% mean accuracy, 0.8968 mean IoU (intersection over union), 0.9899 weighted IoU, and a mean Boundary F1 (BF) contour matching score of 0.9453, using DeepLabv3+ with its weights initialized from ResNet-50. Conclusions The developed CAD system is able to perform fully automatic and accurate diagnosis of COVID-19, along with infected region extraction and disease stage identification. The ORB image descriptor with the bag-of-features technique and a PNN classifier achieved the best classification performance.
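As an editorial aside, the staging rule quoted above (mild below 25% lesion coverage, moderate 25–50%, severe above 50%) is simple enough to sketch as a function; the function and argument names below are illustrative, not taken from the paper:

```python
def severity_stage(lesion_area: float, lung_area: float) -> str:
    """Map the fraction of lung area covered by lesions to a disease stage.

    Thresholds follow the staging rule quoted in the abstract:
    mild < 25%, moderate 25-50%, severe > 50%.
    """
    if lung_area <= 0:
        raise ValueError("lung_area must be positive")
    fraction = lesion_area / lung_area
    if fraction < 0.25:
        return "mild"
    elif fraction <= 0.50:
        return "moderate"
    return "severe"
```

Because the decision depends only on the ratio, the same function applies whether the areas are given in pixels or in physical units.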
Affiliation(s)
- Mohammad H Alshayeji
- Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, P.O. Box 5969, 13060, Safat, Kuwait City, Kuwait.
- Sa'ed Abed
- Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, P.O. Box 5969, 13060, Safat, Kuwait City, Kuwait
214
Rajamani K, Gowda SD, Tej VN, Rajamani ST. Deformable attention (DANet) for semantic image segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3781-3784. [PMID: 36086414] [DOI: 10.1109/embc48229.2022.9871439]
Abstract
Deep learning based medical image segmentation is currently a widely researched topic. Attention mechanisms used with deep networks significantly benefit semantic segmentation tasks. The recent criss-cross attention module captures global self-attention while remaining memory and time efficient. However, capturing attention from only the pertinent non-local locations can substantially boost the accuracy of semantic segmentation networks. We propose a new Deformable Attention Network (DANet) that enables a more accurate contextual information computation in a similarly efficient way. Our novel technique is based on learning the deformation of the query, key and value attention feature maps in a continuous way. A deep segmentation network with this attention mechanism is able to capture attention from germane non-local locations. This boosts the performance of COVID-19 lesion segmentation compared to criss-cross attention within a U-Net. Our validation experiments show that the performance gain of the recursively applied deformable attention blocks comes from their ability to capture dynamic and precise (wider) attention context. DANet achieves a Dice score of 60.17% for COVID-19 lesion segmentation and improves the accuracy by 4.4 percentage points compared to a baseline U-Net.
215
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [PMID: 35472844] [PMCID: PMC9156578] [DOI: 10.1016/j.media.2022.102444]
Abstract
Deep learning has received extensive research interest in developing new medical image processing algorithms, and deep learning based models have been remarkably successful in a variety of medical imaging tasks to support disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis is bottlenecked mainly by the lack of large-sized and well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we review and summarize these recent studies to provide a comprehensive overview of applying deep learning methods to various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, which we summarize based on different application scenarios, including classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research efforts.
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ximin Wang
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Kar-Ming Fung
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Theresa C Thai
- Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Kathleen Moore
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Robert S Mannel
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
216
Khalifa NEM, Manogaran G, Taha MHN, Loey M. A deep learning semantic segmentation architecture for COVID-19 lesions discovery in limited chest CT datasets. Expert Systems 2022; 39:e12742. [PMID: 34177038] [PMCID: PMC8209878] [DOI: 10.1111/exsy.12742]
Abstract
During the COVID-19 epidemic, Computed Tomography (CT) is used to help in the diagnosis of patients. Most current studies on this subject rely on large, privately annotated datasets that are impractical to obtain from an organization, particularly while radiologists are fighting the coronavirus disease. It is challenging to compare these techniques since they were built on separate datasets, trained on different training sets, and tested using different metrics. In this research, a deep learning semantic segmentation architecture for COVID-19 lesion detection in limited chest CT datasets is presented. The proposed model architecture consists of encoder and decoder components. The encoder component contains three convolution and pooling layers, while the decoder contains three deconvolution and upsampling layers. The dataset consists of 20 lung CT scans belonging to 20 patients from two data sources. The total number of images in the dataset is 3520 CT scans with their labelled images. The dataset is split into 70% for the training phase and 30% for the testing phase. Images of the dataset are passed through a pre-processing phase to be resized and normalized. Five experimental trials were conducted, with different images selected for the training and testing phases in every trial. The proposed model achieves 0.993 global accuracy, and 0.987, 0.799, and 0.874 for weighted IoU, mean IoU, and mean BF score, respectively. Performance metrics such as precision, sensitivity, specificity, and F1 score strengthen the obtained results. The proposed model outperforms related works that use the same dataset in terms of performance and IoU metrics.
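The region-overlap metrics reported here (global accuracy, mean IoU) can be computed directly from a predicted and a ground-truth binary mask. The following is a generic sketch of the standard definitions, not the authors' code; it assumes flattened masks in which both classes occur, so no intersection-over-union denominator is zero:

```python
def confusion_counts(pred, truth):
    """Count TP/FP/FN/TN for two equal-length flattened binary masks."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def global_accuracy(pred, truth):
    """Fraction of pixels labelled correctly, regardless of class."""
    tp, fp, fn, tn = confusion_counts(pred, truth)
    return (tp + tn) / len(pred)

def mean_iou(pred, truth):
    """Unweighted mean IoU over foreground and background classes."""
    tp, fp, fn, tn = confusion_counts(pred, truth)
    iou_fg = tp / (tp + fp + fn)  # foreground intersection over union
    iou_bg = tn / (tn + fp + fn)  # background intersection over union
    return (iou_fg + iou_bg) / 2
```

Weighted IoU differs only in averaging the per-class IoUs by class pixel frequency instead of uniformly.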
Affiliation(s)
- Nour Eldeen M. Khalifa
- Department of Information Technology, Faculty of Computers & Artificial Intelligence, Cairo University, Cairo, Egypt
- Gunasekaran Manogaran
- University of California, Davis, California, USA
- College of Information and Electrical Engineering, Asia University, Taichung, Taiwan
- Mohamed Hamed N. Taha
- Department of Information Technology, Faculty of Computers & Artificial Intelligence, Cairo University, Cairo, Egypt
- Mohamed Loey
- Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha, Egypt
217
Chen T, Xiao J, Hu X, Zhang G, Wang S. Boundary-guided network for camouflaged object detection. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108901]
218
Al-Omari B, Ahmad T, Al-Rifai RH. SARS-CoV-2 and COVID-19 Research Trend during the First Two Years of the Pandemic in the United Arab Emirates: A PRISMA-Compliant Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:7753. [PMID: 35805413] [PMCID: PMC9266175] [DOI: 10.3390/ijerph19137753]
Abstract
Scientific research is an integral part of fighting the COVID-19 pandemic. This bibliometric analysis describes the COVID-19 research productivity of United Arab Emirates (UAE)-affiliated researchers during the first two years of the pandemic, 2020 to 2022. The Web of Science Core Collection (WoSCC) database was utilized to retrieve publications related to COVID-19 published by UAE-affiliated researcher(s). A total of 1008 publications met the inclusion criteria and were included in this bibliometric analysis. The most studied broad topics were general internal medicine (11.9%), public environmental occupational health (7.8%), pharmacology/pharmacy (6.3%), multidisciplinary sciences (5%), and infectious diseases (3.4%). About 67% were primary research articles, 16% were reviews, and the remaining were editorials and letters (11.5%), meeting abstracts/proceedings papers (5%), and document corrections (0.4%). The University of Sharjah was the leading UAE-affiliated organization, producing 26.3% of the publications and funding 1.8% of the 1008 published studies. This study features the research trends in COVID-19 research affiliated with the UAE and suggests future directions. There was observable national and international collaboration by the UAE-affiliated authors, particularly with researchers from the USA and England. This study highlights the need for in-depth systematic reviews addressing specific COVID-19 research-related questions and studied populations.
Affiliation(s)
- Basem Al-Omari
- Department of Epidemiology and Population Health, College of Medicine and Health Sciences, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- KU Research and Data Intelligence Support Center (RDISC), Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
- COVID-19 Research Epidemiology Sub-Committee of Abu Dhabi, Abu Dhabi Public Health Center, Abu Dhabi Department of Health, Abu Dhabi P.O. Box 5674, United Arab Emirates
- Tauseef Ahmad
- Vanke School of Public Health, Tsinghua University, Beijing 100084, China
- Department of Epidemiology and Health Statistics, School of Public Health, Southeast University, Nanjing 210096, China
- Rami H. Al-Rifai
- COVID-19 Research Epidemiology Sub-Committee of Abu Dhabi, Abu Dhabi Public Health Center, Abu Dhabi Department of Health, Abu Dhabi P.O. Box 5674, United Arab Emirates
- Institute of Public Health, College of Medicine and Health Sciences, United Arab Emirate University, Al Ain P.O. Box 15551, United Arab Emirates
219
Gao M, Feng X, Geng M, Jiang Z, Zhu L, Meng X, Zhou C, Ren Q, Lu Y. Bayesian statistics-guided label refurbishment mechanism: Mitigating label noise in medical image classification. Med Phys 2022; 49:5899-5913. [PMID: 35678232] [DOI: 10.1002/mp.15799]
Abstract
PURPOSE Deep neural networks (DNNs) have been widely applied in medical image classification, benefiting from their powerful mapping capability among medical images. However, these existing deep learning-based methods depend on an enormous amount of carefully labeled images. Meanwhile, noise is inevitably introduced in the labeling process, degrading the performance of models. Hence, it is significant to devise robust training strategies to mitigate label noise in medical image classification tasks. METHODS In this work, we propose a novel Bayesian statistics-guided label refurbishment mechanism (BLRM) for DNNs to prevent overfitting to noisy images. BLRM utilizes the maximum a posteriori probability in Bayesian statistics and an exponentially time-weighted technique to selectively correct the labels of noisy images. The training images are purified gradually over the training epochs when BLRM is activated, further improving classification performance. RESULTS Comprehensive experiments on both synthetic noisy images (the public OCT and Messidor datasets) and real-world noisy images (ANIMAL-10N) demonstrate that BLRM refurbishes the noisy labels selectively, curbing the adverse effects of noisy data. Also, the anti-noise BLRMs integrated with DNNs are effective at different noise ratios and are independent of backbone DNN architectures. In addition, BLRM is superior to state-of-the-art anti-noise methods. CONCLUSIONS These investigations indicate that the proposed BLRM is well capable of mitigating label noise in medical image classification tasks.
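The paper's exact BLRM update is given in the article itself; purely as a hedged illustration of the two ingredients named in the abstract, a label-refurbishment step can combine an exponentially time-weighted average of the model's predicted class probabilities with a MAP decision that overrides a noisy label only under confident disagreement. All names, the smoothing factor, and the 0.8 confidence threshold below are assumptions, not values from the paper:

```python
def ema_update(history, prediction, beta=0.9):
    """Exponentially time-weighted average of per-class predicted
    probabilities, updated once per epoch for one training image."""
    return [beta * h + (1 - beta) * p for h, p in zip(history, prediction)]

def refurbish_label(noisy_label, history, threshold=0.8):
    """Replace the given label with the MAP class of the averaged
    prediction only when the model confidently disagrees with it."""
    map_class = max(range(len(history)), key=history.__getitem__)
    if map_class != noisy_label and history[map_class] > threshold:
        return map_class
    return noisy_label
```

The time-weighting makes the correction depend on the model's consistent behaviour across epochs rather than on a single (possibly unstable) prediction.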
Affiliation(s)
- Mengdi Gao
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Ximeng Feng
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Mufeng Geng
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Zhe Jiang
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Lei Zhu
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Xiangxi Meng
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Products Administration), Department of Nuclear Medicine, Beijing Cancer Hospital & Institute, Beijing, China
- Chuanqing Zhou
- Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Qiushi Ren
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Shenzhen Bay Laboratory 5F, Institute of Biomedical Engineering, Shenzhen, China
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
220
Lung’s Segmentation Using Context-Aware Regressive Conditional GAN. Appl Sci (Basel) 2022. [DOI: 10.3390/app12125768]
Abstract
After COVID-19 pneumonia was declared a pandemic, researchers promptly advanced to seek solutions for patients fighting this fatal disease. Computed tomography (CT) scans offer valuable insight into how COVID-19 infection affects the lungs. Analysis of CT scans is very significant, especially when physicians are striving for quick solutions. This study successfully segmented lung infection due to COVID-19 and provided physicians with a quantitative analysis of the condition. COVID-19 lesions often occur near and over parenchyma walls, which are denser and exhibit lower contrast than the tissues outside the parenchyma. We applied adaptive Wallis and Gaussian filters alternately to regulate the outlining of the lungs and lesions near the parenchyma. We proposed a context-aware conditional generative adversarial network (CGAN) with gradient penalty and spectral normalization for automatic lung and lesion segmentation. The proposed CGAN exploits higher-order statistics compared to traditional deep-learning models, and it produced promising results for lung segmentation. The CGAN has shown outstanding results for COVID-19 lesion segmentation, with an accuracy of 99.91%, a DSC of 92.91%, and an AJC of 92.91%. Moreover, we achieved an accuracy of 99.87%, a DSC of 96.77%, and an AJC of 95.59% for lung segmentation. Additionally, the suggested network attained sensitivities of 100%, 81.02%, 76.45%, and 99.01% for the critical, severe, moderate, and mild infection severity levels, respectively. The proposed model outperformed state-of-the-art techniques for COVID-19 segmentation and detection.
221
Mubashar M, Ali H, Grönlund C, Azmat S. R2U++: a multiscale recurrent residual U-Net with dense skip connections for medical image segmentation. Neural Comput Appl 2022; 34:17723-17739. [PMID: 35694048] [PMCID: PMC9165712] [DOI: 10.1007/s00521-022-07419-7]
Abstract
U-Net is a widely adopted neural network in the domain of medical image segmentation. Despite its rapid adoption by the medical imaging community, its performance suffers on complicated datasets. The problem can be ascribed to its simple feature extracting blocks (encoder/decoder) and the semantic gap between encoder and decoder. Variants of U-Net (such as R2U-Net) have been proposed to address the problem of simple feature extracting blocks by making the network deeper, but this does not deal with the semantic gap problem. On the other hand, another variant, UNET++, deals with the semantic gap problem by introducing dense skip connections but has simple feature extraction blocks. To overcome these issues, we propose a new U-Net based medical image segmentation architecture, R2U++. In the proposed architecture, the changes adapted from the vanilla U-Net are: (1) the plain convolutional backbone is replaced by a deeper recurrent residual convolution block. The increased field of view of these blocks aids in extracting crucial features for segmentation, which is proven by the improvement in the overall performance of the network. (2) The semantic gap between encoder and decoder is reduced by dense skip pathways. These pathways accumulate features coming from multiple scales and apply concatenation accordingly. The modified architecture has embedded multi-depth models, and an ensemble of outputs taken from varying depths improves the performance on foreground objects appearing at various scales in the images. The performance of R2U++ is evaluated on four distinct medical imaging modalities: electron microscopy, X-rays, fundus, and computed tomography. The average gain achieved across different medical image segmentation datasets is 1.5 ± 0.37% in IoU score and 0.9 ± 0.33% in Dice score over UNET++, and 4.21 ± 2.72% in IoU and 3.47 ± 1.89% in Dice score over R2U-Net.
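At inference time, the multi-depth ensemble described above reduces to a pixel-wise average of the probability maps emitted by the embedded sub-networks of different depths. A minimal sketch of that averaging step (the structure is assumed for illustration, not taken from the paper's code):

```python
def ensemble_depth_outputs(prob_maps):
    """Pixel-wise average of the probability maps produced by the
    embedded sub-networks of different depths.

    `prob_maps` is a list of equal-length flattened probability maps,
    one per depth; the result is the averaged map.
    """
    n = len(prob_maps)
    return [sum(vals) / n for vals in zip(*prob_maps)]
```

Averaging lets deeper outputs (better on large structures) and shallower outputs (better on small structures) compensate for each other before thresholding.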
Affiliation(s)
- Mehreen Mubashar
- Present Address: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Hazrat Ali
- Present Address: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Shoaib Azmat
- Present Address: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
222
Liao N, Dai J, Tang Y, Zhong Q, Mo S. iCVM: An Interpretable Deep Learning Model for CVM Assessment under Label Uncertainty. IEEE J Biomed Health Inform 2022; 26:4325-4334. [PMID: 35653451] [DOI: 10.1109/jbhi.2022.3179619]
Abstract
The Cervical Vertebral Maturation (CVM) method aims to determine the craniofacial skeletal maturational stage, which is crucial for orthodontic and orthopedic treatment. In this paper, we explore the potential of deep learning for automatic CVM assessment. In particular, we propose a convolutional neural network named iCVM. Based on the residual network, it is specialized for the challenges unique to the task of CVM assessment. 1) To combat overfitting due to the limited data size, multiple dropout layers are utilized. 2) To address the inevitable label ambiguity between adjacent maturational stages, we introduce the concept of label distribution learning into the loss function. Besides, we analyze the regions important for the model's predictions using the Grad-CAM technique. The learned strategy shows surprisingly high consistency with the clinical criteria. This indicates that the decisions made by our model are well interpretable, which is critical in the evaluation of growth and development in orthodontics. Moreover, to drive future research in the field, we release a new dataset named CVM-900 along with the paper. It contains the cervical part of 900 lateral cephalograms collected from orthodontic patients of different ages and genders. Experimental results show that the proposed approach achieves superior performance on CVM-900 in terms of various evaluation metrics.
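Label distribution learning, as named in point 2, replaces the one-hot stage label with a distribution that spreads probability mass onto adjacent maturational stages, so that mislabelling by one stage is penalized only mildly. The Gaussian weighting and the σ value below are illustrative assumptions; the abstract does not give the paper's exact distribution:

```python
import math

def stage_label_distribution(true_stage, n_stages=6, sigma=0.5):
    """Soft label over CVM stages: Gaussian weight by distance from
    the annotated stage, normalised to sum to 1."""
    weights = [math.exp(-((s - true_stage) ** 2) / (2 * sigma ** 2))
               for s in range(n_stages)]
    total = sum(weights)
    return [w / total for w in weights]

def ldl_cross_entropy(target_dist, predicted_dist, eps=1e-12):
    """Cross-entropy between the soft target distribution and the
    model's predicted distribution (eps guards against log(0))."""
    return -sum(t * math.log(p + eps)
                for t, p in zip(target_dist, predicted_dist))
```

Training against `ldl_cross_entropy` with such soft targets makes predictions one stage off much cheaper than predictions several stages off, which matches the clinical ambiguity between neighbouring stages.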
223
Zhou Q, Wang S, Zhang X, Zhang YD. WVALE: Weak variational autoencoder for localisation and enhancement of COVID-19 lung infections. Comput Methods Programs Biomed 2022; 221:106883. [PMID: 35597203] [PMCID: PMC9107178] [DOI: 10.1016/j.cmpb.2022.106883]
Abstract
BACKGROUND AND OBJECTIVE The COVID-19 pandemic is a major global health crisis of this century. The use of neural networks with CT imaging can potentially improve clinicians' efficiency in diagnosis. Previous studies in this field have primarily focused on classifying the disease on CT images, while few studies have targeted the localisation of disease regions. Developing neural networks to automate the latter task is impeded by the limited number of CT images with pixel-level annotations available to the research community. METHODS This paper proposes a weakly supervised framework named "Weak Variational Autoencoder for Localisation and Enhancement" (WVALE) to address this challenge for COVID-19 CT images. This framework includes two components: anomaly localisation with a novel WVAE model and enhancement of supervised segmentation models with WVALE. RESULTS The WVAE model has been shown to produce high-quality post-hoc attention maps with fine borders around infection regions, while weakly supervised segmentation shows results comparable to conventional supervised segmentation models. The WVALE framework can enhance the performance of a range of supervised segmentation models, including state-of-the-art models for the segmentation of COVID-19 lung infection. CONCLUSIONS Our study provides a proof of concept for weakly supervised segmentation and an alternative approach to alleviate the lack of annotation, while its independence from classification and segmentation frameworks makes it easily integrable with existing systems.
Affiliation(s)
- Qinghua Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Xin Zhang
- Department of Medical Imaging, The Fourth Peoples Hospital of Huaian, Huaian, Jiangsu Province 223002, China
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
224
Wang Y, Yang Q, Tian L, Zhou X, Rekik I, Huang H. HFCF-Net: A hybrid-feature cross fusion network for COVID-19 lesion segmentation from CT volumetric images. Med Phys 2022; 49:3797-3815. [PMID: 35301729] [PMCID: PMC9088496] [DOI: 10.1002/mp.15600]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) spreads rapidly across the globe, seriously threatening the health of people all over the world. To reduce the diagnostic pressure on front-line doctors, an accurate and automatic lesion segmentation method is highly desirable in clinical practice. PURPOSE Many proposed two-dimensional (2D) methods for slice-based lesion segmentation cannot take full advantage of the spatial information in the three-dimensional (3D) volume data, resulting in limited segmentation performance. Three-dimensional methods can utilize the spatial information but suffer from long training times and slow convergence speeds. To solve these problems, we propose an end-to-end hybrid-feature cross fusion network (HFCF-Net) to fuse the 2D and 3D features at three scales for the accurate segmentation of COVID-19 lesions. METHODS The proposed HFCF-Net incorporates 2D and 3D subnets to extract features within and between slices effectively. A cross fusion module is then designed to bridge the 2D and 3D decoders at the same scale to fuse both types of features. The module consists of three cross fusion blocks, each of which contains a prior fusion path and a context fusion path to jointly learn better lesion representations. The former explicitly provides the 3D subnet with lesion-related prior knowledge, and the latter utilizes the 3D context information as attention guidance for the 2D subnet, which promotes precise segmentation of the lesion regions. Furthermore, we explore an imbalance-robust adaptive learning loss function that includes an image-level loss and a pixel-level loss to tackle the problems caused by the apparent imbalance between the proportions of lesion and non-lesion voxels, providing a learning strategy that dynamically adjusts the learning focus between the 2D and 3D branches during training for effective supervision.
RESULTS Extensive experiments conducted on a publicly available dataset demonstrate that the proposed segmentation network significantly outperforms several state-of-the-art methods for COVID-19 lesion segmentation, yielding a Dice similarity coefficient of 74.85%. The visual comparison of segmentation performance also confirms the superiority of the proposed network in segmenting different-sized lesions. CONCLUSIONS In this paper, we propose a novel HFCF-Net for rapid and accurate COVID-19 lesion segmentation from chest computed tomography volume data. It innovatively fuses hybrid features in a cross manner for lesion segmentation, aiming to utilize the advantages of the 2D and 3D subnets to complement each other and enhance segmentation performance. Benefiting from the cross fusion mechanism, the proposed HFCF-Net can segment lesions more accurately with the knowledge acquired from both subnets.
Affiliation(s)
- Yanting Wang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Qingyu Yang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Lixia Tian
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Xuezhong Zhou
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Islem Rekik
- BASIRA Laboratory, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey
- School of Science and Engineering, Computing, University of Dundee, Dundee, UK
- Huifang Huang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
225
Zhao H, Fang Z, Ren J, MacLellan C, Xia Y, Li S, Sun M, Ren K. SC2Net: A Novel Segmentation-based Classification Network for Detection of COVID-19 in Chest X-ray Images. IEEE J Biomed Health Inform 2022; 26:4032-4043. [PMID: 35613061] [DOI: 10.1109/jbhi.2022.3177854]
Abstract
The COVID-19 pandemic has become a global public health crisis, which has led to a massive number of deaths and severe economic degradation. To suppress the spread of COVID-19, accurate diagnosis at an early stage is crucial. As the popularly used real-time reverse transcriptase polymerase chain reaction (RT-PCR) swab test can be lengthy and inaccurate, chest screening with radiography imaging is still preferred. However, due to limited image data and the difficulty of early-stage diagnosis, existing models suffer from ineffective feature extraction and poor network convergence and optimisation. To tackle these issues, a segmentation-based COVID-19 classification network, namely SC2Net, is proposed for effective detection of COVID-19 from chest X-ray (CXR) images. SC2Net consists of two subnets: a COVID-19 lung segmentation network (CLSeg) and a spatial attention network (SANet). To suppress interference from the background, CLSeg is first applied to segment the lung region from the CXR. The segmented lung region is then fed to SANet for classification and diagnosis of COVID-19. As a shallow yet effective classifier, SANet takes ResNet-18 as the feature extractor and enhances high-level features via the proposed spatial attention module. For performance evaluation, the COVIDGR 1.0 dataset is used, which is a high-quality dataset with various severity levels of COVID-19. Experimental results have shown that our SC2Net achieves an average accuracy of 84.23% and an average F1 score of 81.31% in detection of COVID-19, outperforming several state-of-the-art approaches.
|
226
|
Suri JS, Agarwal S, Chabert GL, Carriero A, Paschè A, Danna PSC, Saba L, Mehmedović A, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou AD, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Nagy F, Ruzsa Z, Fouda MM, Naidu S, Viskovic K, Kalra MK. COVLIAS 1.0 Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans. Diagnostics (Basel) 2022; 12:1283. [PMID: 35626438 PMCID: PMC9141749 DOI: 10.3390/diagnostics12051283] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 05/18/2022] [Accepted: 05/19/2022] [Indexed: 02/01/2023] Open
Abstract
Background: COVID-19 is a disease with multiple variants and is quickly spreading throughout the world. It is crucial to identify patients suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study combines solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion localization and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained on annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests—namely, the Mann-Whitney test, paired t-test, and Wilcoxon test—demonstrated its stability and reliability, with p < 0.0001. The online processing time for each slice was <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0-Lesion locator passed the intervariability test.
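For reference, the Dice and Jaccard scores used in the benchmarking above can be computed from binary masks as follows (a generic sketch, not the COVLIAS implementation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]])   # hypothetical prediction
truth = np.array([[1, 0, 0], [0, 1, 1]])  # hypothetical ground truth
```

The two metrics are monotonically related (J = D / (2 - D)), which is why papers often report both from the same overlap counts.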
Affiliation(s)
- Jasjit S. Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
| | - Sushant Agarwal
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
- Department of Computer Science Engineering, PSIT, Kanpur 209305, India
| | - Gian Luca Chabert
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Alessandro Carriero
- Department of Radiology, “Maggiore della Carità” Hospital, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy;
| | - Alessio Paschè
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Pietro S. C. Danna
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Armin Mehmedović
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
| | - Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy;
| | - Inder M. Singh
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
| | - Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany;
| | - Paramjit S. Chadha
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
| | - Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada;
| | - Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India;
| | - Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, 17674 Athens, Greece;
| | - John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA;
| | - Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
| | - Martin Miner
- Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA;
| | - David W. Sobel
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
| | - Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece;
| | - George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece;
| | - Athanasios D. Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece;
| | - Durga Prasanna Misra
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
| | - Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
| | - George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK;
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
| | - Jagjit S. Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA;
| | - Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada;
| | | | - Andrew Nicolaides
- Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus;
| | - Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22908, USA;
| | - Vijay Rathore
- AtheroPoint LLC, Roseville, CA 95661, USA; (S.K.D.); (V.R.)
| | - Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
| | - Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
| | | | - Ferenc Nagy
- Internal Medicine Department, University of Szeged, 6725 Szeged, Hungary;
| | - Zoltan Ruzsa
- Invasive Cardiology Division, University of Szeged, 6725 Szeged, Hungary;
| | - Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA;
| | - Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA;
| | - Klaudija Viskovic
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
| | - Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA;
| |
|
227
|
A Deep Learning-Based Diagnosis System for COVID-19 Detection and Pneumonia Screening Using CT Imaging. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12104825] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Background: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a global threat impacting the lives of millions of people worldwide. Automated detection of lung infections from computed tomography scans represents an excellent alternative; however, segmenting infected regions from CT slices encounters many challenges. Objective: To develop a diagnosis system based on deep learning techniques that detects and quantifies COVID-19 infection and screens for pneumonia using CT imaging. Method: The Contrast Limited Adaptive Histogram Equalization (CLAHE) pre-processing method was used to remove noise and intensity inhomogeneity. Black slices were also removed to crop only the region of interest containing the lungs. A U-Net architecture, based on CNN encoder and CNN decoder approaches, was then introduced for fast and precise image segmentation to obtain the lung and infection segmentation models. To better estimate performance on unseen data, fourfold cross-validation was used as a resampling procedure. A three-layered CNN architecture, with additional fully connected layers followed by a Softmax layer, was used for classification. Lung and infection volumes were reconstructed to allow volume-ratio computation and obtain the infection rate. Results: Starting with the 20 CT scan cases, the data were divided into 70% for the training dataset and 30% for the validation dataset. Experimental results demonstrated that the proposed system achieves a Dice score of 0.98 and 0.91 for the lung and infection segmentation tasks, respectively, and an accuracy of 0.98 for the classification task. Conclusions: The proposed workflow achieved good performance across the system's components while dealing with the reduced datasets used for training.
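The final volume-ratio step described above reduces to voxel counting once lung and infection masks are available; a minimal sketch with hypothetical masks and voxel spacing:

```python
import numpy as np

def infection_rate(lung_mask, infection_mask, spacing=(1.0, 1.0, 1.0)):
    """Infection rate = infected volume / lung volume.
    Masks are 3D boolean arrays; spacing is the voxel size (e.g. in mm)."""
    voxel_vol = float(np.prod(spacing))
    lung_vol = lung_mask.sum() * voxel_vol
    # count only infected voxels that lie inside the lung region
    infect_vol = np.logical_and(infection_mask, lung_mask).sum() * voxel_vol
    return infect_vol / lung_vol

# Hypothetical tiny volume: an 8-voxel lung cube, half of it infected.
lung = np.zeros((4, 4, 4), dtype=bool)
lung[1:3, 1:3, 1:3] = True
infection = np.zeros_like(lung)
infection[1, 1:3, 1:3] = True
```

With isotropic spacing the voxel volume cancels out, so the rate is just a ratio of voxel counts.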
|
228
|
Yang S, Wang G, Sun H, Luo X, Sun P, Li K, Wang Q, Zhang S. Learning COVID-19 Pneumonia Lesion Segmentation from Imperfect Annotations via Divergence-Aware Selective Training. IEEE J Biomed Health Inform 2022; 26:3673-3684. [PMID: 35522641 DOI: 10.1109/jbhi.2022.3172978] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The COVID-19 pandemic has spread across the world like no other crisis in recent history. Automatic segmentation of COVID-19 pneumonia lesions is critical for quantitative measurement in diagnosis and treatment management. For this task, deep learning is the state-of-the-art method, but it requires a large set of accurately annotated images for training, which is difficult to obtain due to limited access to experts and the time-consuming annotation process. To address this problem, we aim to train the segmentation network from imperfect annotations, where the training set consists of a small clean set of images accurately annotated by experts and a large noisy set of inaccurate annotations by non-experts. To prevent labels of different quality from corrupting the segmentation model, we propose a new approach to train segmentation networks under noisy labels. We introduce a dual-branch network to learn separately from the accurate and noisy annotations. To fully exploit the imperfect annotations while suppressing the noise, we design a Divergence-Aware Selective Training (DAST) strategy, in which a divergence-aware noisiness score distinguishes severely noisy annotations from slightly noisy ones. For severely noisy samples, we use an unsupervised regularization through dual-branch consistency between predictions from the two branches. We also refine slightly noisy samples and use them as supplementary data for the clean branch to avoid overfitting. Experimental results show that our method achieves higher performance than the standard training process for COVID-19 pneumonia lesion segmentation when learning from imperfect labels, and our framework significantly outperforms state-of-the-art noise-tolerant methods across various clean-label percentages.
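A noisiness score of the kind described above could, for instance, be the symmetric KL divergence between the two branches' per-pixel foreground probabilities; this particular formulation and the threshold-based routing are assumptions, not the paper's exact definition:

```python
import numpy as np

def noisiness_score(p, q, eps=1e-8):
    """Mean symmetric KL divergence between the per-pixel foreground
    probabilities of two branches; higher means more disagreement,
    i.e. the annotation is more likely severely noisy."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    return float(np.mean(kl_pq + kl_qp))

def route(p, q, tau):
    """Severe noise -> consistency regularization only;
    slight noise -> refine and reuse as supplementary clean data."""
    return "severe" if noisiness_score(p, q) > tau else "slight"

agree = np.full((4, 4), 0.9)     # branches agree on foreground
disagree = np.full((4, 4), 0.1)  # second branch disagrees strongly
```

The threshold `tau` would in practice be tuned on the clean validation set.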
|
229
|
Li CF, Xu YD, Ding XH, Zhao JJ, Du RQ, Wu LZ, Sun WP. MultiR-Net: A Novel Joint Learning Network for COVID-19 segmentation and classification. Comput Biol Med 2022; 144:105340. [PMID: 35305504 PMCID: PMC8912982 DOI: 10.1016/j.compbiomed.2022.105340] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 02/18/2022] [Accepted: 02/20/2022] [Indexed: 12/16/2022]
Abstract
The outbreak of COVID-19 has caused a severe shortage of healthcare resources. Ground-glass opacity (GGO) and consolidation on chest CT scans have been an essential basis for imaging diagnosis since 2020. The similarity of imaging features between COVID-19 and other pneumonias makes it challenging to distinguish between them and affects radiologists' diagnoses. Recent deep learning work on COVID-19 has mainly been divided into disease classification and lesion segmentation, yet little work has focused on the feature correlation between the two tasks. To address these issues, in this study we propose MultiR-Net, a 3D deep learning model for combined COVID-19 classification and lesion segmentation, to achieve real-time and interpretable COVID-19 chest CT diagnosis. Precisely, the proposed network consists of two subnets: a multi-scale feature-fusion UNet-like subnet for lesion segmentation and a classification subnet for disease diagnosis. Features between the two subnets are fused by a reverse attention mechanism and an iterable training strategy. Meanwhile, we propose a loss function to enhance the interaction between the two subnets. Because individual metrics cannot wholly reflect network effectiveness, we quantify the segmentation results with various evaluation metrics, such as average surface distance and volume Dice, and test on our dataset. We employ a dataset containing 275 3D CT scans for classifying COVID-19, community-acquired pneumonia (CAP), and healthy people, and segment lesions in pneumonia patients. We split the dataset into 70% for training and 30% for testing. Extensive experiments showed that our multi-task framework obtained an average recall of 93.323% and an average precision of 94.005% on the classification test set, and a 69.95% volume Dice score on the segmentation test set of our dataset.
Affiliation(s)
- Cheng-Fan Li
- School of Computer Engineering and Science, Shanghai University, Shangda Rd, Shanghai, 200444, China
| | - Yi-Duo Xu
- School of Computer Engineering and Science, Shanghai University, Shangda Rd, Shanghai, 200444, China
| | - Xue-Hai Ding
- School of Computer Engineering and Science, Shanghai University, Shangda Rd, Shanghai, 200444, China.
| | - Jun-Juan Zhao
- School of Computer Engineering and Science, Shanghai University, Shangda Rd, Shanghai, 200444, China
| | - Rui-Qi Du
- School of Computer Engineering and Science, Shanghai University, Shangda Rd, Shanghai, 200444, China
| | - Li-Zhong Wu
- Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Mohe Rd, Shanghai, 200111, China
| | - Wen-Ping Sun
- Institute of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Yishan Rd, Shanghai, 200233, China.
| |
|
230
|
Karthik R, Menaka R, M H, Won D. Contour-enhanced attention CNN for CT-based COVID-19 segmentation. PATTERN RECOGNITION 2022; 125:108538. [PMID: 35068591 PMCID: PMC8767763 DOI: 10.1016/j.patcog.2022.108538] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 09/14/2021] [Accepted: 01/14/2022] [Indexed: 05/14/2023]
Abstract
Accurate detection of COVID-19 is one of the challenging research topics in today's healthcare sector for controlling the coronavirus pandemic. Automatic, data-powered insights for COVID-19 localization from a medical imaging modality like chest CT tremendously augment clinical care. In this research, a Contour-aware Attention Decoder CNN is proposed to segment COVID-19-infected tissues precisely and effectively. It introduces a novel attention scheme to extract boundary and shape cues from CT contours and leverages these features in refining the infected areas. For every decoded pixel, the attention module harvests contextual information in its spatial neighborhood from the contour feature maps. By incorporating such rich structural details into decoding via dense attention, the CNN is able to capture even intricate morphological details. The decoder is also augmented with a Cross Context Attention Fusion Upsampling module to robustly reconstruct deep semantic features back to a high-resolution segmentation map. It employs a novel pixel-precise attention model that draws on relevant encoder features to aid effective upsampling. The proposed CNN was evaluated on 3D scans from the MosMedData and Jun Ma benchmark datasets. It achieved state-of-the-art performance with a high Dice similarity coefficient of 85.43% and a recall of 88.10%.
Affiliation(s)
- R Karthik
- Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
| | - R Menaka
- Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
| | - Hariharan M
- School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
| | - Daehan Won
- System Sciences and Industrial Engineering, Binghamton University, United States
| |
|
231
|
CGRNet: Contour-guided graph reasoning network for ambiguous biomedical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
232
|
Deep features to detect pulmonary abnormalities in chest X-rays due to infectious diseaseX: Covid-19, pneumonia, and tuberculosis. Inf Sci (N Y) 2022; 592:389-401. [DOI: 10.1016/j.ins.2022.01.062] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 01/28/2022] [Accepted: 01/30/2022] [Indexed: 12/12/2022]
|
233
|
Aggarwal P, Mishra NK, Fatimah B, Singh P, Gupta A, Joshi SD. COVID-19 image classification using deep learning: Advances, challenges and opportunities. Comput Biol Med 2022; 144:105350. [PMID: 35305501 PMCID: PMC8890789 DOI: 10.1016/j.compbiomed.2022.105350] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 02/10/2022] [Accepted: 02/22/2022] [Indexed: 12/16/2022]
Abstract
Corona Virus Disease-2019 (COVID-19), caused by Severe Acute Respiratory Syndrome-Corona Virus-2 (SARS-CoV-2), is a highly contagious disease that has affected the lives of millions around the world. Chest X-Ray (CXR) and Computed Tomography (CT) imaging modalities are widely used to obtain a fast and accurate diagnosis of COVID-19. However, manual identification of the infection from radiographic images is extremely challenging because it is time-consuming and highly prone to human error. Artificial Intelligence (AI) techniques have shown potential and are being exploited further in the development of automated and accurate solutions for COVID-19 detection. Among AI methodologies, Deep Learning (DL) algorithms, particularly Convolutional Neural Networks (CNN), have gained significant popularity for the classification of COVID-19. This paper summarizes and reviews a number of significant research publications on the DL-based classification of COVID-19 through CXR and CT images. We also present an outline of the current state-of-the-art advances and a critical discussion of open challenges. We conclude our study by enumerating some future directions of research in COVID-19 imaging classification.
Affiliation(s)
| | | | - Binish Fatimah
- The Department of ECE, CMR Institute of Technology, Bengaluru, India.
| | - Pushpendra Singh
- The Department of ECE, National Institute of Technology Hamirpur, HP, India.
| | - Anubha Gupta
- The Department of ECE, IIIT-Delhi, Delhi, 110020, India.
| | - Shiv Dutt Joshi
- The Department of EE, Indian Institute of Technology Delhi, Delhi 110016, India.
| |
|
234
|
Xu W, Chen B, Shi H, Tian H, Xu X. Real-time COVID-19 detection over chest x-ray images in edge computing. Comput Intell 2022; 39:COIN12528. [PMID: 35941908 PMCID: PMC9348433 DOI: 10.1111/coin.12528] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 02/28/2022] [Accepted: 04/17/2022] [Indexed: 11/29/2022]
Abstract
Severe Coronavirus Disease 2019 (COVID-19) has been a global pandemic that has caused massive devastation to society, the economy, and culture since January 2020. The pandemic demonstrates the inefficiency of outdated manual detection approaches and inspires novel approaches that detect COVID-19 by classifying chest x-ray (CXR) images with deep learning technology. Although a wide range of research on brand-new COVID-19 detection methods that classify CXR images with centralized convolutional neural network (CNN) models has been proposed, the latency, privacy concerns, and cost of information transmission between the data sources and a centralized data center make such detection inefficient. Hence, in this article, a COVID-19 detection scheme via CXR image classification with a lightweight CNN model called MobileNet in edge computing is proposed to alleviate the computing pressure on the centralized data center and improve detection efficiency. Specifically, the general framework is introduced first to lay out the overall arrangement of the computing and information services ecosystem. Then, an unsupervised model, DCGAN, is employed to make up for the small scale of the dataset. Moreover, the implementation of MobileNet for CXR image classification is presented at length, followed by the specific distribution strategy for the MobileNet models. Extensive evaluations demonstrate the efficiency and accuracy of the proposed scheme for detecting COVID-19 from CXR images in edge computing.
Affiliation(s)
- Weijie Xu
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
| | - Beijing Chen
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, China
| | - Haoyang Shi
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
| | - Hao Tian
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
| | - Xiaolong Xu
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| |
|
235
|
Wang Y, Yan WQ. Colorizing Grayscale CT images of human lungs using deep learning methods. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:37805-37819. [PMID: 35475169 PMCID: PMC9027015 DOI: 10.1007/s11042-022-13062-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 07/20/2021] [Accepted: 04/04/2022] [Indexed: 06/14/2023]
Abstract
Image colorization refers to computer-aided rendering technology that transfers colors from a reference color image to grayscale images or video frames. Deep learning has advanced notably in the field of image colorization in recent years. In this paper, we formulate image colorization methods relying on exemplar colorization and automatic colorization, respectively. For hybrid colorization, we select appropriate reference images to colorize the grayscale CT images. The colours of meat resemble those of human lungs, so images of fresh pork, lamb, beef, and even rotten meat were collected as our dataset for model training. Three sets of training data consisting of meat images are analysed to extract pixel-level features for colorizing lung CT images using an automatic approach. Regarding the results, we consider several methods (i.e., loss functions, visual analysis, PSNR, and SSIM) to evaluate the proposed deep learning models. Moreover, compared with other methods of colorizing lung CT images, the results of rendering the images using deep learning methods are notably realistic and promising. The metrics for measuring image similarity, SSIM and PSNR, reach satisfactory values of up to 0.55 and 28.0, respectively. Additionally, the methods may provide novel ideas for rendering grayscale X-ray images in airports, ferries, and railway stations.
Affiliation(s)
- Yuewei Wang
- Auckland University of Technology, Auckland 1010, New Zealand
| | - Wei Qi Yan
- Auckland University of Technology, Auckland 1010, New Zealand
| |
|
236
|
Santosh KC, Ghosh S, GhoshRoy D. Deep Learning for Covid-19 Screening Using Chest X-Rays in 2020: A Systematic Review. INT J PATTERN RECOGN 2022. [DOI: 10.1142/s0218001422520103] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Artificial Intelligence (AI) has driven countless contributions in the field of healthcare and medical imaging. In this paper, we thoroughly analyze peer-reviewed research findings/articles on AI-guided tools for Covid-19 analysis/screening using chest X-ray images published in the year 2020. We discuss how far deep learning algorithms help in decision-making. We identify/address data collections, methodical contributions, promising methods, and challenges. However, a fair comparison is not trivial, as dataset sizes varied over the course of 2020. Even though the unprecedented efforts in building AI-guided tools to detect, localize, and segment Covid-19 cases are limited to education and training, we elaborate on their strengths and possible weaknesses when considering the need for cross-population train/test models. In total, with the search keywords (Covid-19 OR Coronavirus) AND chest x-ray AND deep learning AND artificial intelligence AND medical imaging in both the PubMed Central repository and Web of Science, we systematically reviewed 58 research articles and performed a meta-analysis.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab – Computer Science, University of South Dakota, Vermillion, SD 57069, USA
| | - Supriti Ghosh
- 2AI: Applied Artificial Intelligence Research Lab – Computer Science, University of South Dakota, Vermillion, SD 57069, USA
| | | |
|
237
|
Scarpiniti M, Sarv Ahrabi S, Baccarelli E, Piazzo L, Momenzadeh A. A novel unsupervised approach based on the hidden features of Deep Denoising Autoencoders for COVID-19 disease detection. EXPERT SYSTEMS WITH APPLICATIONS 2022; 192:116366. [PMID: 34937995 PMCID: PMC8675154 DOI: 10.1016/j.eswa.2021.116366] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 10/15/2021] [Accepted: 11/30/2021] [Indexed: 05/02/2023]
Abstract
Chest imaging can represent a powerful tool for detecting Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult to rapidly identify anomalous areas in CT images belonging to COVID-19 by human inspection. Hence, suitable automatic algorithms that can quickly and precisely identify the disease become necessary, ideally using few labeled input data, because large amounts of CT scans are not usually available for COVID-19. The method proposed in this paper is based on exploiting the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on some target CT scans in an unsupervised way, is used to build a robust statistical representation in the form of a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, reaching a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
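The thresholded histogram-distance test can be sketched generically; here the histograms are built from arbitrary activation values and compared with an L1 distance, whereas the paper derives them from DDCAE hidden features with its own distance measure and threshold:

```python
import numpy as np

def feature_histogram(features, bins, value_range=(0.0, 1.0)):
    """Normalized histogram of (hidden-)feature activations."""
    hist, _ = np.histogram(features, bins=bins, range=value_range)
    return hist / hist.sum()

def is_anomaly(target_hist, test_hist, threshold):
    """Label the test scan as an anomaly (COVID-19) if its histogram is
    farther than `threshold` from the target histogram (L1 distance)."""
    dist = np.abs(target_hist - test_hist).sum()
    return dist > threshold

# Hypothetical activations: "normal" scans concentrate in [0, 0.5),
# a shifted distribution stands in for an anomalous scan.
rng = np.random.default_rng(1)
normal = feature_histogram(rng.uniform(0.0, 0.5, 1000), bins=10)
shifted = feature_histogram(rng.uniform(0.5, 1.0, 1000), bins=10)
```

Any histogram distance (chi-squared, Wasserstein, etc.) slots into the same decision rule.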
Affiliation(s)
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Sima Sarv Ahrabi
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| |
|
238
|
Han K, Liu L, Song Y, Liu Y, Qiu C, Tang Y, Teng Q, Liu Z. An Effective Semi-supervised Approach for Liver CT Image Segmentation. IEEE J Biomed Health Inform 2022; 26:3999-4007. [PMID: 35420991 DOI: 10.1109/jbhi.2022.3167384] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite the substantial progress made by deep networks in the field of medical image segmentation, they generally require sufficient pixel-level annotated data for training. The scale of training data remains the main bottleneck in obtaining a better deep segmentation model. Semi-supervised learning is an effective approach that alleviates the dependence on labeled data. However, most existing semi-supervised image segmentation methods do not generate high-quality pseudo labels to expand the training dataset. In this paper, we propose a deep semi-supervised approach for liver CT image segmentation by extending the pseudo-labeling algorithm under a very low annotated-data paradigm. Specifically, the output features of labeled images from the pretrained network are combined with the corresponding pixel-level annotations to produce class representations via a mean operation. Pseudo labels for unlabeled images are then generated by calculating the distances between unlabeled feature vectors and each class representation. To further improve their quality, we adopt a series of operations to optimize the pseudo labels. A more accurate segmentation network is obtained by expanding the training dataset and adjusting the contributions between the supervised and unsupervised losses. In addition, a novel random patch strategy based on prior locations is introduced for unlabeled images in the training procedure. Extensive experiments show our method achieves more competitive results than other semi-supervised methods when fewer labeled slices of the LiTS dataset are available.
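The class-representation and pseudo-labeling steps described above can be sketched as follows (the Euclidean distance and the toy features are assumptions; the paper additionally refines the resulting labels):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class from labeled pixels.
    features: (N, D) pixel features; labels: (N,) class ids."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_label(features, prototypes):
    """Assign each unlabeled feature to its nearest class prototype."""
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical 2-D features for two classes (e.g. background / liver).
feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels, 2)
unlabeled = np.array([[0.05, 0.1], [1.2, 0.9]])
```

The pseudo labels produced this way then join the clean annotations, with the supervised and unsupervised loss terms weighted against each other during training.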
|
239
|
Yan J, Wang X, Cai J, Qin Q, Yang H, Wang Q, Cheng Y, Gan T, Jiang H, Deng J, Chen B. Medical image segmentation model based on triple gate MultiLayer perceptron. Sci Rep 2022; 12:6103. [PMID: 35413958 PMCID: PMC9002230 DOI: 10.1038/s41598-022-09452-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2021] [Accepted: 03/21/2022] [Indexed: 12/26/2022] Open
Abstract
To alleviate the social contradiction between limited medical resources and increasing medical needs, deep learning-based medical image-assisted diagnosis has become a research focus in Wise Information Technology of Medicine. Most existing medical segmentation models based on convolution or Transformers have achieved relatively good results. However, convolution-based models with a limited receptive field cannot establish long-distance dependencies between features as the network deepens. Transformer-based models incur large computational overhead and cannot generalize the bias of local features or perceive the positional features of medical images, which are essential in medical image segmentation. To address these issues, we present Triple Gate MultiLayer Perceptron U-Net (TGMLP U-Net), a medical image segmentation model based on MLPs, in which we design the Triple Gate MultiLayer Perceptron (TGMLP), composed of three parts. First, to encode the position information of features, we propose a Triple MLP module based on the multilayer perceptron. It uses linear projection to encode features along the height, width, and channel dimensions, enabling the model to capture long-distance dependencies along the spatial dimension and precise position information in three dimensions with less computational overhead. Second, we design the Local Priors and Global Perceptron module. The Global Perceptron divides the feature map into different partitions and conducts correlation modelling for each partition to establish global dependencies between partitions. The Local Priors module uses multi-scale convolution, with its strong local feature extraction ability, to further explore the relationships of contextual feature information within the structure.
Finally, we propose a gate-controlled mechanism to effectively solve the problem that the dependence of position embeddings between patches and within patches cannot be well learned, owing to the relatively small number of samples in medical image segmentation datasets. Experimental results indicate that the proposed model outperforms other state-of-the-art models on most evaluation indicators, demonstrating its excellent performance in segmenting medical images.
Affiliation(s)
- Jingke Yan
- Guilin University of Electronic Technology, School of Marine Engineering, Beihai, 536000, China
- Xin Wang
- Guilin University of Electronic Technology, School of Marine Engineering, Beihai, 536000, China
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, 610000, China
- Guilin University of Electronic Technology, School of Computer Science and Information Security, Guilin, 541004, China
- Jingye Cai
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, 610000, China
- Qin Qin
- Guilin University of Electronic Technology, School of Marine Engineering, Beihai, 536000, China
- Hao Yang
- China Academy of Engineering Physics, Institute of Applied Electronics, Mianyang, 621900, China
- Qin Wang
- Basic Teaching Department, Guilin University of Electronic Technology, Beihai, 536000, China
- Yao Cheng
- Southwest Jiaotong University, State Key Laboratory of Traction Power, Chengdu, 610000, China
- Tian Gan
- Guilin University of Electronic Technology, School of Computer Science and Information Security, Guilin, 541004, China
- Hua Jiang
- Guilin University of Electronic Technology, School of Computer Science and Information Security, Guilin, 541004, China
- Jianhua Deng
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, 610000, China
- Bingxu Chen
- Guilin University of Electronic Technology, School of Marine Engineering, Beihai, 536000, China
|
240
|
Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B. Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6566982. [PMID: 35422980 PMCID: PMC9002904 DOI: 10.1155/2022/6566982] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 02/23/2022] [Accepted: 02/28/2022] [Indexed: 11/23/2022]
Abstract
The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide had tested positive for COVID-19, and 5.48 million people had died from it, as of 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This work proposes a segmentation approach to identify ground glass opacity (GGO), the region of interest, in CT images of patients with coronavirus, using a modified structure of the Unet model to classify the region of interest at the pixel level. The difficulty is that GGO often appears indistinguishable from healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights is used in the contracting and expanding Unet paths, and an improved convolutional module is added to establish the connection between the encoder and decoder pipelines. This gives the model a strong capacity to segment GGO in COVID-19 cases; the proposed model is referred to as "convUnet." Experiments were performed on the Medseg1 dataset, where the added set of weights at each layer of the model and the modified connecting module in Unet led to an improvement in overall segmentation results. The quantitative results for accuracy, recall, precision, Dice coefficient, F1-score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, better than those obtained using Unet and other state-of-the-art models. This segmentation approach therefore proved more accurate, fast, and reliable in helping doctors diagnose COVID-19 quickly and efficiently.
Affiliation(s)
- Sania Shamim
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
- Mazhar Javed Awan
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
- Azlan Mohd Zain
- School of Computing, UTM Big Data Centre, Universiti Teknologi Malaysia, Skudai 81310, Johor, Malaysia
- Usman Naseem
- School of Computer Science, The University of Sydney, Sydney, Australia
- Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, 11, Ramadi 31001, Iraq
|
241
|
Fang C, Liu Y, Liu Y, Liu M, Qiu X, Li Y, Wen J, Yang Y. Label-Free Covid-19 lesion segmentation based on synthetic healthy lung image subtraction. Med Phys 2022; 49:4632-4641. [PMID: 35397134 PMCID: PMC9088629 DOI: 10.1002/mp.15661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 01/06/2022] [Accepted: 03/07/2022] [Indexed: 11/08/2022] Open
Abstract
PURPOSE COVID-19 has become a global pandemic and still poses a severe health risk to the public. Accurate and efficient segmentation of pneumonia lesions in CT scans is vital for treatment decision-making. We propose a novel unsupervised approach using a cycle-consistent generative adversarial network (cycle-GAN) that automates and accelerates the process of lesion delineation. METHOD The workflow includes lung volume segmentation, healthy lung image synthesis, infected and healthy image subtraction, and binary lesion mask creation. The lung volume is first delineated using a pre-trained U-net and serves as the input for the following network. A cycle-GAN is developed to generate synthetic healthy lung CT images from infected lung images. The pneumonia lesions are then extracted by subtracting the synthetic healthy lung CT images from the infected lung CT images. A median filter and K-means clustering are then applied to contour the lesions. The automatic segmentation approach was validated on three different datasets. RESULTS The average Dice coefficient reached 0.666±0.178 on the three datasets. In particular, the Dice coefficient reached 0.748±0.121 and 0.730±0.095 on the two public datasets Coronacases and Radiopedia, respectively. Meanwhile, the average precision and sensitivity for lesion segmentation on the three datasets were 0.679±0.244 and 0.756±0.162. The performance is comparable to existing supervised segmentation networks and outperforms unsupervised ones. CONCLUSION The proposed label-free segmentation method achieved high accuracy and efficiency in automatic COVID-19 lesion delineation. The segmentation result can serve as a baseline for further manual modification and as a quality assurance tool for lesion diagnosis. Furthermore, due to its unsupervised nature, the result is not influenced by physicians' experience, which otherwise is crucial for supervised methods.
Affiliation(s)
- Chengyijue Fang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Yingao Liu
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Ying Liu
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Mengqiu Liu
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Xiaohui Qiu
- Department of Radiology, Bozhou People's Hospital, Bozhou, Anhui, 236800, China
- Yang Li
- Department of Radiology, the First Affiliated Hospital of Bengbu Medical College, Bengbu, Anhui, 233000, China
- Jie Wen
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Department of Radiation Oncology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001, China
|
242
|
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:cancers14071840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early will reduce their progression and can save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) to lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects of the field of AI. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and simpler-to-use models. In this review, we aim to briefly discuss the current applications of AI in lung segmentation and pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Correspondence:
|
243
|
Singh A, Kaur A, Dhillon A, Ahuja S, Vohra H. Software system to predict the infection in COVID-19 patients using deep learning and web of things. SOFTWARE: PRACTICE & EXPERIENCE 2022; 52:868-886. [PMID: 34538962 PMCID: PMC8441673 DOI: 10.1002/spe.3011] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 05/06/2021] [Accepted: 05/27/2021] [Indexed: 05/09/2023]
Abstract
Since the end of 2019, a new coronavirus 2019 (COVID-19) disease has been detected and has quickly spread through many countries across the world; computed tomography (CT) images have been used as an important substitute for the time-consuming reverse transcriptase polymerase chain reaction (RT-PCR) test. Medical imaging such as computed tomography offers great potential given growing skepticism toward the sensitivity of RT-PCR as a screening tool. For this purpose, automated image segmentation is highly desired as a clinical decision aid and for disease monitoring. However, publicly accessible COVID-19 image data are limited, leading to overfitting of conventional approaches. To address this issue, the present paper focuses on data augmentation techniques to create synthetic data. Further, a framework has been proposed using the Web of Things (WoT) and a traditional U-Net with EfficientNet B0 to segment the COVID Radiopedia and Medseg datasets automatically. The framework achieves an F-score of 0.96, the best among state-of-the-art methods. The performance of the proposed framework, also computed using sensitivity, specificity, and Dice coefficient, reaches 84.5%, 93.9%, and 65.0%, respectively. Finally, the proposed work is validated using three quality-of-service (QoS) parameters, namely server latency, response time, and network latency, which improve the performance by 8%, 7%, and 10%, respectively.
Affiliation(s)
- Ashima Singh
- CSED, Thapar Institute of Engineering and Technology, Patiala, India
- Amrita Kaur
- CSED, Thapar Institute of Engineering and Technology, Patiala, India
- Sahil Ahuja
- CSED, Thapar Institute of Engineering and Technology, Patiala, India
- Harpreet Vohra
- ECED, Thapar Institute of Engineering and Technology, Patiala, India
|
244
|
Hu H, Shen L, Guan Q, Li X, Zhou Q, Ruan S. Deep co-supervision and attention fusion strategy for automatic COVID-19 lung infection segmentation on CT images. PATTERN RECOGNITION 2022; 124:108452. [PMID: 34848897 PMCID: PMC8612757 DOI: 10.1016/j.patcog.2021.108452] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2021] [Revised: 09/20/2021] [Accepted: 11/22/2021] [Indexed: 05/13/2023]
Abstract
Due to the irregular shapes, various sizes, and indistinguishable boundaries between normal and infected tissues, it is still a challenging task to accurately segment the infected lesions of COVID-19 on CT images. In this paper, a novel segmentation scheme is proposed for COVID-19 infections that enhances supervised information and fuses multi-scale feature maps of different levels, based on the encoder-decoder architecture. To this end, a deep collaborative supervision (co-supervision) scheme is proposed to guide the network in learning edge and semantic features. More specifically, an Edge Supervised Module (ESM) is first designed to highlight low-level boundary features by incorporating edge-supervised information into the initial stage of down-sampling. Meanwhile, an Auxiliary Semantic Supervised Module (ASSM) is proposed to strengthen high-level semantic information by integrating mask-supervised information into the later stage. An Attention Fusion Module (AFM) is then developed to fuse multi-scale feature maps of different levels, using an attention mechanism to reduce the semantic gaps between high-level and low-level feature maps. Finally, the effectiveness of the proposed scheme is demonstrated on four COVID-19 CT datasets. The results show that all three proposed modules are promising. Relative to the baseline (ResUnet), using ESM, ASSM, or AFM alone increases the Dice metric by 1.12%, 1.95%, and 1.63%, respectively, on our dataset, while integrating all three together raises it by 3.97%. Compared with existing approaches on various datasets, the proposed method obtains better segmentation performance on the main metrics and achieves the best generalization and comprehensive performance.
Affiliation(s)
- Haigen Hu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Leizhao Shen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Qiu Guan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Xiaoxin Li
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Qianwei Zhou
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Su Ruan
- University of Rouen Normandy, LITIS EA 4108, Rouen 76183, France
|
245
|
Bose S, Sur Chowdhury R, Das R, Maulik U. Dense Dilated Deep Multiscale Supervised U-Network for biomedical image segmentation. Comput Biol Med 2022; 143:105274. [PMID: 35123135 DOI: 10.1016/j.compbiomed.2022.105274] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 01/26/2022] [Accepted: 01/26/2022] [Indexed: 12/24/2022]
Abstract
Biomedical image segmentation is essential for computerized medical image analysis. Deep learning algorithms allow us to design state-of-the-art models for solving segmentation problems. The U-Net and its variants have provided positive results across various datasets. However, existing networks have the same receptive field at each level, and the models are supervised only at the shallow level. Considering these two ideas, we propose D3MSU-Net, in which the field of view at each level varies with the depth of the resolution layer and the model is supervised at each resolution level. We evaluated our network on eight benchmark datasets: Electron Microscopy, Lung Segmentation, Montgomery Chest X-ray, Covid-Radiopaedia, Wound, Medetec, Brain MRI, and a Covid-19 lung CT dataset. Additionally, we report results for various ablations. The experimental results show the superiority of the proposed network. The proposed D3MSU-Net and ablation models are available at www.github.com/shirshabose/D3MSUNET.
Affiliation(s)
- Shirsha Bose
- Department of Electronics and Telecommunication Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India.
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India
|
246
|
Bao G, Chen H, Liu T, Gong G, Yin Y, Wang L, Wang X. COVID-MTL: Multitask learning with Shift3D and random-weighted loss for COVID-19 diagnosis and severity assessment. PATTERN RECOGNITION 2022; 124:108499. [PMID: 34924632 PMCID: PMC8666107 DOI: 10.1016/j.patcog.2021.108499] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 11/11/2021] [Accepted: 12/10/2021] [Indexed: 05/07/2023]
Abstract
There is an urgent need for automated methods to assist accurate and effective assessment of COVID-19. Radiology and nucleic acid test (NAT) are complementary COVID-19 diagnosis methods. In this paper, we present an end-to-end multitask learning (MTL) framework (COVID-MTL) that is capable of automated and simultaneous detection (against both radiology and NAT) and severity assessment of COVID-19. COVID-MTL learns different COVID-19 tasks in parallel through our novel random-weighted loss function, which assigns learning weights under Dirichlet distribution to prevent task dominance; our new 3D real-time augmentation algorithm (Shift3D) introduces space variances for 3D CNN components by shifting low-level feature representations of volumetric inputs in three dimensions; thereby, the MTL framework is able to accelerate convergence and improve joint learning performance compared to single-task models. By only using chest CT scans, COVID-MTL was trained on 930 CT scans and tested on separate 399 cases. COVID-MTL achieved AUCs of 0.939 and 0.846, and accuracies of 90.23% and 79.20% for detection of COVID-19 against radiology and NAT, respectively, which outperformed the state-of-the-art models. Meanwhile, COVID-MTL yielded AUC of 0.800 ± 0.020 and 0.813 ± 0.021 (with transfer learning) for classifying control/suspected, mild/regular, and severe/critically-ill cases. To decipher the recognition mechanism, we also identified high-throughput lung features that were significantly related (P < 0.001) to the positivity and severity of COVID-19.
Affiliation(s)
- Guoqing Bao
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
- Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Tongliang Liu
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
- Guanzhong Gong
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
|
247
|
Wang X, Yuan Y, Guo D, Huang X, Cui Y, Xia M, Wang Z, Bai C, Chen S. SSA-Net: Spatial Self-Attention Network for COVID-19 Pneumonia Infection Segmentation with Semi-supervised Few-shot Learning. Med Image Anal 2022; 79:102459. [PMID: 35544999 PMCID: PMC9027296 DOI: 10.1016/j.media.2022.102459] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 03/31/2022] [Accepted: 04/11/2022] [Indexed: 12/21/2022]
Abstract
Coronavirus disease (COVID-19) broke out at the end of 2019, and has resulted in an ongoing global pandemic. Segmentation of pneumonia infections from chest computed tomography (CT) scans of COVID-19 patients is significant for accurate diagnosis and quantitative analysis. Deep learning-based methods can be developed for automatic segmentation and offer a great potential to strengthen timely quarantine and medical treatment. Unfortunately, due to the urgent nature of the COVID-19 pandemic, a systematic collection of CT data sets for deep neural network training is quite difficult, especially high-quality annotations of multi-category infections are limited. In addition, it is still a challenge to segment the infected areas from CT slices because of the irregular shapes and fuzzy boundaries. To solve these issues, we propose a novel COVID-19 pneumonia lesion segmentation network, called Spatial Self-Attention network (SSA-Net), to identify infected regions from chest CT images automatically. In our SSA-Net, a self-attention mechanism is utilized to expand the receptive field and enhance the representation learning by distilling useful contextual information from deeper layers without extra training time, and spatial convolution is introduced to strengthen the network and accelerate the training convergence. Furthermore, to alleviate the insufficiency of labeled multi-class data and the long-tailed distribution of training data, we present a semi-supervised few-shot iterative segmentation framework based on re-weighting the loss and selecting prediction values with high confidence, which can accurately classify different kinds of infections with a small number of labeled image data. Experimental results show that SSA-Net outperforms state-of-the-art medical image segmentation networks and provides clinically interpretable saliency maps, which are useful for COVID-19 diagnosis and patient triage. 
Meanwhile, our semi-supervised iterative segmentation model improves learning on small and unbalanced training sets and achieves higher performance.
Affiliation(s)
- Xiaoyan Wang
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Yiwen Yuan
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China
- Dongyan Guo
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Xiaojie Huang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310009, China
- Ying Cui
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Ming Xia
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Zhenhua Wang
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Cong Bai
- School of Computer Science and Technology, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, China
- Shengyong Chen
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
|
248
|
Wang X, Wang L, Sheng Y, Zhu C, Jiang N, Bai C, Xia M, Shao Z, Gu Z, Huang X, Zhao R, Liu Z. Automatic and accurate segmentation of peripherally inserted central catheter (PICC) from chest X-rays using multi-stage attention-guided learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
249
|
AI-driven quantification of ground glass opacities in lungs of COVID-19 patients using 3D computed tomography imaging. PLoS One 2022; 17:e0263916. [PMID: 35286309 PMCID: PMC8920286 DOI: 10.1371/journal.pone.0263916] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 01/29/2022] [Indexed: 01/19/2023] Open
Abstract
Objectives Ground-glass opacity (GGO)—a hazy, gray appearing density on computed tomography (CT) of lungs—is one of the hallmark features of SARS-CoV-2 in COVID-19 patients. This AI-driven study is focused on segmentation, morphology, and distribution patterns of GGOs. Method We use an AI-driven unsupervised machine learning approach called PointNet++ to detect and quantify GGOs in CT scans of COVID-19 patients and to assess the severity of the disease. We have conducted our study on the “MosMedData”, which contains CT lung scans of 1110 patients with or without COVID-19 infections. We quantify the morphologies of GGOs using Minkowski tensors and compute the abnormality score of individual regions of segmented lung and GGOs. Results PointNet++ detects GGOs with the highest evaluation accuracy (98%), average class accuracy (95%), and intersection over union (92%) using only a fraction of 3D data. On average, the shapes of GGOs in the COVID-19 datasets deviate from sphericity by 15% and anisotropies in GGOs are dominated by dipole and hexapole components. These anisotropies may help to quantitatively delineate GGOs of COVID-19 from other lung diseases. Conclusion The PointNet++ and the Minkowski tensor based morphological approach together with abnormality analysis will provide radiologists and clinicians with a valuable set of tools when interpreting CT lung scans of COVID-19 patients. Implementation would be particularly useful in countries severely devastated by COVID-19 such as India, where the number of cases has outstripped available resources creating delays or even breakdowns in patient care. This AI-driven approach synthesizes both the unique GGO distribution pattern and severity of the disease to allow for more efficient diagnosis, triaging and conservation of limited resources.
|
250
|
Khan A, Garner R, Rocca ML, Salehi S, Duncan D. A Novel Threshold-Based Segmentation Method for Quantification of COVID-19 Lung Abnormalities. SIGNAL, IMAGE AND VIDEO PROCESSING 2022; 17:907-914. [PMID: 35371333 PMCID: PMC8958480 DOI: 10.1007/s11760-022-02183-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 11/23/2021] [Accepted: 02/17/2022] [Indexed: 06/14/2023]
Abstract
Since December 2019, the novel coronavirus disease 2019 (COVID-19) has claimed the lives of more than 3.75 million people worldwide. Consequently, methods for accurate COVID-19 diagnosis and classification are necessary to facilitate rapid patient care and terminate viral spread. Lung infection segmentations are useful to identify unique infection patterns that may support rapid diagnosis, severity assessment, and patient prognosis prediction, but manual segmentations are time-consuming and depend on radiologic expertise. Deep learning-based methods have been explored to reduce the burdens of segmentation; however, their accuracies are limited due to the lack of large, publicly available annotated datasets that are required to establish ground truths. For these reasons, we propose a semi-automatic, threshold-based segmentation method to generate region of interest (ROI) segmentations of infection visible on lung computed tomography (CT) scans. Infection masks are then used to calculate the percentage of lung abnormality (PLA) to determine COVID-19 severity and to analyze the disease progression in follow-up CTs. Compared with other COVID-19 ROI segmentation methods, on average, the proposed method achieved improved precision (47.49%) and specificity (98.40%) scores. Furthermore, the proposed method generated PLAs with a difference of ±3.89% from the ground-truth PLAs. The improved ROI segmentation results suggest that the proposed method has potential to assist radiologists in assessing infection severity and analyzing disease progression in follow-up CTs.
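The percentage of lung abnormality (PLA) described above is the infected volume expressed as a percentage of total lung volume. A minimal sketch of that calculation, assuming binary infection and lung masks over the same voxel grid (the function name and flat-mask representation are illustrative, not the authors' implementation):

```python
def percentage_lung_abnormality(infection_mask, lung_mask):
    """PLA: infected lung voxels as a percentage of all lung voxels.
    Masks are equal-length sequences of 0/1 voxel labels."""
    assert len(infection_mask) == len(lung_mask)
    lung_voxels = sum(lung_mask)
    if lung_voxels == 0:
        raise ValueError("empty lung mask")
    # Count voxels flagged as infection that also lie inside the lung.
    infected = sum(i and l for i, l in zip(infection_mask, lung_mask))
    return 100.0 * infected / lung_voxels
```

Comparing PLA values across follow-up CTs of the same patient is what enables the disease-progression analysis the abstract describes.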
Affiliation(s)
- Azrin Khan
  - Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
  - Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Rachael Garner
  - Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Marianna La Rocca
  - Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
  - Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Sana Salehi
  - Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Dominique Duncan
  - Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
|