251
Ortiz S, Rojas F, Valenzuela O, Herrera LJ, Rojas I. Determination of the Severity and Percentage of COVID-19 Infection through a Hierarchical Deep Learning System. J Pers Med 2022;12:535. [PMID: 35455654] [PMCID: PMC9027976] [DOI: 10.3390/jpm12040535]
Abstract
The coronavirus disease 2019 (COVID-19) has caused millions of deaths and one of the greatest health crises of all time. Early detection of the infection is one of the most important aspects of this disease, to avoid its spread. In addition, it is essential to know how the disease progresses in patients, to improve patient care. This contribution presents a novel method based on a hierarchical intelligent system that analyzes the application of deep learning models to detect and classify patients with COVID-19 using both X-ray and chest computed tomography (CT). The methodology is divided into three phases: first, detecting whether or not a patient suffers from COVID-19; second, evaluating the percentage of infection; and third, classifying patients according to their severity. Stratification of patients suffering from COVID-19 according to their severity using automatic systems based on machine learning on medical images (especially X-ray and CT of the lungs) provides a powerful tool to help medical experts in decision making. In this article, a new contribution is made to a stratification system with three severity levels (mild, moderate and severe) using a novel histogram database (which describes how the infection is distributed across the CT slices of a patient suffering from COVID-19). The first two phases use pre-trained DenseNet-161 CNN models, and the last uses SVM with LDA supervised learning algorithms as classification models. The initial stage detects the presence of COVID-19 through multi-class X-ray classification (COVID-19 vs. No-Findings vs. Pneumonia), obtaining accuracy, precision, recall, and F1-score values of 88%, 91%, 87%, and 89%, respectively. The following stage estimates the percentage of COVID-19 infection in the CT slices of a patient, with a Pearson correlation coefficient of 0.95, an MAE of 5.14 and an RMSE of 8.47. The last stage classifies a patient into one of three degrees of severity as a function of the global infection of the lungs, achieving 95% accuracy.
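The three-phase pipeline described above can be sketched in code. The following is a minimal, illustrative sketch (not the authors' implementation), assuming a pretrained DenseNet-161 from torchvision for the first two phases and a hypothetical per-patient histogram feature for the SVM/LDA severity stage:

```python
import torch
import torchvision.models as models
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Phase 1: X-ray triage (COVID-19 vs. No-Findings vs. Pneumonia) with DenseNet-161.
xray_model = models.densenet161(weights="DEFAULT")
xray_model.classifier = torch.nn.Linear(xray_model.classifier.in_features, 3)

# Phase 2: per-slice infection-percentage regression on CT, again DenseNet-161 based.
ct_model = models.densenet161(weights="DEFAULT")
ct_model.classifier = torch.nn.Linear(ct_model.classifier.in_features, 1)

# Phase 3: severity stratification (mild/moderate/severe) from a per-patient
# histogram of slice-level infection percentages, classified with LDA + SVM.
severity_clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2), SVC(kernel="rbf"))

def patient_histogram(slice_percentages, bins=10):
    """Summarize per-slice infection percentages (0-100) as a normalized histogram."""
    hist = torch.histc(torch.as_tensor(slice_percentages, dtype=torch.float32),
                       bins=bins, min=0.0, max=100.0)
    return (hist / hist.sum()).numpy()
```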
Affiliation(s)
- Sergio Ortiz
- School of Technology and Telecommunications Engineering, University of Granada, 18071 Granada, Spain
- Fernando Rojas
- School of Technology and Telecommunications Engineering, University of Granada, 18071 Granada, Spain
- Olga Valenzuela
- Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
- Luis Javier Herrera
- School of Technology and Telecommunications Engineering, University of Granada, 18071 Granada, Spain
- Ignacio Rojas
- School of Technology and Telecommunications Engineering, University of Granada, 18071 Granada, Spain
252
Punn NS, Agarwal S. CHS-Net: A Deep Learning Approach for Hierarchical Segmentation of COVID-19 via CT Images. Neural Process Lett 2022;54:3771-3792. [PMID: 35310011] [PMCID: PMC8924740] [DOI: 10.1007/s11063-022-10785-x]
Abstract
The pandemic of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19, has been spreading worldwide, causing rampant loss of lives. Medical imaging such as computed tomography (CT), X-ray, etc., plays a significant role in diagnosing patients by presenting a visual representation of the functioning of the organs. However, for any radiologist, analyzing such scans is a tedious and time-consuming task. Emerging deep learning technologies have displayed their strength in analyzing such scans to aid in the faster diagnosis of diseases and viruses such as COVID-19. In the present article, an automated deep learning based model, the COVID-19 hierarchical segmentation network (CHS-Net), is proposed that functions as a semantic hierarchical segmenter to identify the COVID-19 infected regions within the lung contours from CT medical imaging using two cascaded residual attention inception U-Net (RAIU-Net) models. RAIU-Net comprises a residual inception U-Net model with a spectral spatial and depth attention network (SSD), developed with contraction and expansion phases of depthwise separable convolutions and hybrid pooling (max and spectral pooling) to efficiently encode and decode semantic and varying-resolution information. CHS-Net is trained with a segmentation loss function defined as the average of the binary cross entropy loss and the dice loss, to penalize false negative and false positive predictions. The approach is compared with recently proposed approaches and evaluated using standard metrics such as accuracy, precision, specificity, recall, dice coefficient and Jaccard similarity, along with visualized interpretation of the model predictions with GradCam++ and uncertainty maps. In extensive trials, the proposed approach outperformed the recently proposed approaches and effectively segments the COVID-19 infected regions in the lungs.
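The loss described for CHS-Net (the average of binary cross-entropy and Dice loss) is a standard combination; a minimal PyTorch sketch of that objective, not taken from the CHS-Net source, might look as follows:

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, eps=1e-6):
    """Average of binary cross-entropy and soft Dice loss for masks shaped (N, 1, H, W)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice_loss = 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()
    return 0.5 * (bce + dice_loss)
```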
253
COVI3D: Automatic COVID-19 CT Image-Based Classification and Visualization Platform Utilizing Virtual and Augmented Reality Technologies. Diagnostics (Basel) 2022;12:649. [PMID: 35328202] [PMCID: PMC8947514] [DOI: 10.3390/diagnostics12030649]
Abstract
Recently, many studies have shown the effectiveness of using augmented reality (AR) and virtual reality (VR) in biomedical image analysis. However, they do not automate the COVID-19 severity classification process. Additionally, even with the high potential of CT scan imagery to contribute to research and clinical use for COVID-19 (including two common tasks in lung image analysis: segmentation and classification of infection regions), publicly available datasets covering Algerian patients are still missing from the care system. This article proposes an automatic VR and AR platform for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic data analysis, classification, and visualization to address the above-mentioned challenges, including (1) a novel automatic CT image segmentation and localization system to deliver critical information about the shapes and volumes of infected lungs, (2) volume measurements and a lung voxel-based classification procedure, and (3) an AR and VR user-friendly three-dimensional interface. The work also collected patient questionnaires and qualitative feedback from medical staff, which led to advances in scalability and higher levels of engagement and evaluation. Extensive computer simulations on CT image classification show better efficiency than state-of-the-art methods on a COVID-19 dataset of 500 Algerian patients. The developed system has been used by medical professionals for better and faster diagnosis of the disease and for providing more accurate treatment plans using real-time data and patient information.
254
Bartoli A, Fournel J, Maurin A, Marchi B, Habert P, Castelli M, Gaubert JY, Cortaredona S, Lagier JC, Million M, Raoult D, Ghattas B, Jacquier A. Value and prognostic impact of a deep learning segmentation model of COVID-19 lung lesions on low-dose chest CT. Research in Diagnostic and Interventional Imaging 2022;1:100003. [PMID: 37520010] [PMCID: PMC8939894] [DOI: 10.1016/j.redii.2022.100003]
Abstract
Objectives 1) To develop a deep learning (DL) pipeline allowing quantification of COVID-19 pulmonary lesions on low-dose computed tomography (LDCT). 2) To assess the prognostic value of DL-driven lesion quantification. Methods This monocentric retrospective study included training and test datasets taken from 144 and 30 patients, respectively. The reference was the manual segmentation of 3 labels: normal lung, ground-glass opacity (GGO) and consolidation (Cons). Model performance was evaluated with technical metrics, disease volume and extent. Intra- and interobserver agreement were recorded. The prognostic value of DL-driven disease extent was assessed in 1621 distinct patients using C-statistics. The end point was a combined outcome defined as death, hospitalization >10 days, intensive care unit hospitalization or oxygen therapy. Results The Dice coefficients for lesion (GGO+Cons) segmentations were 0.75±0.08, exceeding the values for human interobserver (0.70±0.08; 0.70±0.10) and intraobserver measures (0.72±0.09). DL-driven lesion quantification had a stronger correlation with the reference than inter- or intraobserver measures. After stepwise selection and adjustment for clinical characteristics, quantification significantly increased the prognostic accuracy of the model (0.82 vs. 0.90; p<0.0001). Conclusions A DL-driven model can provide reproducible and accurate segmentation of COVID-19 lesions on LDCT. Automatic lesion quantification has independent prognostic value for the identification of high-risk patients.
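For readers unfamiliar with the metrics used here, the sketch below shows how a Dice similarity coefficient and a DL-driven disease extent could be computed from integer-label masks; the label coding (1 = normal lung, 2 = GGO, 3 = consolidation) is an assumption for illustration, not the study's convention:

```python
import numpy as np

LUNG_LABELS = (1, 2, 3)    # assumed coding: normal lung, GGO, consolidation
LESION_LABELS = (2, 3)     # GGO + Cons

def dice(pred, ref, label):
    """Dice similarity coefficient for one label in two integer-label masks."""
    p, r = (pred == label), (ref == label)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0

def disease_extent(mask):
    """Lesion volume (GGO + consolidation) as a fraction of the total lung volume."""
    lung = np.isin(mask, LUNG_LABELS).sum()
    return np.isin(mask, LESION_LABELS).sum() / lung if lung else 0.0
```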
Key Words
- ACE, angiotensin-converting enzyme
- Artificial intelligence
- BMI, body mass index
- CNN, convolutional neural network
- COVID-19
- COVID-19, coronavirus disease 2019
- CT-SS, chest tomography severity score
- Cons, consolidation
- DL, deep learning
- DSC, Dice similarity coefficient
- Deep learning
- Diagnostic imaging
- GGO, ground-glass opacity
- ICU, intensive care unit
- LDCT, low-dose computed tomography
- MAE, mean absolute error
- MVSF, mean volume similarity fraction
- Multidetector computed tomography
- ROC, receiver operating characteristic
Affiliation(s)
- Axel Bartoli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Joris Fournel
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Arnaud Maurin
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Baptiste Marchi
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Paul Habert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Maxime Castelli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Jean-Yves Gaubert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Sebastien Cortaredona
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, VITROME, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Jean-Christophe Lagier
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Matthieu Million
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Didier Raoult
- Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerannée Infection, 19-21 boulevard Jean Moulin, 13005, Marseille, France
- Badih Ghattas
- I2M - UMR CNRS 7373, Aix-Marseille University. CNRS, Centrale Marseille, 13453 Marseille, France
- Alexis Jacquier
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM. 264, rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
255
Nayak J, Naik B, Dinesh P, Vakula K, Dash PB, Pelusi D. Significance of deep learning for Covid-19: state-of-the-art review. Research on Biomedical Engineering 2022. [PMCID: PMC7980106] [DOI: 10.1007/s42600-021-00135-6]
Abstract
Purpose The emergence of the 2019 novel coronavirus (Covid-19), for which there was no treatment or vaccine, created an urgent need for new advances in drug discovery. The pandemic of NCOV-19 (novel coronavirus-19) has been recognized by the World Health Organization as a public health disaster causing widespread distress. Different pandemic models for NCOV-19 are being exploited by researchers all over the world to obtain informed assessments and impose major control measures. Among the standard techniques for NCOV-19 global outbreak prediction, epidemiological and simple statistical techniques have received the most attention from researchers. The insufficiency of health tests became a major difficulty in controlling the spread of NCOV-19. To address this problem, deep learning has emerged as a novel solution among the many available machine learning techniques. Deep learning has attained advanced performance in medical applications and has the capacity to recognize patterns in large, complex datasets, making it an appropriate method for analyzing patients affected by NCOV-19. Conversely, these techniques for disease recognition focus entirely on enhancing the accuracy of forecasts or classifications without measuring the uncertainty of a decision. Knowing how much confidence is present in a computer-based health analysis is necessary for gaining clinicians' trust in the technology and, consequently, for advancing treatment. Today, NCOV-19 is the main healthcare challenge throughout the world. Detecting NCOV-19 in X-ray images is vital for diagnosis, treatment, and evaluation; still, analytical uncertainty in a report is a difficult yet unavoidable issue for radiologists. Method In this paper, an in-depth analysis has been performed on the significance of deep learning for Covid-19, and as per the standard search databases, this is the first review concentrating particularly on deep learning for NCOV-19. Conclusion The main aim behind this research work is to inspire the research community to innovate novel research using deep learning. Moreover, the outcome of this detailed, structured review on the impact of deep learning in Covid-19 analysis will be helpful for further investigations on various modalities of disease detection and prevention and for finding novel solutions.
Affiliation(s)
- Janmenjoy Nayak
- Department of Computer Science and Engineering, Aditya Institute of Technology and Management (AITAM), K Kotturu, Tekkali, AP 532201 India
- Bighnaraj Naik
- Department of Computer Application, Veer Surendra Sai University of Technology, Burla, Odisha 768018 India
- Paidi Dinesh
- Department of Computer Science and Engineering, Sri Sivani College of Engineering, Srikakulam, AP 532402 India
- Kanithi Vakula
- Department of Computer Science and Engineering, Sri Sivani College of Engineering, Srikakulam, AP 532402 India
- Pandit Byomakesha Dash
- Department of Computer Application, Veer Surendra Sai University of Technology, Burla, Odisha 768018 India
- Danilo Pelusi
- Faculty of Communication Sciences, University of Teramo, Coste Sant'Agostino Campus, Teramo, Italy
256
Meng Y, Zhang H, Zhao Y, Yang X, Qiao Y, MacCormick IJC, Huang X, Zheng Y. Graph-Based Region and Boundary Aggregation for Biomedical Image Segmentation. IEEE Transactions on Medical Imaging 2022;41:690-701. [PMID: 34714742] [DOI: 10.1109/tmi.2021.3123567]
Abstract
Segmentation is a fundamental task in biomedical image analysis. Unlike the existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism can enhance the interaction between each graph reasoning module's global semantic information and local spatial characteristics. Our model, in particular, is capable of concurrently addressing region and boundary feature reasoning and aggregation at several different feature levels due to the proposed multi-level feature node embeddings in different parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary.
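As a rough orientation to the message aggregation and node update mechanism mentioned above, here is a generic single round of graph reasoning in PyTorch; it is a simplified stand-in, not the paper's Attention Enhancement Module or its cross-domain edge weighting:

```python
import torch

def graph_reasoning_step(node_feats, edge_weights, proj):
    """One generic round of message passing: aggregate neighbour embeddings through
    data-dependent edge weights, then update nodes with a learned projection."""
    messages = edge_weights @ node_feats        # (N, N) x (N, D) weighted neighbour sum
    return torch.relu((node_feats + messages) @ proj)

# Example shapes: 16 nodes with 64-dim embeddings and a learnable 64x64 projection.
nodes = torch.randn(16, 64)
edges = torch.softmax(torch.randn(16, 16), dim=-1)   # row-normalized, data-dependent weights
proj = torch.nn.Parameter(torch.randn(64, 64) * 0.01)
updated = graph_reasoning_step(nodes, edges, proj)
```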
257
Wang G, Zhai S, Lasio G, Zhang B, Yi B, Chen S, Macvittie TJ, Metaxas D, Zhou J, Zhang S. Semi-Supervised Segmentation of Radiation-Induced Pulmonary Fibrosis From Lung CT Scans With Multi-Scale Guided Dense Attention. IEEE Transactions on Medical Imaging 2022;41:531-542. [PMID: 34606451] [PMCID: PMC9271367] [DOI: 10.1109/tmi.2021.3117564]
Abstract
Computed Tomography (CT) plays an important role in monitoring radiation-induced Pulmonary Fibrosis (PF), where accurate segmentation of the PF lesions is highly desired for diagnosis and treatment follow-up. However, the task is challenged by ambiguous boundary, irregular shape, various position and size of the lesions, as well as the difficulty in acquiring a large set of annotated volumetric images for training. To overcome these problems, we propose a novel convolutional neural network called PF-Net and incorporate it into a semi-supervised learning framework based on Iterative Confidence-based Refinement And Weighting of pseudo Labels (I-CRAWL). Our PF-Net combines 2D and 3D convolutions to deal with CT volumes with large inter-slice spacing, and uses multi-scale guided dense attention to segment complex PF lesions. For semi-supervised learning, our I-CRAWL employs pixel-level uncertainty-based confidence-aware refinement to improve the accuracy of pseudo labels of unannotated images, and uses image-level uncertainty for confidence-based image weighting to suppress low-quality pseudo labels in an iterative training process. Extensive experiments with CT scans of Rhesus Macaques with radiation-induced PF showed that: 1) PF-Net achieved higher segmentation accuracy than existing 2D, 3D and 2.5D neural networks, and 2) I-CRAWL outperformed state-of-the-art semi-supervised learning methods for the PF lesion segmentation task. Our method has a potential to improve the diagnosis of PF and clinical assessment of side effects of radiotherapy for lung cancers.
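The confidence-based refinement and weighting of pseudo labels described for I-CRAWL can be illustrated schematically. The sketch below is a simplification under stated assumptions, not the authors' code: it derives a pseudo label, a pixel-level confidence mask, and an image-level weight from softmax probabilities on an unannotated volume.

```python
import torch

def pseudo_label_weights(probs, tau=0.75):
    """probs: softmax output shaped (N, C, D, H, W) for an unannotated CT volume.
    Returns the pseudo label, a pixel-level confidence mask, and an image-level weight."""
    confidence, pseudo = probs.max(dim=1)      # per-voxel max probability and argmax label
    pixel_mask = (confidence > tau).float()    # refine: keep only confident voxels
    image_weight = confidence.mean()           # weight: low-confidence volumes count less
    return pseudo, pixel_mask, image_weight
```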
258
Oulefki A, Agaian S, Trongtirakul T, Benbelkacem S, Aouam D, Zenati-Henda N, Abdelli ML. Virtual Reality visualization for computerized COVID-19 lesion segmentation and interpretation. Biomed Signal Process Control 2022;73:103371. [PMID: 34840591] [PMCID: PMC8610934] [DOI: 10.1016/j.bspc.2021.103371]
Abstract
Coronavirus disease (COVID-19) is a severe infectious disease that causes respiratory illness and has had devastating medical and economic consequences globally. Therefore, early and precise diagnosis is critical to control disease progression and management. Compared to the very popular RT-PCR (reverse-transcription polymerase chain reaction) method, chest CT imaging is a more consistent, sensitive, and fast approach for identifying and managing infected COVID-19 patients, specifically in the epidemic area. CT imaging uses computational methods to combine 2D X-ray images and transform them into 3D images. One major drawback of CT scans in diagnosing COVID-19 is the creation of false-negative effects, especially in early infection. This article aims to combine novel CT imaging tools and Virtual Reality (VR) technology to generate an automated system for accurately screening COVID-19 disease and navigating 3D visualizations of medical scenes. The key benefits of this system are that a) it offers stereoscopic depth perception, b) it gives better insight and comprehension into the overall imaging data, c) it allows doctors to visualize the 3D models, manipulate them, study the interior 3D data, and perform several kinds of measurements, and d) it has the capacity for real-time interactivity and accurately visualizes dynamic 3D volumetric data. The tool provides novel visualizations for medical practitioners to identify and analyze changes in the shape of COVID-19 infections. The second objective of this work is to generate, for the first time, a CT scan dataset of African COVID-19 patients containing 224 patients positive for infection and CT-scan images of 70 regular patients. Computer simulations demonstrate the proposed method's effectiveness compared with state-of-the-art baseline methods. The results have also been evaluated with medical professionals. The developed system could be used for medical education, professional training, and as a telehealth VR platform.
Affiliation(s)
- Adel Oulefki
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Sos Agaian
- Dept. of Computer Science, College of Staten Island, New York, 2800 Victory Blvd Staten Island, New York 10314, USA
- Thaweesak Trongtirakul
- Faculty of Industrial Education, Rajamangala University of Technology Phra Nakhon, 399 Samsen Rd. Vachira Phayaban, Dusit, Bangkok 10300, Thailand
- Samir Benbelkacem
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Djamel Aouam
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Nadia Zenati-Henda
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
259
Zhou Q, Qin J, Xiang X, Tan Y, Ren Y. MOLS-Net: Multi-organ and lesion segmentation network based on sequence feature pyramid and attention mechanism for aortic dissection diagnosis. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107853]
260
Enshaei N, Oikonomou A, Rafiee MJ, Afshar P, Heidarian S, Mohammadi A, Plataniotis KN, Naderkhani F. COVID-rate: an automated framework for segmentation of COVID-19 lesions from chest CT images. Sci Rep 2022;12:3212. [PMID: 35217712] [PMCID: PMC8881477] [DOI: 10.1038/s41598-022-06854-9]
Abstract
Novel Coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest Computed Tomography (CT) images can help determine the disease stage, efficiently allocate limited healthcare resources, and make informed treatment decisions. During the pandemic era, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists became expensive and prone to error, which raises an urgent quest to develop practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients that have been annotated by an expert radiologist. Second, a Deep Neural Network (DNN)-based framework is proposed, referred to as COVID-Rate, that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. Performance of the proposed COVID-Rate framework is evaluated through several experiments based on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training set and test set and improve model generalization. The enhanced results show a dice score of 0.8069 and specificity and sensitivity of 0.9969 and 0.8354, respectively. Furthermore, the results indicate that the COVID-Rate model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capabilities of the COVID-Rate model to CT images obtained from a different scanner.
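The Dice, sensitivity and specificity figures quoted above can be reproduced from binary masks with a few lines of NumPy; the helper below is a generic illustration rather than the evaluation code used in the paper:

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Dice, sensitivity and specificity for binary lesion masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity
```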
Affiliation(s)
- Nastaran Enshaei
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Anastasia Oikonomou
- Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada.
- Moezedin Javad Rafiee
- Department of Medicine and Diagnostic Radiology, McGill University, Montreal, QC, Canada
- Parnian Afshar
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Shahin Heidarian
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Arash Mohammadi
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Farnoosh Naderkhani
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
261
Aswathy AL, Vinod Chandra SS. Cascaded 3D UNet architecture for segmenting the COVID-19 infection from lung CT volume. Sci Rep 2022;12:3090. [PMID: 35197504] [PMCID: PMC8866496] [DOI: 10.1038/s41598-022-06931-z]
Abstract
The World Health Organization (WHO) declared COVID-19 (COronaVIrus Disease 2019) a pandemic on March 11, 2020. Ever since then, the virus has been undergoing different mutations, with a high rate of dissemination. The diagnosis and prognosis of COVID-19 are critical in bringing the situation under control. The COVID-19 virus replicates in the lungs after entering the upper respiratory system, causing pneumonia and mortality. Deep learning has a significant role in detecting infections from Computed Tomography (CT). With the help of basic image processing techniques and deep learning, we have developed a two-stage cascaded 3D UNet to segment the contaminated area of the lungs. The first 3D UNet extracts the lung parenchyma from the CT volume input after preprocessing and augmentation. Since the CT volume is small, we apply appropriate post-processing to the lung parenchyma and input these volumes into the second 3D UNet. The second 3D UNet extracts the infected 3D volumes. With this method, clinicians can input the complete CT volume of the patient and analyze the contaminated area without having to label the lung parenchyma for each new patient. For lung parenchyma segmentation, the proposed method obtained a sensitivity of 93.47%, a specificity of 98.64%, an accuracy of 98.07%, and a dice score of 92.46%. We achieved a sensitivity of 83.33%, a specificity of 99.84%, an accuracy of 99.20%, and a dice score of 82% for lung infection segmentation.
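The two-stage cascade described above (lung parenchyma first, infection second) can be sketched as follows; lung_net and infection_net stand for hypothetical trained 3D U-Nets, and the thresholding and masking steps are illustrative assumptions, not the authors' exact pre- and post-processing:

```python
import torch

def cascaded_infection_segmentation(ct_volume, lung_net, infection_net, threshold=0.5):
    """Stage 1 isolates the lung parenchyma; stage 2 segments infection inside it.
    Both networks are assumed to return single-channel logits over the volume."""
    lung_mask = (torch.sigmoid(lung_net(ct_volume)) > threshold).float()
    lung_only = ct_volume * lung_mask                   # suppress everything outside the lungs
    infection = (torch.sigmoid(infection_net(lung_only)) > threshold).float()
    return lung_mask, infection * lung_mask             # infection constrained to the lungs
```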
Affiliation(s)
- Aswathy A L
- Department of Computer Science, University of Kerala, Thiruvananthapuram, India.
- Vinod Chandra S S
- Department of Computer Science, University of Kerala, Thiruvananthapuram, India
262
Liu T, Siegel E, Shen D. Deep Learning and Medical Image Analysis for COVID-19 Diagnosis and Prediction. Annu Rev Biomed Eng 2022;24:179-201. [PMID: 35316609] [DOI: 10.1146/annurev-bioeng-110220-012203]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has imposed dramatic challenges to health-care organizations worldwide. To combat the global crisis, the use of thoracic imaging has played a major role in diagnosis, prediction, and management for COVID-19 patients with moderate to severe symptoms or with evidence of worsening respiratory status. In response, the medical image analysis community acted quickly to develop and disseminate deep learning models and tools to meet the urgent need of managing and interpreting large amounts of COVID-19 imaging data. This review aims to not only summarize existing deep learning and medical image analysis methods but also offer in-depth discussions and recommendations for future investigations. We believe that the wide availability of high-quality, curated, and benchmarked COVID-19 imaging data sets offers the great promise of a transformative test bed to develop, validate, and disseminate novel deep learning methods in the frontiers of data science and artificial intelligence.
Affiliation(s)
- Tianming Liu
- Department of Computer Science, University of Georgia, Athens, Georgia, USA
- Eliot Siegel
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Maryland, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
263
Yao HY, Wan WG, Li X. A deep adversarial model for segmentation-assisted COVID-19 diagnosis using CT images. EURASIP Journal on Advances in Signal Processing 2022;2022:10. [PMID: 35194421] [PMCID: PMC8830991] [DOI: 10.1186/s13634-022-00842-x]
Abstract
The outbreak of coronavirus disease 2019 (COVID-19) is spreading rapidly around the world, resulting in a global pandemic. Imaging techniques such as computed tomography (CT) play an essential role in the diagnosis and treatment of the disease, since lung infection or pneumonia is a common complication. However, training a deep network to learn how to diagnose COVID-19 rapidly and accurately in CT images and segment the infected regions like a radiologist is challenging. Since the infected area is difficult to distinguish, manual annotation of the segmentation is time-consuming. To tackle these problems, we propose an efficient method based on a deep adversarial network to segment the infection regions automatically. Then, the predicted segmentation results can assist the diagnostic network in identifying the COVID-19 samples from the CT images. On the other hand, a radiologist-like segmentation network provides detailed information on the infected regions by separating areas of ground-glass opacity, consolidation, and pleural effusion, respectively. Our method can accurately predict the COVID-19 infection probability and provide lesion regions in CT images with limited training data. Additionally, we have established a public dataset for multitask learning. Extensive experiments on diagnosis and segmentation show superior performance over state-of-the-art methods.
Affiliation(s)
- Hai-yan Yao
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Anyang Institute of Technology, Anyang, China
- Wang-gen Wan
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Xiang Li
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
264
Semantic segmentation of COVID-19 lesions with a multiscale dilated convolutional network. Sci Rep 2022;12:1847. [PMID: 35115573] [PMCID: PMC8814191] [DOI: 10.1038/s41598-022-05527-x]
Abstract
Automatic segmentation of infected lesions from computed tomography (CT) of COVID-19 patients is crucial for accurate diagnosis and follow-up assessment. The remaining challenges are the obvious scale difference between different types of COVID-19 lesions and the similarity between the lesions and normal tissues. This work aims to segment lesions of different scales and lesion boundaries correctly by utilizing multiscale and multilevel features. A novel multiscale dilated convolutional network (MSDC-Net) is proposed against the scale difference of lesions and the low contrast between lesions and normal tissues in CT images. In our MSDC-Net, we propose a multiscale feature capture block (MSFCB) to effectively capture multiscale features for better segmentation of lesions at different scales. Furthermore, a multilevel feature aggregate (MLFA) module is proposed to reduce the information loss in the downsampling process. Experiments on the publicly available COVID-19 CT Segmentation dataset demonstrate that the proposed MSDC-Net is superior to other existing methods in segmenting lesion boundaries and large, medium, and small lesions, and achieves the best results in Dice similarity coefficient, sensitivity and mean intersection-over-union (mIoU) scores of 82.4%, 81.1% and 78.2%, respectively. Compared with other methods, the proposed model has an average improvement of 10.6% and 11.8% on Dice and mIoU. Compared with the existing methods, our network achieves more accurate segmentation of lesions at various scales and lesion boundaries, which will facilitate further clinical analysis. In the future, we consider integrating the automatic detection and segmentation of COVID-19, and conduct research on the automatic diagnosis system of COVID-19.
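A multiscale feature capture block built from parallel dilated convolutions, in the spirit of the MSFCB described above, could be sketched as below; the branch count, dilation rates and fusion layer are assumptions for illustration, not the published architecture:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, concatenated and fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation mixes the scales.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```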
265
Barshooi AH, Amirkhani A. A novel data augmentation based on Gabor filter and convolutional deep learning for improving the classification of COVID-19 chest X-Ray images. Biomed Signal Process Control 2022;72:103326. [PMID: 34777557] [PMCID: PMC8576144] [DOI: 10.1016/j.bspc.2021.103326]
Abstract
A dangerous infectious disease of the current century, the COVID-19 has apparently originated in a city in China and turned into a widespread pandemic within a short time. In this paper, a novel method has been presented for improving the screening and classification of COVID-19 patients based on their chest X-Ray (CXR) images. This method eliminates the severe dependence of the deep learning models on large datasets and the deep features extracted from them. In this approach, we have not only resolved the data limitation problem by combining the traditional data augmentation techniques with the generative adversarial networks (GANs), but also have enabled a deeper extraction of features by applying different filter banks such as the Sobel, Laplacian of Gaussian (LoG) and the Gabor filters. To verify the satisfactory performance of the proposed approach, it was applied on several deep transfer models and the results in each step were compared with each other. For training the entire models, we used 4560 CXR images of various patients with the viral, bacterial, fungal, and other diseases; 360 of these images are in the COVID-19 category and the rest belong to the non-COVID-19 diseases. According to the results, the Gabor filter bank achieves the highest growth in the values of the defined evaluation criteria and in just 45 epochs, it is able to elevate the accuracy by up to 32%. We then applied the proposed model on the DenseNet-201 model and compared its performance in terms of the detection accuracy with the performances of 10 existing COVID-19 detection techniques. Our approach was able to achieve an accuracy of 98.5% in the two-class classification procedure; which makes it a state-of-the-art method for detecting the COVID-19.
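The Gabor filter bank that drives the reported accuracy gain is a classical texture filter; a minimal OpenCV sketch of building such a bank and taking the maximum response over orientations is shown below, with illustrative parameter values rather than the ones tuned in the paper:

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=8):
    """Bank of Gabor kernels at evenly spaced orientations."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
            for theta in thetas]

def gabor_response(image, kernels):
    """Maximum filter response over the bank: a texture-enhanced view of a CXR image."""
    responses = [cv2.filter2D(image, cv2.CV_32F, k) for k in kernels]
    return np.max(np.stack(responses, axis=0), axis=0)
```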
Affiliation(s)
- Amir Hossein Barshooi
- School of Automotive Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
- Abdollah Amirkhani
- School of Automotive Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
266
Hassan H, Ren Z, Zhao H, Huang S, Li D, Xiang S, Kang Y, Chen S, Huang B. Review and classification of AI-enabled COVID-19 CT imaging models based on computer vision tasks. Comput Biol Med 2022;141:105123. [PMID: 34953356] [PMCID: PMC8684223] [DOI: 10.1016/j.compbiomed.2021.105123]
Abstract
This article presents a systematic overview of artificial intelligence (AI) and computer vision strategies for diagnosing the coronavirus disease of 2019 (COVID-19) using computerized tomography (CT) medical images. We analyzed the previous review works and found that all of them ignored classifying and categorizing COVID-19 literature based on computer vision tasks, such as classification, segmentation, and detection. Most of the COVID-19 CT diagnosis methods comprehensively use segmentation and classification tasks. Moreover, most of the review articles are diverse and cover CT as well as X-ray images. Therefore, we focused on the COVID-19 diagnostic methods based on CT images. Well-known search engines and databases such as Google, Google Scholar, Kaggle, Baidu, IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus were utilized to collect relevant studies. After deep analysis, we collected 114 studies and reported highly enriched information for each selected research. According to our analysis, AI and computer vision have substantial potential for rapid COVID-19 diagnosis as they could significantly assist in automating the diagnosis process. Accurate and efficient models will have real-time clinical implications, though further research is still required. Categorization of literature based on computer vision tasks could be helpful for future research; therefore, this review article will provide a good foundation for conducting such research.
Affiliation(s)
- Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
- Zhaoyu Ren
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Huishi Zhao
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Shoujin Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Dan Li
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Shaohua Xiang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Yan Kang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Medical Device Innovation Research Center, Shenzhen Technology University, Shenzhen, China
- Sifan Chen
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China; Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.
267
Wang R, Chen S, Ji C, Fan J, Li Y. Boundary-Aware Context Neural Network for Medical Image Segmentation. Med Image Anal 2022;78:102395. [DOI: 10.1016/j.media.2022.102395]
268
Liu X, Yuan Q, Gao Y, He K, Wang S, Tang X, Tang J, Shen D. Weakly Supervised Segmentation of COVID19 Infection with Scribble Annotation on CT Images. Pattern Recognition 2022;122:108341. [PMID: 34565913] [PMCID: PMC8452156] [DOI: 10.1016/j.patcog.2021.108341]
Abstract
Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling the COVID-19. Although the convolutional neural network has great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposed a novel weakly supervised segmentation method for COVID-19 infections in CT slices, which only requires scribble supervision and is enhanced with the uncertainty-aware self-ensembling and transformation-consistent techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations for an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, considering the output of the mean teacher contains both correct and unreliable predictions, equally treating each prediction in the teacher model may degrade the performance of the student network. To alleviate this problem, the pixel level uncertainty measure on the predictions of the teacher model is calculated, and then the student model is only guided by reliable predictions from the teacher model. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation if a transform is performed on an input image of the network. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves similar performance as those fully supervised.
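Two ingredients of the method above, the exponential-moving-average (mean) teacher and uncertainty-gated consistency, are common building blocks; the sketch below shows one generic way to implement them and is not the authors' code (the entropy threshold and EMA decay are assumptions):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Mean teacher: teacher weights are an exponential moving average of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

def uncertainty_consistency(student_probs, teacher_probs, entropy_threshold=0.3):
    """Penalize student/teacher disagreement only where the teacher is confident
    (low predictive entropy); probabilities are shaped (N, C, H, W)."""
    entropy = -(teacher_probs * torch.log(teacher_probs + 1e-8)).sum(dim=1, keepdim=True)
    reliable = (entropy < entropy_threshold).float()
    return ((student_probs - teacher_probs) ** 2 * reliable).mean()
```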
Affiliation(s)
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, China
- Quan Yuan
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, China
- Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Kelei He
- Medical School, Nanjing University, Nanjing, China
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Shuo Wang
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, China
- Xiao Tang
- Department of Medical Imaging, Tianyou Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, China
- Jinshan Tang
- Department of Health Administration and Policy, George Mason University, Fairfax, VA, 22030, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
269
Huang Z, Li L, Zhang X, Song Y, Chen J, Zhao H, Chong Y, Wu H, Yang Y, Shen J, Zha Y. A coarse-refine segmentation network for COVID-19 CT images. IET Image Processing 2022;16:333-343. [PMID: 34899976] [PMCID: PMC8653356] [DOI: 10.1049/ipr2.12278]
Abstract
The rapid spread of the novel coronavirus disease 2019 (COVID-19) causes a significant impact on public health. It is critical to diagnose COVID-19 patients so that they can receive reasonable treatments quickly. The doctors can obtain a precise estimate of the infection's progression and decide more effective treatment options by segmenting the CT images of COVID-19 patients. However, it is challenging to segment infected regions in CT slices because the infected regions are multi-scale, and the boundary is not clear due to the low contrast between the infected area and the normal area. In this paper, a coarse-refine segmentation network is proposed to address these challenges. The coarse-refine architecture and hybrid loss is used to guide the model to predict the delicate structures with clear boundaries to address the problem of unclear boundaries. The atrous spatial pyramid pooling module in the network is added to improve the performance in detecting infected regions with different scales. Experimental results show that the model in the segmentation of COVID-19 CT images outperforms other familiar medical segmentation models, enabling the doctor to get a more accurate estimate on the progression of the infection and thus can provide more reasonable treatment options.
Affiliation(s)
- Ziwang Huang
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
- Liang Li
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
- Xiang Zhang
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Ying Song
- School of Systems Sciences and Engineering, Sun Yat-Sen University, Guangzhou, China
- Jianwen Chen
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
- Huiying Zhao
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Yutian Chong
- Department of Radiology, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Hejun Wu
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
- Yuedong Yang
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
- Jun Shen
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
270
Ter-Sarkisov A. One Shot Model For The Prediction of COVID-19 And Lesions Segmentation In Chest CT Scans Through The Affinity Among Lesion Mask Features. Appl Soft Comput 2022;116:108261. [PMID: 34924896] [PMCID: PMC8668605] [DOI: 10.1016/j.asoc.2021.108261]
Abstract
We present a novel framework that integrates segmentation of lesion masks and prediction of COVID-19 in chest CT scans in one shot. In order to classify the whole input image, we introduce a type of associations among lesion mask features extracted from the scan slice that we refer to as affinities. First, we map mask features to the affinity space by training an affinity matrix. Next, we map them back into the feature space through a trainable affinity vector. Finally, this feature representation is used for the classification of the whole input scan slice. We achieve a 93.55% COVID-19 sensitivity, 96.93% common pneumonia sensitivity, 99.37% true negative rate and 97.37% F1-score on the test split of CNCB-NCOV dataset with 21192 chest CT scan slices. We also achieve a 0.4240 mean average precision on the lesion segmentation task. All source code, models and results are publicly available on https://github.com/AlexTS1980/COVID-Affinity-Model.
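One plausible reading of the affinity mechanism sketched above is given below: lesion-mask features are projected into an affinity space by a trainable matrix, mapped back through a trainable affinity vector, and pooled for slice-level classification. This is an illustrative interpretation only; the actual implementation is in the linked repository and may differ.

```python
import torch
import torch.nn as nn

class AffinityClassifier(nn.Module):
    """Slice-level classifier driven by affinities among lesion-mask features."""
    def __init__(self, feat_dim, affinity_dim, n_classes=3):
        super().__init__()
        self.to_affinity = nn.Linear(feat_dim, affinity_dim, bias=False)   # trainable affinity matrix
        self.from_affinity = nn.Linear(affinity_dim, 1, bias=False)        # trainable affinity vector
        self.head = nn.Linear(feat_dim, n_classes)                         # e.g. COVID / pneumonia / negative

    def forward(self, mask_features):
        # mask_features: (num_lesions, feat_dim) extracted from one CT scan slice.
        affinities = torch.softmax(self.to_affinity(mask_features), dim=0)
        weights = self.from_affinity(affinities)            # per-lesion weights, (num_lesions, 1)
        pooled = (weights * mask_features).sum(dim=0)       # weighted pooling back in feature space
        return self.head(pooled)
```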
Affiliation(s)
- Aram Ter-Sarkisov
- CitAI Research Center, Department of Computer Science, City University of London, United Kingdom
271
Wu X, Zhang Y, Zhang P, Hui H, Jing J, Tian F, Jiang J, Yang X, Chen Y, Tian J. Structure attention co-training neural network for neovascularization segmentation in intravascular optical coherence tomography. Med Phys 2022;49:1723-1738. [PMID: 35061247] [DOI: 10.1002/mp.15477]
Abstract
PURPOSE To develop and validate a neovascularization (NV) segmentation model in intravascular optical coherence tomography (IVOCT) through deep learning methods. METHODS AND MATERIALS A total of 1950 2D slices of 70 IVOCT pullbacks were used in our study. We randomly selected 1273 2D slices from 44 patients as the training set, 379 2D slices from 11 patients as the validation set, and 298 2D slices from the last 15 patients as the testing set. Automatic NV segmentation is quite challenging, as it must address issues of speckle noise, shadow artifacts, high distribution variation, etc. To meet these challenges, a new deep learning-based segmentation method is developed based on a co-training architecture with an integrated structural attention mechanism. Co-training is developed to exploit the features of three consecutive slices. The structural attention mechanism comprises spatial and channel attention modules and is integrated into the co-training architecture at each up-sampling step. A cascaded fixed network is further incorporated to achieve segmentation at the image level in a coarse-to-fine manner. RESULTS Extensive experiments were performed involving a comparison with several state-of-the-art deep learning-based segmentation methods. Moreover, the consistency of the results with those of manual segmentation was also investigated. Our proposed automatic NV segmentation method achieved the highest correlation with the manual delineation by interventional cardiologists (Pearson correlation coefficient of 0.825). CONCLUSION In this work, we proposed a co-training architecture with an integrated structural attention mechanism to segment NV in IVOCT images. The good agreement between our segmentation results and manual segmentation indicates that the proposed method has great potential for application in the clinical investigation of NV-related plaque diagnosis and treatment.
Affiliation(s)
- Xiangjun Wu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Yingqian Zhang
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Peng Zhang
- Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
- Hui Hui
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100190, China
- Jing Jing
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Feng Tian
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Jingying Jiang
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- Xin Yang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Yundai Chen
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing, 100853, China
- Southern Medical University, Guangzhou, 510515, China
- Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100083, China
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing, 100190, China
- Zhuhai Precision Medical Center, Zhuhai People's Hospital, affiliated with Jinan University, Zhuhai, 519000, China
Collapse
|
272
|
Wang X, Zhu L, Tang S, Fu H, Li P, Wu F, Yang Y, Zhuang Y. Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1107-1119. [PMID: 34990359 DOI: 10.1109/tip.2021.3139232] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images. However, RGB-D data is not easily acquired, which limits the development of RGB-D SOD techniques. To alleviate this issue, we present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection. We first devise a depth decoupling convolutional neural network (DDCNN), which contains a depth estimation branch and a saliency detection branch. The depth estimation branch is trained with RGB-D images and then used to estimate the pseudo depth maps for all unlabeled RGB images to form the paired data. The saliency detection branch is used to fuse the RGB feature and depth feature to predict the RGB-D saliency. Then, the whole DDCNN is assigned as the backbone in a teacher-student framework for semi-supervised learning. Moreover, we also introduce a consistency loss on the intermediate attention and saliency maps for the unlabeled data, as well as a supervised depth and saliency loss for labeled data. Experimental results on seven widely-used benchmark datasets demonstrate that our DDCNN outperforms state-of-the-art methods both quantitatively and qualitatively. We also demonstrate that our semi-supervised DS-Net can further improve the performance, even when using an RGB image with the pseudo depth map.
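DS-Net places the depth-decoupling network inside a teacher-student framework with a consistency loss on unlabeled images. The sketch below shows one common way to realize such a scheme, a mean-teacher-style EMA update plus an MSE consistency term; the function names are hypothetical and the network is treated as a single-input callable for brevity, which simplifies the paper's RGB-plus-pseudo-depth setup.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """The teacher starts as a frozen deep copy of the student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, decay: float = 0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def consistency_step(student, teacher, unlabeled_rgb, optimizer):
    """One semi-supervised step: make the student's saliency map agree with
    the teacher's prediction on the same unlabeled RGB batch."""
    optimizer.zero_grad()
    with torch.no_grad():
        target = torch.sigmoid(teacher(unlabeled_rgb))   # pseudo target
    pred = torch.sigmoid(student(unlabeled_rgb))
    loss = F.mse_loss(pred, target)                      # consistency loss
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```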
Collapse
|
273
|
Albishri AA, Shah SJH, Kang SS, Lee Y. AM-UNet: automated mini 3D end-to-end U-net based network for brain claustrum segmentation. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:36171-36194. [PMID: 35035265 PMCID: PMC8742670 DOI: 10.1007/s11042-021-11568-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 09/08/2021] [Accepted: 09/20/2021] [Indexed: 06/14/2023]
Abstract
Recent advances in deep learning (DL) have provided promising solutions to medical image segmentation. Among existing segmentation approaches, the U-Net-based methods have been used widely. However, very few U-Net-based studies have been conducted on automatic segmentation of the human brain claustrum (CL). The CL segmentation is challenging due to its thin, sheet-like structure, heterogeneity of its image modalities and formats, imperfect labels, and data imbalance. We propose an automatic optimized U-Net-based 3D segmentation model, called AM-UNet, designed as an end-to-end process of the pre and post-process techniques and a U-Net model for CL segmentation. It is a lightweight and scalable solution which has achieved the state-of-the-art accuracy for automatic CL segmentation on 3D magnetic resonance images (MRI). On the T1/T2 combined MRI CL dataset, AM-UNet has obtained excellent results, including Dice, Intersection over Union (IoU), and Intraclass Correlation Coefficient (ICC) scores of 82%, 70%, and 90%, respectively. We have conducted the comparative evaluation of AM-UNet with other pre-existing models for segmentation on the MRI CL dataset. As a result, medical experts confirmed the superiority of the proposed AM-UNet model for automatic CL segmentation. The source code and model of the AM-UNet project is publicly available on GitHub: https://github.com/AhmedAlbishri/AM-UNET.
Collapse
Affiliation(s)
- Ahmed Awad Albishri
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
- College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
| | - Syed Jawad Hussain Shah
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
| | - Seung Suk Kang
- Department of Psychiatry Biomedical Sciences, School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64110 USA
| | - Yugyung Lee
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
| |
Collapse
|
274
|
Xie Y, Zhang J, Liao Z, Verjans J, Shen C, Xia Y. Intra- and Inter-Pair Consistency for Semi-Supervised Gland Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:894-905. [PMID: 34951847 DOI: 10.1109/tip.2021.3136716] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate gland segmentation in histology tissue images is a critical but challenging task. Although deep models have demonstrated superior performance in medical image segmentation, they commonly require a large amount of annotated data, which are hard to obtain due to the extensive labor costs and expertise required. In this paper, we propose an intra- and inter-pair consistency-based semi-supervised (I2CS) model that can be trained on both labeled and unlabeled histology images for gland segmentation. Considering that each image contains glands and hence different images could potentially share consistent semantics in the feature space, we introduce a novel intra- and inter-pair consistency module to explore such consistency for learning with unlabeled data. It first characterizes the pixel-level relation between a pair of images in the feature space to create an attention map that highlights the regions with the same semantics but on different images. Then, it imposes a consistency constraint on the attention maps obtained from multiple image pairs, and thus filters low-confidence attention regions to generate refined attention maps that are then merged with original features to improve their representation ability. In addition, we also design an object-level loss to address the issues caused by touching glands. We evaluated our model against several recent gland segmentation methods and three typical semi-supervised methods on the GlaS and CRAG datasets. Our results not only demonstrate the effectiveness of the proposed intra- and inter-pair consistency module and Obj-Dice loss, but also indicate that the proposed I2CS model achieves state-of-the-art gland segmentation performance on both benchmarks.
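The pixel-level relation between an image pair described above can be illustrated with a simple cross-image affinity computation. The sketch below is a generic non-local-style formulation under that assumption; it is not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def cross_image_attention(feat_a, feat_b):
    """
    feat_a, feat_b: (B, C, H, W) feature maps of a paired image batch.
    Returns a soft attention map over feat_a highlighting positions whose
    features are similar to some position in feat_b.
    """
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a.flatten(2), dim=1)        # (B, C, HW)
    fb = F.normalize(feat_b.flatten(2), dim=1)        # (B, C, HW)
    affinity = torch.bmm(fa.transpose(1, 2), fb)      # (B, HW, HW) cosine similarities
    attn = affinity.max(dim=2).values                 # best match in the other image
    return attn.view(b, 1, h, w).sigmoid()            # (B, 1, H, W) attention map
```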
Collapse
|
275
|
Peng Y, Liu E, Peng S, Chen Q, Li D, Lian D. Using artificial intelligence technology to fight COVID-19: a review. Artif Intell Rev 2022; 55:4941-4977. [PMID: 35002010 PMCID: PMC8720541 DOI: 10.1007/s10462-021-10106-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/12/2021] [Indexed: 02/10/2023]
Abstract
In late December 2019, a new type of coronavirus was discovered, later named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Since its discovery, the virus has spread globally, causing 2,975,875 deaths as of 15 April 2021 and placing enormous strain on health systems and economies. Suppressing the continued spread of the novel coronavirus pneumonia (COVID-19) has become the main task of many scientists and researchers, and artificial intelligence technology has contributed substantially to this effort. Based on an extensive literature survey, this article discusses the main applications of artificial intelligence in suppressing the coronavirus across three major aspects (identification, prediction, and development), and puts forward the current main challenges and possible development directions. The results show that combining artificial intelligence with a variety of new technologies is an effective way to predict and identify COVID-19 patients.
Collapse
Affiliation(s)
- Yong Peng
- Petroleum Engineering School, Southwest Petroleum University, Chengdu, 610500 China
| | - Enbin Liu
- Petroleum Engineering School, Southwest Petroleum University, Chengdu, 610500 China
| | - Shanbi Peng
- School of Civil Engineering and Geomatics, Southwest Petroleum University, Chengdu, 610500 China
| | - Qikun Chen
- School of Engineering, Cardiff University, Cardiff, CF24 3AA UK
| | - Dangjian Li
- Petroleum Engineering School, Southwest Petroleum University, Chengdu, 610500 China
| | - Dianpeng Lian
- Petroleum Engineering School, Southwest Petroleum University, Chengdu, 610500 China
| |
Collapse
|
276
|
Wang R, Ji C, Zhang Y, Li Y. Focus, Fusion, and Rectify: Context-Aware Learning for COVID-19 Lung Infection Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:12-24. [PMID: 34813479 DOI: 10.1109/tnnls.2021.3126305] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic is spreading worldwide. Considering the limited clinicians and resources and the evidence that computed tomography (CT) analysis can achieve comparable sensitivity, specificity, and accuracy with reverse-transcription polymerase chain reaction, the automatic segmentation of lung infection from CT scans supplies a rapid and effective strategy for COVID-19 diagnosis, treatment, and follow-up. It is challenging because the infection appearance has high intraclass variation and interclass indistinction in CT slices. Therefore, a new context-aware neural network is proposed for lung infection segmentation. Specifically, the autofocus and panorama modules are designed for extracting fine details and semantic knowledge and capturing the long-range dependencies of the context from both peer level and cross level. Also, a novel structure consistency rectification is proposed for calibration by depicting the structural relationship between foreground and background. Experimental results on multiclass and single-class COVID-19 CT images demonstrate the effectiveness of our work. In particular, our method obtains the mean intersection over union (mIoU) score of 64.8%, 65.2%, and 73.8% on three benchmark datasets for COVID-19 infection segmentation.
Collapse
|
277
|
Abstract
Due to the outbreak of lung infections caused by the coronavirus disease (COVID-19), humans have to face an unprecedented and devastating global health crisis. Since chest computed tomography (CT) images of COVID-19 patients contain abundant pathological features closely related to this disease, rapid detection and diagnosis based on CT images is of great significance for the treatment of patients and blocking the spread of the disease. In particular, the segmentation of the COVID-19 CT lung-infected area can quantify and evaluate the severity of the disease. However, due to the blurred boundaries and low contrast between the infected and the non-infected areas in COVID-19 CT images, the manual segmentation of the COVID-19 lesion is laborious and places high demands on the operator. Quick and accurate segmentation of COVID-19 lesions from CT images based on deep learning has drawn increasing attention. To effectively improve the segmentation of COVID-19 lung infection, a modified UNet network (SD-UNet) that combines the squeeze-and-attention (SA) and dense atrous spatial pyramid pooling (Dense ASPP) modules is proposed, fusing global context and multi-scale information. Specifically, the SA module is introduced to strengthen the attention of pixel grouping and fully exploit the global context information, allowing the network to better mine the differences and connections between pixels. The Dense ASPP module is utilized to capture multi-scale information of COVID-19 lesions. Moreover, to eliminate the interference of background noise outside the lungs and highlight the texture features of the lung lesion area, we extract in advance the lung area from the CT images in the pre-processing stage. Finally, we evaluate our method using the binary-class and multi-class COVID-19 lung infection segmentation datasets. The experimental results show that the metrics of Sensitivity, Dice Similarity Coefficient, Accuracy, Specificity, and Jaccard Similarity are 0.8988 (0.6169), 0.8696 (0.5936), 0.9906 (0.9821), 0.9932 (0.9907), and 0.7702 (0.4788), respectively, for the binary-class (multi-class) segmentation task in the proposed SD-UNet. The result of the COVID-19 lung infection area segmented by SD-UNet is closer to the ground truth than that of several existing models, such as CE-Net, DeepLab v3+, and UNet++, which further proves that a more accurate segmentation effect can be achieved by our method. It has the potential to assist doctors in making more accurate and rapid diagnosis and quantitative assessment of COVID-19.
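Dense ASPP, one of the two building blocks of SD-UNet, chains dilated convolution branches so that each branch receives the input plus all previous branch outputs. A minimal PyTorch sketch follows; channel widths and dilation rates are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseASPP(nn.Module):
    """Dilated conv branches densely connected: each branch sees the input
    plus the outputs of all previous branches (multi-scale context)."""
    def __init__(self, in_ch: int, branch_ch: int = 64,
                 dilations=(3, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, branch_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            ))
            ch += branch_ch                       # dense connectivity grows the input
        self.project = nn.Conv2d(ch, in_ch, 1)    # fuse back to the original width

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1))
```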
Collapse
|
278
|
Abstract
The COVID-19 pandemic presents the Artificial Intelligence (AI) community with many obstacles. Healthcare organizations are in desperate need of decision-making technology to tackle this virus and to obtain timely feedback in real time to prevent its spread. With the epidemic now a global pandemic, AI tools and technology can support the efforts of governments, the medical community, and society as a whole to handle every stage of the crisis and its aftermath: identification, prevention, response, recovery, and acceleration of science. AI simulates human intellect, and this outcome-driven technology can be used to better screen, evaluate, forecast, and monitor current and probable future patients. In this proposed study, aimed at the global COVID-19 pandemic, we incorporate AI-based preventive measures such as face mask detection and analysis of computed tomography scans using advanced deep learning models.
Collapse
|
279
|
Wang Q, Tan X, Ma L, Liu C. Dual Windows Are Significant: Learning from Mediastinal Window and Focusing on Lung Window. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20497-5_16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
280
|
Shiri I, Arabi H, Salimi Y, Sanaat A, Akhavanallaf A, Hajianfar G, Askari D, Moradi S, Mansouri Z, Pakbin M, Sandoughdaran S, Abdollahi H, Radmard AR, Rezaei‐Kalantari K, Ghelich Oghli M, Zaidi H. COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:12-25. [PMID: 34898850 PMCID: PMC8652855 DOI: 10.1002/ima.22672] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 09/18/2021] [Accepted: 10/17/2021] [Indexed: 05/17/2023]
Abstract
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesions segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesions segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lung and lesions, respectively. The relative volume difference for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error less than 5% with the highest mean relative error achieved for the lung for the range first-order feature (-6.95%) and least axis length shape feature (8.68%) for lesions. We developed an automated DL-guided three-dimensional whole lung and infected regions segmentation in COVID-19 patients to provide fast, consistent, robust, and human error immune framework for lung and pneumonia lesion detection and quantification.
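The preprocessing and evaluation steps mentioned here (intensity clipping and normalization, Dice-based assessment) can be written compactly. The snippet below is a generic NumPy sketch; the Hounsfield-unit window is an assumed example, not the study's actual setting.

```python
import numpy as np

def preprocess_ct(volume: np.ndarray, hu_min: float = -1000, hu_max: float = 400):
    """Clip Hounsfield units to a lung window and scale to [0, 1].
    The window bounds here are illustrative, not those of the paper."""
    vol = np.clip(volume, hu_min, hu_max)
    return (vol - hu_min) / (hu_max - hu_min)

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice overlap between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```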
Collapse
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research CenterIran University of Medical SciencesTehranIran
| | - Dariush Askari
- Department of Radiology TechnologyShahid Beheshti University of Medical SciencesTehranIran
| | - Shakiba Moradi
- Research and Development DepartmentMed Fanavaran Plus Co.KarajIran
| | - Zahra Mansouri
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
| | - Masoumeh Pakbin
- Clinical Research Development CenterQom University of Medical SciencesQomIran
| | - Saleh Sandoughdaran
- Men's Health and Reproductive Health Research CenterShahid Beheshti University of Medical SciencesTehranIran
| | - Hamid Abdollahi
- Department of Radiologic Technology, Faculty of Allied MedicineKerman University of Medical SciencesKermanIran
| | - Amir Reza Radmard
- Department of RadiologyShariati Hospital, Tehran University of Medical SciencesTehranIran
| | - Kiara Rezaei‐Kalantari
- Rajaie Cardiovascular Medical and Research CenterIran University of Medical SciencesTehranIran
| | - Mostafa Ghelich Oghli
- Research and Development DepartmentMed Fanavaran Plus Co.KarajIran
- Department of Cardiovascular SciencesKU LeuvenLeuvenBelgium
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular ImagingGeneva University HospitalGenevaSwitzerland
- Geneva University NeurocenterGeneva UniversityGenevaSwitzerland
- Department of Nuclear Medicine and Molecular ImagingUniversity of Groningen, University Medical Center GroningenGroningenNetherlands
- Department of Nuclear MedicineUniversity of Southern DenmarkOdenseDenmark
| |
Collapse
|
281
|
Jadhav S, Deng G, Zawin M, Kaufman AE. COVID-view: Diagnosis of COVID-19 using Chest CT. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:227-237. [PMID: 34587075 PMCID: PMC8981756 DOI: 10.1109/tvcg.2021.3114851] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 06/13/2021] [Accepted: 08/08/2021] [Indexed: 05/02/2023]
Abstract
Significant work has been done towards deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems focused on supporting the dual visual+DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application specially tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lungs segmentation, localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques with DL support for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying the patients into positive/negative COVID-19 cases, which acts as a reading aid for the radiologist using COVID-view and provides the attention heatmap as an explainable DL output of the model. We designed and evaluated COVID-view through suggestions, close feedback, and case studies of real-world patient data conducted with expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infections. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and result in a practical system capable of handling real-world patient cases.
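The explainable attention heatmap mentioned above is commonly produced with a Grad-CAM-style procedure. The sketch below shows that generic technique with forward/backward hooks in PyTorch; it is an illustration, not COVID-view's actual explainability module.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Grad-CAM-style heatmap: weight the target layer's activations by the
    spatially pooled gradients of the chosen class score."""
    acts, grads = {}, {}

    def fwd_hook(_, __, output):
        acts["a"] = output

    def bwd_hook(_, grad_in, grad_out):
        grads["g"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image)                      # image: (1, C, H, W)
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, K, 1, 1)
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                            align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        h1.remove()
        h2.remove()
```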
Collapse
Affiliation(s)
| | - Gaofeng Deng
- Department of Computer ScienceStony Brook UniversityUSA
| | - Marlene Zawin
- Department of RadiologyStony Brook University HospitalUSA
| | | |
Collapse
|
282
|
An optimized CNN based automated COVID-19 lung infection identification technique from C.T. images. NOVEL AI AND DATA SCIENCE ADVANCEMENTS FOR SUSTAINABILITY IN THE ERA OF COVID-19 2022. [PMCID: PMC9068981 DOI: 10.1016/b978-0-323-90054-6.00010-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
The novel coronavirus disease, commonly known as COVID-19, is a highly contagious disease that has spread across the globe and brought great suffering to people of all ages. Diagnosing COVID-19 is a considerable challenge for the medical field because mutated forms of the virus present their symptoms in different ways. Imaging of the lungs is central to this diagnosis; in particular, the automatic detection of lung infection from chest X-rays gives healthcare professionals a comprehensive basis for developing hospital procedures to handle COVID-19. Computed tomography (CT) scans are used to diagnose the lung infection caused by the coronavirus, with the infected regions separated from the lung lesions. Measuring disease progression is imperative, yet it is challenging to track and treat accurately. Segmenting the contaminated regions from the CT slices remains difficult because of the large variation in the appearance of the infection and the low intensity contrast between infected tissue and normal tissue. This chapter aims to segment the infection in the lungs using SqueezeNet as the Convolutional Neural Network (CNN) to recognize the contaminated regions automatically, which may help improve the accuracy of CT-based assessment. Performance is assessed using Dice scores, sensitivity, specificity, and precision, and the results of the proposed model are compared with those of existing methods.
Collapse
|
283
|
Hu R, Gan J, Zhu X, Liu T, Shi X. Multi-task multi-modality SVM for early COVID-19 Diagnosis using chest CT data. Inf Process Manag 2022; 59:102782. [PMID: 34629687 PMCID: PMC8487772 DOI: 10.1016/j.ipm.2021.102782] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 09/17/2021] [Accepted: 09/23/2021] [Indexed: 01/08/2023]
Abstract
In the early diagnosis of the Coronavirus disease (COVID-19), it is of great importance for either distinguishing severe cases from mild cases or predicting the conversion time that mild cases would possibly convert to severe cases. This study investigates both of them in a unified framework by exploring the problems such as slight appearance difference between mild cases and severe cases, the interpretability, the High Dimension and Low Sample Size (HDLSS) data, and the class imbalance. To this end, the proposed framework includes three steps: (1) feature extraction which first conducts the hierarchical segmentation on the chest Computed Tomography (CT) image data and then extracts multi-modality handcrafted features for each segment, aiming at capturing the slight appearance difference from different perspectives; (2) data augmentation which employs the over-sampling technique to augment the number of samples corresponding to the minority classes, aiming at investigating the class imbalance problem; and (3) joint construction of classification and regression by proposing a novel Multi-task Multi-modality Support Vector Machine (MM-SVM) method to solve the issue of the HDLSS data and achieve the interpretability. Experimental analysis on two synthetic and one real COVID-19 data set demonstrated that our proposed framework outperformed six state-of-the-art methods in terms of binary classification and regression performance.
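The data augmentation step addresses class imbalance by over-sampling the minority classes before classification. A minimal scikit-learn sketch of random over-sampling followed by a plain SVM is given below; the toy data and the linear SVC stand in for the paper's handcrafted features and MM-SVM, which add multi-task and multi-modality coupling not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.utils import resample

def oversample_minority(X, y, random_state=0):
    """Randomly over-sample every class up to the size of the largest class."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        Xc = X[y == c]
        Xr = resample(Xc, replace=True, n_samples=n_max, random_state=random_state)
        Xs.append(Xr)
        ys.append(np.full(n_max, c))
    return np.vstack(Xs), np.concatenate(ys)

# Toy stand-in data: 100 mild vs. 10 severe cases with 50 handcrafted features each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(0.5, 1, (10, 50))])
y = np.array([0] * 100 + [1] * 10)

X_bal, y_bal = oversample_minority(X, y)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_bal, y_bal)
```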
Collapse
Affiliation(s)
- Rongyao Hu
- School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Massey University Albany Campus, Auckland 0745, New Zealand
| | - Jiangzhang Gan
- School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Massey University Albany Campus, Auckland 0745, New Zealand
| | - Xiaofeng Zhu
- School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Massey University Albany Campus, Auckland 0745, New Zealand
| | - Tong Liu
- Massey University Albany Campus, Auckland 0745, New Zealand
| | - Xiaoshuang Shi
- School of Computer Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
| |
Collapse
|
284
|
Li Y, Chen J, Wei D, Zhu Y, Wu J, Xiong J, Gang Y, Sun W, Xu H, Qian T, Ma K, Zheng Y. Mix-and-Interpolate: A Training Strategy to Deal With Source-Biased Medical Data. IEEE J Biomed Health Inform 2022; 26:172-182. [PMID: 34637384 PMCID: PMC8908883 DOI: 10.1109/jbhi.2021.3119325] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 08/27/2021] [Accepted: 10/05/2021] [Indexed: 12/04/2022]
Abstract
Till March 31st, 2021, the coronavirus disease 2019 (COVID-19) had reportedly infected more than 127 million people and caused over 2.5 million deaths worldwide. Timely diagnosis of COVID-19 is crucial for management of individual patients as well as containment of the highly contagious disease. Having realized the clinical value of non-contrast chest computed tomography (CT) for diagnosis of COVID-19, deep learning (DL) based automated methods have been proposed to aid the radiologists in reading the huge quantities of CT exams as a result of the pandemic. In this work, we address an overlooked problem for training deep convolutional neural networks for COVID-19 classification using real-world multi-source data, namely, the data source bias problem. The data source bias problem refers to the situation in which certain sources of data comprise only a single class of data, and training with such source-biased data may make the DL models learn to distinguish data sources instead of COVID-19. To overcome this problem, we propose MIx-aNd-Interpolate (MINI), a conceptually simple, easy-to-implement, efficient yet effective training strategy. The proposed MINI approach generates volumes of the absent class by combining the samples collected from different hospitals, which enlarges the sample space of the original source-biased dataset. Experimental results on a large collection of real patient data (1,221 COVID-19 and 1,520 negative CT images, and the latter consisting of 786 community acquired pneumonia and 734 non-pneumonia) from eight hospitals and health institutions show that: 1) MINI can improve COVID-19 classification performance upon the baseline (which does not deal with the source bias), and 2) MINI is superior to competing methods in terms of the extent of improvement.
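MINI synthesizes volumes of the class that is absent at a given source by combining samples from different hospitals. The one-liner below sketches a mixup-style linear interpolation under that reading; the Beta-distributed weight is an assumption, since the exact MINI operator is not spelled out here.

```python
import torch

def mix_and_interpolate(vol_a: torch.Tensor, vol_b: torch.Tensor,
                        alpha: float = 0.4) -> torch.Tensor:
    """Linearly interpolate two CT volumes drawn from different hospitals to
    synthesize a sample of the class absent at one source. The interpolation
    weight is drawn from a Beta distribution (mixup-style); the actual MINI
    operator in the paper may differ."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * vol_a + (1.0 - lam) * vol_b
```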
Collapse
Affiliation(s)
| | | | - Dong Wei
- Tencent Jarvis LabShenzhen518000China
| | | | | | | | - Yadong Gang
- Department of RadiologyZhongnan Hospital of Wuhan UniversityWuhan430071China
| | - Wenbo Sun
- Department of RadiologyZhongnan Hospital of Wuhan UniversityWuhan430071China
| | - Haibo Xu
- Department of RadiologyZhongnan Hospital of Wuhan UniversityWuhan430071China
| | | | - Kai Ma
- Tencent Jarvis LabShenzhen518000China
| | | |
Collapse
|
285
|
Gopatoti A, Vijayalakshmi P. Optimized chest X-ray image semantic segmentation networks for COVID-19 early detection. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:491-512. [PMID: 35213339 DOI: 10.3233/xst-211113] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
BACKGROUND Although detection of COVID-19 from chest X-ray radiography (CXR) images is faster than PCR sputum testing, the accuracy of detecting COVID-19 from CXR images is lacking in the existing deep learning models. OBJECTIVE This study aims to classify COVID-19 and normal patients from CXR images using semantic segmentation networks for detecting and labeling COVID-19 infected lung lobes in CXR images. METHODS For semantically segmenting infected lung lobes in CXR images for COVID-19 early detection, three structurally different deep learning (DL) networks, namely SegNet, U-Net and a hybrid CNN combining SegNet and U-Net, are proposed and investigated. Further, the optimized CXR image semantic segmentation networks, namely GWO SegNet, GWO U-Net, and GWO hybrid CNN, are developed with the grey wolf optimization (GWO) algorithm. The proposed DL networks are trained, tested, and validated without and with optimization on the openly available dataset that contains 2,572 COVID-19 CXR images, including 2,174 training images and 398 testing images. The DL networks and their GWO-optimized networks are also compared with other state-of-the-art models used to detect COVID-19 CXR images. RESULTS All optimized CXR image semantic segmentation networks for COVID-19 image detection developed in this study achieved detection accuracy higher than 92%. The results show the superiority of the optimized SegNet in segmenting COVID-19 infected lung lobes and classifying them with an accuracy of 98.08%, compared to the optimized U-Net and hybrid CNN. CONCLUSION The optimized DL networks have the potential to be utilised to identify COVID-19 disease more objectively and accurately using semantic segmentation of COVID-19 CXR images of the lungs.
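Grey wolf optimization drives the candidate solutions toward the three best wolves (alpha, beta, delta) with a decaying encircling coefficient. The sketch below implements that standard update for a generic objective; how wolf positions map onto network hyperparameters in the paper is not reproduced, and the toy quadratic objective is only a placeholder.

```python
import numpy as np

def grey_wolf_optimize(objective, dim, bounds, n_wolves=10, n_iters=50, seed=0):
    """Minimize `objective` over a box-constrained search space using the
    standard grey wolf update driven by the three best wolves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = [wolves[j].copy() for j in order[:3]]
        a = 2.0 - 2.0 * t / n_iters            # encircling coefficient decays to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    best = min(wolves, key=objective)
    return best, objective(best)

# Example: minimize a toy quadratic as a stand-in for the validation loss a
# real run would obtain by training a network with hyperparameters w.
best, val = grey_wolf_optimize(lambda w: float(np.sum((w - 0.3) ** 2)),
                               dim=2, bounds=(0.0, 1.0))
```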
Collapse
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Anna University, Chennai, Tamil Nadu, India
| | - P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
| |
Collapse
|
286
|
Avetisian M, Burenko I, Egorov K, Kokh V, Nesterov A, Nikolaev A, Ponomarchuk A, Sokolova E, Tuzhilin A, Umerenkov D. CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning. ACM TRANSACTIONS ON MANAGEMENT INFORMATION SYSTEMS 2021. [DOI: 10.1145/3467471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Analysis of chest CT scans can be used in detecting parts of lungs that are affected by infectious diseases such as COVID-19. Determining the volume of lungs affected by lesions is essential for formulating treatment recommendations and prioritizing patients by severity of the disease. In this article we adopted an approach based on using an ensemble of deep convolutional neural networks for segmentation of slices of lung CT scans. Using our models, we are able to segment the lesions, evaluate patients’ dynamics, estimate relative volume of lungs affected by lesions, and evaluate the lung damage stage. Our models were trained on data from different medical centers. We compared predictions of our models with those of six experienced radiologists, and our segmentation model outperformed most of them. On the task of classification of disease severity, our model outperformed all the radiologists.
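Estimating the relative volume of lungs affected by lesions from an ensemble of segmentation models can be done by averaging the per-model probability maps and measuring the lesion fraction inside the lung mask. The NumPy sketch below shows that generic computation; the threshold and array shapes are assumptions.

```python
import numpy as np

def ensemble_lesion_fraction(prob_maps, lung_mask, threshold=0.5):
    """Average lesion probability maps from several models, threshold them,
    and report the fraction of the lung occupied by lesions.
    prob_maps: list of (D, H, W) arrays in [0, 1]; lung_mask: binary (D, H, W)."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    lesion_mask = (mean_prob >= threshold) & lung_mask.astype(bool)
    lung_voxels = lung_mask.astype(bool).sum()
    return lesion_mask.sum() / max(lung_voxels, 1)
```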
Collapse
Affiliation(s)
| | | | | | | | | | - Aleksandr Nikolaev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies, Russia
| | | | | | - Alex Tuzhilin
- Sberbank AI Laboratory and New York University, New York, USA
| | | |
Collapse
|
287
|
Hao J, Xie J, Liu R, Hao H, Ma Y, Yan K, Liu R, Zheng Y, Zheng J, Liu J, Zhang J, Zhao Y. Automatic Sequence-Based Network for Lung Diseases Detection in Chest CT. Front Oncol 2021; 11:781798. [PMID: 34926297 PMCID: PMC8674429 DOI: 10.3389/fonc.2021.781798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/01/2021] [Indexed: 11/18/2022] Open
Abstract
Objective To develop an accurate and rapid computed tomography (CT)-based interpretable AI system for the diagnosis of lung diseases. Background Most existing AI systems focus only on viral pneumonia (e.g., COVID-19), ignoring other similar lung diseases, such as bacterial pneumonia (BP), which should also be detected during CT screening. In this paper, we propose a unified sequence-based pneumonia classification network, called SLP-Net, which utilizes consecutiveness information for the differential diagnosis of viral pneumonia (VP), BP, and normal control cases from chest CT volumes. Methods Considering consecutive images of a CT volume as a time sequence input, compared with previous 2D slice-based or 3D volume-based methods, our SLP-Net can effectively use the spatial information and does not need a large amount of training data to avoid overfitting. Specifically, sequential convolutional neural networks (CNNs) with multi-scale receptive fields are first utilized to extract a set of higher-level representations, which are then fed into a convolutional long short-term memory (ConvLSTM) module to construct axial dimensional feature maps. A novel adaptive-weighted cross-entropy loss (ACE) is introduced to optimize the output of the SLP-Net with a view to ensuring that as many valid features from the previous images as possible are encoded into the later CT image. In addition, we employ sequence attention maps for auxiliary classification to enhance the confidence level of the results and produce a case-level prediction. Results For evaluation, we constructed a dataset of 258 chest CT volumes with 153 VP, 42 BP, and 63 normal control cases, for a total of 43,421 slices. We implemented a comprehensive comparison between our SLP-Net and several state-of-the-art methods across the dataset. Our proposed method obtained significant performance without a large amount of data and outperformed other slice-based and volume-based approaches. The superior evaluation performance achieved in the classification experiments demonstrated the ability of our model in the differential diagnosis of VP, BP and normal cases.
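SLP-Net treats consecutive CT slices as a sequence, encoding each slice with a CNN and aggregating along the axial dimension with a ConvLSTM. The sketch below is a deliberately simplified stand-in that pools slice features and aggregates them with nn.LSTM; it only illustrates the sequence-based idea, not the paper's multi-scale CNNs, ConvLSTM, or adaptive-weighted cross-entropy loss.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """Simplified sequence-based classifier: a 2D CNN encodes each CT slice
    and a recurrent layer aggregates the slice sequence axially."""
    def __init__(self, n_classes: int = 3, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, volume):                 # volume: (B, T, 1, H, W)
        b, t = volume.shape[:2]
        x = volume.flatten(0, 1)               # (B*T, 1, H, W)
        feats = self.encoder(x).flatten(1)     # (B*T, 64)
        feats = self.proj(feats).view(b, t, -1)
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])           # case-level prediction
```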
Collapse
Affiliation(s)
- Jinkui Hao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.,School of Optical Technology, University of Chinese Academy of Sciences, Beijing, China
| | - Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Ri Liu
- Hwa Mei Hospital, University of Chinese Academy of Sciences, Ningbo, China
| | - Huaying Hao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.,School of Optical Technology, University of Chinese Academy of Sciences, Beijing, China
| | - Kun Yan
- Hwa Mei Hospital, University of Chinese Academy of Sciences, Ningbo, China
| | - Ruirui Liu
- School of Medicine, Ningbo University, Ningbo, China
| | - Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
| | - Jianjun Zheng
- School of Medicine, Ningbo University, Ningbo, China
| | - Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.,Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
| | - Jingfeng Zhang
- Hwa Mei Hospital, University of Chinese Academy of Sciences, Ningbo, China
| | - Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.,Zhejiang International Scientific and Technological Cooperative Base of Biomedical Materials and Technology, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.,Zhejiang Engineering Research Center for Biomedical Materials, Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| |
Collapse
|
288
|
Ren Q, Zhou B, Tian L, Guo W. Detection of COVID-19 with CT Images using Hybrid Complex Shearlet Scattering Networks. IEEE J Biomed Health Inform 2021; 26:194-205. [PMID: 34855604 DOI: 10.1109/jbhi.2021.3132157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
With the ongoing worldwide coronavirus disease 2019 (COVID-19) pandemic, it is desirable to develop effective algorithms for the automatic detection of COVID-19 with chest computed tomography (CT) images. As deep learning has achieved breakthrough results in numerous computer vision and image understanding tasks, a good choice is to consider diagnosis models based on deep learning. Recently, a considerable number of methods have indeed been proposed. However, training an accurate deep learning model requires a large-scale chest CT dataset, which is hard to collect due to the high contagiousness of COVID-19. To achieve improved COVID-19 detection performance, this paper proposes a hybrid framework that fuses the complex shearlet scattering transform (CSST) and a suitable convolutional neural network into a single model. The introduced CSST cascades complex shearlet transforms with modulus nonlinearities and low-pass filter convolutions to compute a sparse and locally invariant image representation. The features computed from the input chest CT images are discriminative for the detection of COVID-19. Furthermore, a wide residual network with a redesigned residual block (WR2N) is developed to learn more granular multiscale representations by applying it to scattering features. The combination of the model-based CSST and data-driven WR2N leads to a more convenient neural network for image representation, where the idea is to learn only the image parts that the CSST cannot handle instead of all parts. The experimental results obtained on two public chest CT datasets for COVID-19 detection demonstrate the superiority of the proposed method. We can obtain more accurate results than several state-of-the-art COVID-19 classification methods in terms of measures such as accuracy, the F1-score, and the area under the receiver operating characteristic curve.
Collapse
|
289
|
He J, Zhu Q, Zhang K, Yu P, Tang J. An evolvable adversarial network with gradient penalty for COVID-19 infection segmentation. Appl Soft Comput 2021; 113:107947. [PMID: 34658687 PMCID: PMC8507576 DOI: 10.1016/j.asoc.2021.107947] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 08/20/2021] [Accepted: 09/22/2021] [Indexed: 11/30/2022]
Abstract
COVID-19 infection segmentation has essential applications in determining the severity of a COVID-19 patient and can provide a necessary basis for doctors to adopt a treatment scheme. However, in clinical applications, infection segmentation is performed by human beings, which is time-consuming and generally introduces bias. In this paper, we developed a novel evolvable adversarial framework for COVID-19 infection segmentation. Three generator networks compose an evolutionary population to accommodate the current discriminator; that is, the generators evolve with different mutations, rather than a single adversarial objective, to provide sufficient gradient feedback. Compared with existing work that enforces a Lipschitz constraint by weight clipping, which may lead to gradient exploding or vanishing, the proposed model also incorporates a gradient penalty into the network, penalizing the norm of the discriminator's gradient with respect to its input. Experiments on several COVID-19 CT scan datasets verified that the proposed method achieved superior effectiveness and stability for COVID-19 infection segmentation.
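The gradient penalty referred to above is the standard WGAN-GP term that keeps the norm of the discriminator's gradient at interpolated inputs close to one. A minimal PyTorch sketch of that well-known penalty follows; integrating it into the evolutionary generator population is not shown.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """WGAN-GP style penalty: push the norm of the discriminator's gradient
    with respect to interpolated inputs towards 1."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()
```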
Collapse
Affiliation(s)
- Juanjuan He
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China
| | - Qi Zhu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China
| | - Kai Zhang
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China
| | - Piaoyao Yu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China
| | - Jinshan Tang
- Department of Health Administration and Policy George Mason University, Fairfax, VA, 22030, USA
| |
Collapse
|
290
|
Mu N, Wang H, Zhang Y, Jiang J, Tang J. Progressive global perception and local polishing network for lung infection segmentation of COVID-19 CT images. PATTERN RECOGNITION 2021; 120:108168. [PMID: 34305181 PMCID: PMC8272691 DOI: 10.1016/j.patcog.2021.108168] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 06/23/2021] [Accepted: 07/06/2021] [Indexed: 05/19/2023]
Abstract
In this paper, a progressive global perception and local polishing (PCPLP) network is proposed to automatically segment the COVID-19-caused pneumonia infections in computed tomography (CT) images. The proposed PCPLP follows an encoder-decoder architecture. Particularly, the encoder is implemented as a computationally efficient fully convolutional network (FCN). In this study, a multi-scale multi-level feature recursive aggregation (mmFRA) network is used to integrate multi-scale features (viz. global guidance features and local refinement features) with multi-level features (viz. high-level semantic features, middle-level comprehensive features, and low-level detailed features). Because of this innovative aggregation of features, an edge-preserving segmentation map can be produced in a boundary-aware multiple supervision (BMS) way. Furthermore, both global perception and local perception are devised. On the one hand, a global perception module (GPM) providing a holistic estimation of potential lung infection regions is employed to capture more complementary coarse-structure information from different pyramid levels by enlarging the receptive fields without substantially increasing the computational burden. On the other hand, a local polishing module (LPM), which provides a fine prediction of the segmentation regions, is applied to explicitly heighten the fine-detail information and reduce the dilution effect of boundary knowledge. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed PCPLP in boosting the learning ability to accurately identify lung-infected regions with clear contours. Our model is remarkably superior to state-of-the-art segmentation models, both quantitatively and qualitatively, on a real COVID-19 CT dataset.
Collapse
Affiliation(s)
- Nan Mu
- School of Computer Science, Sichuan Normal University, 610101 Chengdu, China
| | - Hongyu Wang
- School of Computer Science, Sichuan Normal University, 610101 Chengdu, China
| | - Yu Zhang
- School of Computer Science, Sichuan Normal University, 610101 Chengdu, China
| | - Jingfeng Jiang
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, United States
- Center for Biocomputing and Digital Health, Institute of Computing & and Cybersystems and Health Research Institute, Michigan Technological University, Houghton, MI 49931, United States
| | - Jinshan Tang
- Department of Health Administration and Policy, George Mason University, Fairfax, VA 22030, United States
| |
Collapse
|
291
|
Liu J, Dong B, Wang S, Cui H, Fan DP, Ma J, Chen G. COVID-19 lung infection segmentation with a novel two-stage cross-domain transfer learning framework. Med Image Anal 2021; 74:102205. [PMID: 34425317 PMCID: PMC8342869 DOI: 10.1016/j.media.2021.102205] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 06/07/2021] [Accepted: 07/28/2021] [Indexed: 12/11/2022]
Abstract
With the global outbreak of COVID-19 in early 2020, rapid diagnosis of COVID-19 has become the urgent need to control the spread of the epidemic. In clinical settings, lung infection segmentation from computed tomography (CT) images can provide vital information for the quantification and diagnosis of COVID-19. However, accurate infection segmentation is a challenging task due to (i) the low boundary contrast between infections and the surroundings, (ii) large variations of infection regions, and, most importantly, (iii) the shortage of large-scale annotated data. To address these issues, we propose a novel two-stage cross-domain transfer learning framework for the accurate segmentation of COVID-19 lung infections from CT images. Our framework consists of two major technical innovations, including an effective infection segmentation deep learning model, called nCoVSegNet, and a novel two-stage transfer learning strategy. Specifically, our nCoVSegNet conducts effective infection segmentation by taking advantage of attention-aware feature fusion and large receptive fields, aiming to resolve the issues related to low boundary contrast and large infection variations. To alleviate the shortage of the data, the nCoVSegNet is pre-trained using a two-stage cross-domain transfer learning strategy, which makes full use of the knowledge from natural images (i.e., ImageNet) and medical images (i.e., LIDC-IDRI) to boost the final training on CT images with COVID-19 infections. Extensive experiments demonstrate that our framework achieves superior segmentation accuracy and outperforms the cutting-edge models, both quantitatively and qualitatively.
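The two-stage cross-domain transfer strategy initializes from ImageNet weights and then from a medical-image checkpoint before the final COVID-19 fine-tuning. The sketch below illustrates that chain with a torchvision classification backbone standing in for nCoVSegNet; the weights string assumes torchvision 0.13 or newer, and the checkpoint path is a placeholder.

```python
import torch
import torchvision

def build_stage_model(num_classes: int):
    """Stage 1: start from ImageNet-pretrained weights (natural images)."""
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model

def transfer_to_next_stage(model, checkpoint_path: str):
    """Stages 2 and 3: initialize from the previous stage's checkpoint
    (e.g., medical-image pre-training), then fine-tune on COVID-19 CT data.
    `checkpoint_path` is a placeholder; strict=False tolerates a re-shaped head."""
    state = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state, strict=False)
    for p in model.parameters():          # fine-tune all layers (or freeze early ones)
        p.requires_grad_(True)
    return model
```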
Collapse
Affiliation(s)
- Jiannan Liu
- Department of Computer Science and Technology, Heilongjiang University, Harbin, China
| | - Bo Dong
- Center for Brain Imaging Science and Technology, Zhejiang University, Hangzhou, China
| | - Shuai Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Hui Cui
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
| | - Deng-Ping Fan
- College of Computer Science, Nankai University, Tianjin, China
| | - Jiquan Ma
- Department of Computer Science and Technology, Heilongjiang University, Harbin, China.
| | - Geng Chen
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China.
| |
Collapse
|
292
|
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Collapse
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore;
| | - Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia;
| | - Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India;
| | - Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia;
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
| | - Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
| | - Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA;
| | - U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore;
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
| |
Collapse
|
293
|
Pandya S, Sur A, Solke N. COVIDSAVIOR: A Novel Sensor-Fusion and Deep Learning Based Framework for Virus Outbreaks. Front Public Health 2021; 9:797808. [PMID: 34917585 PMCID: PMC8669395 DOI: 10.3389/fpubh.2021.797808] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Accepted: 11/02/2021] [Indexed: 12/24/2022] Open
Abstract
The presented deep learning and sensor-fusion based assistive technology (a smart face-mask and thermal scanning kiosk) protects individuals by automatically detecting face masks and scanning body temperature. Furthermore, the presented system issues a variety of notifications, such as an alarm, if an individual is not wearing a mask or if the measured temperature exceeds the standard body temperature threshold of 98.6°F (37°C). Design/methodology/approach-The presented deep learning and sensor-fusion-based approach can detect whether an individual is wearing a mask and notify security personnel by raising an alarm. Moreover, the smart tunnel is equipped with a thermal sensing unit embedded with a camera, which can measure an individual's body temperature in real time against the limits prescribed in WHO reports. Findings-The investigation results validate the performance of the presented smart face-mask and thermal scanning mechanism. The presented system can detect an outsider entering the building with or without a mask and alert the security control room by raising appropriate alarms. Furthermore, the presented smart epidemic tunnel is embedded with an intelligent algorithm that can perform real-time thermal scanning of an individual and store essential information in a cloud platform such as Google Firebase. Thus, the proposed system benefits society by saving time and helping to lower the spread of coronavirus.
Collapse
Affiliation(s)
- Sharnil Pandya
- Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
| |
Collapse
|
294
|
Zhang Q, Ren X, Wei B. Segmentation of infected region in CT images of COVID-19 patients based on QC-HC U-net. Sci Rep 2021; 11:22854. [PMID: 34819524 PMCID: PMC8613253 DOI: 10.1038/s41598-021-01502-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Accepted: 10/25/2021] [Indexed: 12/24/2022] Open
Abstract
Since the outbreak of COVID-19 in 2019, the rapid spread of the epidemic has brought huge challenges to medical institutions. If the pathological region in a COVID-19 CT image can be segmented automatically, it will help doctors quickly assess the patient's infection, thereby speeding up the diagnosis process. To segment the infected area automatically, we proposed a new network structure named QC-HC U-Net. First, we combine residual connections and dense connections to form a new connection method and apply it to both the encoder and the decoder. Second, we add hypercolumns in the decoder section. Compared with the benchmark 3D U-Net, the improved network can effectively avoid vanishing gradients while extracting more features. To mitigate insufficient data, resampling and data augmentation methods were selected to expand the datasets. We used 63 cases of MSD lung tumor data for training and testing, continuously validating to ensure the training effect of the model, and then selected 20 cases of public COVID-19 data for training and testing. Experimental results showed that in the segmentation of COVID-19, the specificity and sensitivity were 85.3% and 83.6%, respectively, and in the segmentation of MSD lung tumors, the specificity and sensitivity were 81.45% and 80.93%, respectively, without any overfitting.
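A minimal PyTorch sketch of the two building blocks the abstract names: a block combining residual and dense connections, and a hypercolumn assembled by upsampling and concatenating decoder features. Layer sizes, names, and the toy usage are illustrative assumptions, not the authors' QC-HC U-Net code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    """Combines a dense connection (concatenation) with a residual connection (addition)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv3d(2 * channels, channels, 3, padding=1)  # dense: sees x and f1

    def forward(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(torch.cat([x, f1], dim=1)))  # dense connection
        return x + f2                                        # residual connection

def hypercolumn(features, target_size):
    """Upsample decoder feature maps to a common size and concatenate them channel-wise."""
    upsampled = [F.interpolate(f, size=target_size, mode="trilinear",
                               align_corners=False) for f in features]
    return torch.cat(upsampled, dim=1)

# Toy usage on a small 3D patch.
x = torch.randn(1, 8, 16, 32, 32)
block = ResidualDenseBlock(8)
deep = F.avg_pool3d(block(x), 2)                  # stand-in for a deeper decoder stage
hc = hypercolumn([block(x), deep], target_size=(16, 32, 32))
print(hc.shape)  # torch.Size([1, 16, 16, 32, 32])
```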
Collapse
Affiliation(s)
- Qin Zhang
- School of Computer Science and Technology, Qilu University of Technology, Jinan, 250301, China
| | - Xiaoqiang Ren
- School of Computer Science and Technology, Qilu University of Technology, Jinan, 250301, China.
| | - Benzheng Wei
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Jinan, China.
| |
Collapse
|
295
|
Sahoo P, Roy I, Ahlawat R, Irtiza S, Khan L. Potential diagnosis of COVID-19 from chest X-ray and CT findings using semi-supervised learning. Phys Eng Sci Med 2021; 45:31-42. [PMID: 34780042 PMCID: PMC8591440 DOI: 10.1007/s13246-021-01075-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Accepted: 10/30/2021] [Indexed: 12/11/2022]
Abstract
COVID-19 is an infectious disease that has adversely affected public health and the economy across the world. On account of the highly infectious nature of the disease, rapid automated diagnosis of COVID-19 is urgently needed. A few recent findings suggest that chest X-rays and CT scans can be used by machine learning for the diagnosis of COVID-19. Herein, we employed semi-supervised learning (SSL) approaches to detect COVID-19 cases accurately by analyzing digital chest X-rays and CT scans. Our algorithm, COVIDCon, takes advantage of data augmentation, consistency regularization, and multi-contrastive learning. On a relatively small COVID-19 radiography dataset, which contains only 219 COVID-19-positive, 1341 normal, and 1345 viral pneumonia images, COVIDCon attains 97.07% average class prediction accuracy with 1000 labeled images, which is 7.65% better than the next best SSL method, virtual adversarial training. COVIDCon performs even better on a larger COVID-19 CT scan dataset that contains 82,767 images, achieving an excellent accuracy of 99.13% at 20,000 labels, which is 6.45% better than the next best pseudo-labeling approach. COVIDCon outperforms other state-of-the-art algorithms at every label count we investigated. These results demonstrate COVIDCon as the benchmark SSL algorithm for potential diagnosis of COVID-19 from chest X-rays and CT scans. Furthermore, COVIDCon performs exceptionally well in identifying COVID-19-positive cases from a completely unseen repository with a confirmed COVID-19 case history. COVIDCon may provide a fast, accurate, and reliable method for screening COVID-19 patients.
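A schematic Python sketch of the consistency-regularization component mentioned in the abstract: predictions on two augmentations of the same unlabeled batch are encouraged to agree, alongside a supervised loss on the labeled batch. The model, augmentation, and loss weight are placeholders rather than COVIDCon's actual implementation.

```python
import torch
import torch.nn.functional as F

def ssl_loss(model, x_labeled, y_labeled, x_unlabeled, augment, lam=1.0):
    # Supervised term on the small labeled set.
    sup = F.cross_entropy(model(x_labeled), y_labeled)
    # Consistency term: two stochastic augmentations of the same unlabeled batch
    # should yield similar class distributions.
    p1 = F.log_softmax(model(augment(x_unlabeled)), dim=1)
    p2 = F.softmax(model(augment(x_unlabeled)), dim=1).detach()
    cons = F.kl_div(p1, p2, reduction="batchmean")
    return sup + lam * cons

# Toy usage with a linear classifier on flattened 8x8 "images" and Gaussian-noise augmentation.
model = torch.nn.Linear(64, 3)
augment = lambda x: x + 0.05 * torch.randn_like(x)
xl, yl = torch.randn(4, 64), torch.randint(0, 3, (4,))
xu = torch.randn(16, 64)
print(ssl_loss(model, xl, yl, xu, augment).item())
```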
Collapse
Affiliation(s)
- Pracheta Sahoo
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, 75080, USA.
| | - Indranil Roy
- Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208-3113, USA
| | - Randeep Ahlawat
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, 75080, USA
| | - Saquib Irtiza
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, 75080, USA
| | - Latifur Khan
- Department of Computer Science, The University of Texas at Dallas, Richardson, TX, 75080, USA
| |
Collapse
|
296
|
Boulila W, Shah SA, Ahmad J, Driss M, Ghandorh H, Alsaeedi A, Al-Sarem M, Saeed F. Noninvasive Detection of Respiratory Disorder Due to COVID-19 at the Early Stages in Saudi Arabia. Electronics 2021; 10:2701. [DOI: 10.3390/electronics10212701] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The Kingdom of Saudi Arabia has suffered from COVID-19 disease as part of the global pandemic caused by severe acute respiratory syndrome coronavirus 2, and its economy has also suffered a heavy impact. Several measures were taken to help mitigate the impact and stimulate the economy. In this context, we present a safe and secure WiFi-sensing-based COVID-19 monitoring system exploiting commercially available low-cost wireless devices that can be deployed in different indoor settings within Saudi Arabia. We extracted different activities of daily living and respiratory rates from ubiquitous WiFi signals in terms of channel state information (CSI) and secured them from unauthorized access through permutation and diffusion with multiple substitution boxes using chaos theory. The experiments were performed on healthy participants. We used the variances of the amplitude information of the CSI data and evaluated their security using several security parameters, such as the correlation coefficient, mean-squared error (MSE), peak signal-to-noise ratio (PSNR), entropy, number of pixels change rate (NPCR), and unified average change intensity (UACI). These security metrics, for example lower correlation and higher entropy, indicate stronger security of the proposed encryption method. Moreover, the NPCR and UACI values were higher than 99% and 30, respectively, which also confirmed the security strength of the encrypted information.
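For reference, the security metrics reported above have standard definitions that can be computed in a few lines of NumPy. The sketch below follows those common definitions; the variable names and the random example data are assumptions, not the authors' code.

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit array; an ideal cipher image approaches 8 bits."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    """Number of pixels change rate between two equal-size cipher images (ideally > 99%)."""
    return float((c1 != c2).mean() * 100)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    """Unified average change intensity for 8-bit data (ideally around 33%)."""
    diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
    return float(diff.mean() / 255 * 100)

# Example with two random 8-bit "cipher images".
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, (128, 128), dtype=np.uint8)
c2 = rng.integers(0, 256, (128, 128), dtype=np.uint8)
print(shannon_entropy(c1), npcr(c1, c2), uaci(c1, c2))
```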
Collapse
Affiliation(s)
- Wadii Boulila
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- RIADI Laboratory, National School of Computer Science, University of Manouba, Manouba 2010, Tunisia
| | - Syed Aziz Shah
- Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
| | - Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
| | - Maha Driss
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
| | - Hamza Ghandorh
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
| | - Abdullah Alsaeedi
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
| | - Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Department of Computer Science, Saba’a Region University, Mareb, Yemen
| | - Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
| |
Collapse
|
297
|
Uncertainty-guided graph attention network for parapneumonic effusion diagnosis. Med Image Anal 2021; 75:102217. [PMID: 34775280 DOI: 10.1016/j.media.2021.102217] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 08/12/2021] [Accepted: 08/23/2021] [Indexed: 01/08/2023]
Abstract
Parapneumonic effusion (PPE) is a common condition that causes death in patients hospitalized with pneumonia. Rapid distinction of complicated PPE (CPPE) from uncomplicated PPE (UPPE) in computed tomography (CT) scans is of great importance for the management and medical treatment of PPE. However, UPPE and CPPE display similar appearances in CT scans, and it is challenging to distinguish CPPE from UPPE via a single 2D CT image, whether attempted by a human expert or by any of the existing disease classification approaches. 3D convolutional neural networks (CNNs) can utilize the entire 3D volume for classification; however, they typically suffer from the intrinsic defect of over-fitting. Therefore, it is important to develop a method that not only overcomes the heavy memory and computational requirements of 3D CNNs but also leverages the 3D information. In this paper, we propose an uncertainty-guided graph attention network (UG-GAT) that can automatically extract and integrate information from all CT slices in a 3D volume for classification into UPPE, CPPE, and normal control cases. Specifically, we frame the distinction of different cases as a graph classification problem. Each individual is represented as a directed graph with a topological structure, where vertices represent the image features of slices and edges encode the spatial relationships between them. To estimate the contribution of each slice, we first extract the slice representations with uncertainty using a Bayesian CNN; we then use the uncertainty information to weight each slice during the graph prediction phase to enable more reliable decision-making. We construct a dataset consisting of 302 chest CT volumes from different subjects (99 UPPE, 99 CPPE, and 104 normal control cases), and to the best of our knowledge, this is the first attempt to classify UPPE, CPPE, and normal cases using a deep learning method. Extensive experiments show that our approach is lightweight in its computational demands and outperforms accepted state-of-the-art methods by a large margin. Code is available at https://github.com/iMED-Lab/UG-GAT.
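A simplified PyTorch sketch of the uncertainty-weighting idea described above: per-slice features are aggregated with weights that shrink as the Bayesian CNN's per-slice uncertainty grows, before classification into UPPE, CPPE, or normal. This stands in for the full graph attention network; names, shapes, and the softmax weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedPooling(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int = 3):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_classes)  # UPPE / CPPE / normal control

    def forward(self, slice_feats: torch.Tensor, slice_uncertainty: torch.Tensor):
        # slice_feats: (num_slices, feat_dim); slice_uncertainty: (num_slices,)
        weights = torch.softmax(-slice_uncertainty, dim=0)        # low uncertainty -> high weight
        volume_feat = (weights.unsqueeze(1) * slice_feats).sum(dim=0)
        return self.classifier(volume_feat)

# Toy usage: 40 slices with 128-dimensional features and random uncertainties.
feats = torch.randn(40, 128)
unc = torch.rand(40)
print(UncertaintyWeightedPooling(128)(feats, unc).shape)  # torch.Size([3])
```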
Collapse
|
298
|
Zhao S, Li Z, Chen Y, Zhao W, Xie X, Liu J, Zhao D, Li Y. SCOAT-Net: A novel network for segmenting COVID-19 lung opacification from CT images. Pattern Recognition 2021; 119:108109. [PMID: 34127870 PMCID: PMC8189738 DOI: 10.1016/j.patcog.2021.108109] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 05/07/2021] [Accepted: 06/09/2021] [Indexed: 02/05/2023]
Abstract
Automatic segmentation of lung opacification from computed tomography (CT) images shows excellent potential for quickly and accurately quantifying Coronavirus disease 2019 (COVID-19) infection and for assessing disease development and treatment response. However, several challenges remain, including the complex and variable features of the opacity regions, the small difference between infected and healthy tissues, and the noise in CT images. Due to limited medical resources, it is impractical to obtain a large amount of data in a short time, which further hinders the training of deep learning models. To address these challenges, we proposed a novel spatial- and channel-wise coarse-to-fine attention network (SCOAT-Net), inspired by the biological vision mechanism, for the segmentation of COVID-19 lung opacification from CT images. With UNet++ as the basic structure, our SCOAT-Net introduces specially designed spatial-wise and channel-wise attention modules, which collaboratively boost the attention learning of the network and extract efficient features of the infected opacification regions at the pixel and channel levels. Experiments show that our proposed SCOAT-Net achieves better results than several state-of-the-art image segmentation networks and has acceptable generalization ability.
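A generic PyTorch sketch of channel-wise and spatial-wise attention of the kind the abstract describes; it does not reproduce SCOAT-Net's coarse-to-fine design, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(            # squeeze-and-excitation style channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(            # single-channel spatial mask
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)   # re-weight channels
        x = x * self.spatial_gate(x)   # re-weight spatial locations
        return x

# Toy usage on a feature map from a UNet++-style encoder stage.
print(ChannelSpatialAttention(16)(torch.randn(2, 16, 64, 64)).shape)  # torch.Size([2, 16, 64, 64])
```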
Collapse
Affiliation(s)
- Shixuan Zhao
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Zhidan Li
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Yang Chen
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
| | - Xingzhi Xie
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
| | - Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
- Department of Radiology Quality Control Center, Changsha, Hunan, China
| | - Di Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Yongjie Li
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
299
|
Ding W, Abdel-Basset M, Hawash H. RCTE: A reliable and consistent temporal-ensembling framework for semi-supervised segmentation of COVID-19 lesions. Inf Sci (N Y) 2021; 578:559-573. [PMID: 34305162 PMCID: PMC8294559 DOI: 10.1016/j.ins.2021.07.059] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 06/17/2021] [Accepted: 07/17/2021] [Indexed: 12/16/2022]
Abstract
The segmentation of COVID-19 lesions from computed tomography (CT) scans is crucial for developing an efficient automated diagnosis system. Deep learning (DL) has shown success in different segmentation tasks. However, an efficient DL approach requires a large amount of accurately annotated data, which is difficult to aggregate owing to the urgent situation of COVID-19. Inaccurate annotation can easily occur without experts, and segmentation performance is substantially worsened by noisy annotations. Therefore, this study presents a reliable and consistent temporal-ensembling (RCTE) framework for semi-supervised lesion segmentation. A segmentation network is integrated into a teacher-student architecture to segment infection regions from a limited number of annotated CT scans and a large number of unannotated CT scans. The network generates both reliable and unreliable targets, and handling them uniformly can degrade performance. To address this, a reliable teacher-student architecture is introduced, in which the reliable teacher network is the exponential moving average (EMA) of a reliable student network, updated reliably by restraining the student's contribution to the EMA when its loss is large. We also present a noise-aware loss, based on improvements to the generalized cross-entropy loss, to keep segmentation performance robust to noisy annotations. Comprehensive analysis validates the robustness of RCTE over recent cutting-edge semi-supervised segmentation techniques, with a 65.87% Dice score.
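A minimal PyTorch sketch of the reliable EMA teacher update described above: the teacher is an exponential moving average of the student, and the student's contribution is restrained (here, simply skipped) when its loss exceeds a threshold. The gating rule, decay rate, and names are assumptions for illustration, not the RCTE implementation.

```python
import torch

@torch.no_grad()
def update_ema_teacher(teacher, student, student_loss: float,
                       alpha: float = 0.99, loss_threshold: float = 1.0):
    """EMA update of the teacher, gated on the student's current loss."""
    if student_loss > loss_threshold:
        return  # unreliable student step: keep the teacher unchanged
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

# Toy usage with two identical tiny networks.
student = torch.nn.Linear(8, 2)
teacher = torch.nn.Linear(8, 2)
teacher.load_state_dict(student.state_dict())
update_ema_teacher(teacher, student, student_loss=0.4)   # applied: loss below threshold
update_ema_teacher(teacher, student, student_loss=2.3)   # skipped: loss above threshold
```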
Collapse
Affiliation(s)
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong 226019, China
| | - Mohamed Abdel-Basset
- Zagazig University, Shaibet an Nakareyah, Zagazig 2, 44519 Ash Sharqia Governorate, Egypt
| | - Hossam Hawash
- Zagazig University, Shaibet an Nakareyah, Zagazig 2, 44519 Ash Sharqia Governorate, Egypt
| |
Collapse
|
300
|
Gao Y, Wang H, Liu X, Huang N, Wang G, Zhang S. A Denoising Self-supervised Approach for COVID-19 Pneumonia Lesion Segmentation with Limited Annotated CT Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3705-3708. [PMID: 34892041 DOI: 10.1109/embc46164.2021.9630215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The coronavirus disease 2019 (COVID-19) has become a global pandemic. The segmentation of COVID-19 pneumonia lesions from CT images is important for quantitative evaluation and assessment of the infection. Although many deep learning segmentation methods have been proposed, their performance is limited when pixel-level annotations are hard to obtain. To alleviate the performance limitation brought by the lack of pixel-level annotation in the COVID-19 pneumonia lesion segmentation task, we construct a denoising self-supervised framework composed of a pretext denoising task and a downstream segmentation task. Through the pretext denoising task, semantic features are learned from massive unlabelled data in an unsupervised manner, providing an additional supervisory signal for the downstream segmentation task. Experimental results showed that our method can effectively leverage unlabelled images to improve segmentation performance, and it outperformed reconstruction-based self-supervised learning when only a small set of training images is annotated. Clinical relevance-The proposed method can effectively leverage unlabelled images to improve performance for COVID-19 pneumonia lesion segmentation when only a small set of CT images is annotated.
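A schematic PyTorch sketch of the pretext denoising task described above: an encoder-decoder is trained to remove synthetic noise from unlabelled slices, after which the encoder would initialize the downstream segmentation network. The architecture, noise model, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small encoder-decoder used only for the pretext denoising task.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(16, 1, 3, padding=1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def denoising_pretext_step(clean_batch: torch.Tensor, noise_std: float = 0.1) -> float:
    """One training step: corrupt the batch with Gaussian noise and reconstruct the clean input."""
    noisy = clean_batch + noise_std * torch.randn_like(clean_batch)
    recon = decoder(encoder(noisy))
    loss = nn.functional.mse_loss(recon, clean_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: one step on a random batch of unlabelled 64x64 slices.
print(denoising_pretext_step(torch.randn(4, 1, 64, 64)))
# Afterwards, `encoder` weights would initialize the segmentation network's encoder.
```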
Collapse
|