1. Nur-A-Alam M, Nasir MK, Ahsan M, Based MA, Haider J, Kowalski M. Ensemble classification of integrated CT scan datasets in detecting COVID-19 using feature fusion from contourlet transform and CNN. Sci Rep 2023; 13:20063. [PMID: 37973820] [PMCID: PMC10654719] [DOI: 10.1038/s41598-023-47183-9]
Abstract
The COVID-19 disease caused by coronavirus is constantly changing due to the emergence of different variants, and thousands of people are dying every day worldwide. Early detection of this new form of pulmonary disease can reduce the mortality rate. In this paper, an automated method based on machine learning (ML) and deep learning (DL) has been developed to detect COVID-19 using computed tomography (CT) scan images extracted from three publicly available datasets (a total of 11,407 images; 7397 COVID-19 images and 4010 normal images). An unsupervised clustering approach, a modified region-based clustering technique, has been proposed for segmenting COVID-19 CT scan images. Furthermore, the contourlet transform and a convolutional neural network (CNN) have been employed to extract features individually from the segmented CT scan images and to fuse them into one feature vector. A binary differential evolution (BDE) approach has been employed as a feature optimization technique to obtain comprehensible features from the fused feature vector. Finally, an ML/DL-based ensemble classifier using the bagging technique has been employed to detect COVID-19 from the CT images. Fivefold and generalization cross-validation techniques have been used for validation. Classification experiments have also been conducted with several pre-trained models (AlexNet, ResNet50, GoogleNet, VGG16, VGG19), and the ensemble classifier with the fused features provided state-of-the-art performance with an accuracy of 99.98%.
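As a rough, generic illustration of the fusion-and-bagging stage described in this abstract (not the authors' code), the sketch below concatenates two precomputed feature matrices standing in for contourlet and CNN features, applies a simple univariate selector in place of the BDE optimizer, and trains a bagged ensemble; all array shapes, names, and labels are invented placeholders.

```python
# Hedged sketch: feature fusion + bagged ensemble with stand-in features and a
# simple selector in place of binary differential evolution.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images = 200
contourlet_feats = rng.normal(size=(n_images, 64))    # placeholder contourlet texture features
cnn_feats = rng.normal(size=(n_images, 128))          # placeholder CNN embeddings
labels = rng.integers(0, 2, size=n_images)            # 0 = normal, 1 = COVID-19 (synthetic)

fused = np.concatenate([contourlet_feats, cnn_feats], axis=1)   # feature-level fusion

# Stand-in for the BDE optimizer: keep the k most discriminative features.
selected = SelectKBest(f_classif, k=50).fit_transform(fused, labels)

ensemble = BaggingClassifier(n_estimators=25, random_state=0)   # bagged decision trees by default
scores = cross_val_score(ensemble, selected, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```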
Affiliation(s)
- Md Nur-A-Alam
  - Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Mostofa Kamal Nasir
  - Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Mominul Ahsan
  - Department of Computer Science, University of York, Deramore Lane, York, YO10 5GH, UK
- Md Abdul Based
  - Department of Computer Science & Engineering, Dhaka International University, Dhaka, 1205, Bangladesh
- Julfikar Haider
  - Department of Engineering, Manchester Metropolitan University, Chester St, Manchester, M1 5GD, UK
- Marcin Kowalski
  - Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, Warsaw, Poland
2. Keicher M, Burwinkel H, Bani-Harouni D, Paschali M, Czempiel T, Burian E, Makowski MR, Braren R, Navab N, Wendler T. Multimodal graph attention network for COVID-19 outcome prediction. Sci Rep 2023; 13:19539. [PMID: 37945590] [PMCID: PMC10636061] [DOI: 10.1038/s41598-023-46625-8]
Abstract
When dealing with a newly emerging disease such as COVID-19, the impact of patient- and disease-specific factors (e.g., body weight or known co-morbidities) on the immediate course of the disease is largely unknown. An accurate prediction of the most likely individual disease progression can improve the planning of limited resources and finding the optimal treatment for patients. In the case of COVID-19, the need for intensive care unit (ICU) admission of pneumonia patients can often only be determined on short notice by acute indicators such as vital signs (e.g., breathing rate, blood oxygen levels), whereas statistical analysis and decision support systems that integrate all of the available data could enable an earlier prognosis. To this end, we propose a holistic, multimodal graph-based approach combining imaging and non-imaging information. Specifically, we introduce a multimodal similarity metric to build a population graph that shows a clustering of patients. For each patient in the graph, we extract radiomic features from a segmentation network that also serves as a latent image feature encoder. Together with clinical patient data like vital signs, demographics, and lab results, these modalities are combined into a multimodal representation of each patient. This feature extraction is trained end-to-end with an image-based Graph Attention Network to process the population graph and predict the COVID-19 patient outcomes: admission to ICU, need for ventilation, and mortality. To combine multiple modalities, radiomic features are extracted from chest CTs using a segmentation neural network. Results on a dataset collected in Klinikum rechts der Isar in Munich, Germany and the publicly available iCTCF dataset show that our approach outperforms single modality and non-graph baselines. Moreover, our clustering and graph attention increases understanding of the patient relationships within the population graph and provides insight into the network's decision-making process.
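The sketch below is a generic illustration, not the authors' model, of the two ingredients named above: a population graph built from per-patient multimodal feature vectors (here a simple k-NN graph over concatenated features) and a single-head graph-attention layer that aggregates information across connected patients. All dimensions, the similarity choice, and the toy data are assumptions.

```python
# Hedged sketch: k-NN population graph + one GAT-style attention layer.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph

def build_population_graph(features, k=5):
    """k-NN adjacency (symmetrised, with self-loops) from per-patient feature vectors."""
    adj = kneighbors_graph(features, n_neighbors=k, mode="connectivity").toarray()
    adj = ((adj + adj.T) > 0) | np.eye(len(features), dtype=bool)
    return torch.from_numpy(adj)

class GATLayer(nn.Module):
    """Single-head graph attention layer operating on a dense adjacency mask."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                   # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)
        scores = scores.masked_fill(~adj, float("-inf"))   # attend only to graph neighbours
        alpha = torch.softmax(scores, dim=1)
        return alpha @ h                                   # weighted neighbour aggregation

# Toy usage: 30 patients, imaging + clinical features concatenated per patient.
feats = np.random.rand(30, 24).astype("float32")
adj = build_population_graph(feats, k=4)
out = GATLayer(24, 16)(torch.from_numpy(feats), adj)
print(out.shape)                                           # torch.Size([30, 16])
```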
Affiliation(s)
- Matthias Keicher
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
- Hendrik Burwinkel
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
- David Bani-Harouni
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
- Magdalini Paschali
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94304, USA
- Tobias Czempiel
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
- Egon Burian
  - Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany
- Marcus R Makowski
  - Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany
- Rickmer Braren
  - Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany
- Nassir Navab
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
- Thomas Wendler
  - Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
  - Department of Diagnostic and Interventional Radiology and Neuroradiology, Clinical Computational Medical Imaging Research, University Hospital Augsburg, Stenglinstr. 2, 86156, Augsburg, Germany
3. Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422] [PMCID: PMC10486542] [DOI: 10.3390/healthcare11172388]
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
  - 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
  - School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
  - Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
4. Wang C, Liu S, Tang Y, Yang H, Liu J. Diagnostic Test Accuracy of Deep Learning Prediction Models on COVID-19 Severity: Systematic Review and Meta-Analysis. J Med Internet Res 2023; 25:e46340. [PMID: 37477951] [PMCID: PMC10403760] [DOI: 10.2196/46340]
Abstract
BACKGROUND Deep learning (DL) prediction models hold great promise in the triage of COVID-19. OBJECTIVE We aimed to evaluate the diagnostic test accuracy of DL prediction models for assessing and predicting the severity of COVID-19. METHODS We searched PubMed, Scopus, LitCovid, Embase, Ovid, and the Cochrane Library for studies published from December 1, 2019, to April 30, 2022. Studies that used DL prediction models to assess or predict COVID-19 severity were included, while those without diagnostic test accuracy analysis or severity dichotomies were excluded. QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2), PROBAST (Prediction Model Risk of Bias Assessment Tool), and funnel plots were used to estimate the bias and applicability. RESULTS A total of 12 retrospective studies involving 2006 patients reported the cross-sectionally assessed value of DL on COVID-19 severity. The pooled sensitivity and area under the curve were 0.92 (95% CI 0.89-0.94; I2=0.00%) and 0.95 (95% CI 0.92-0.96), respectively. A total of 13 retrospective studies involving 3951 patients reported the longitudinal predictive value of DL for disease severity. The pooled sensitivity and area under the curve were 0.76 (95% CI 0.74-0.79; I2=0.00%) and 0.80 (95% CI 0.76-0.83), respectively. CONCLUSIONS DL prediction models can help clinicians identify potentially severe cases for early triage. However, high-quality research is lacking. TRIAL REGISTRATION PROSPERO CRD42022329252; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022329252.
Affiliation(s)
- Changyu Wang
  - Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China
  - West China College of Stomatology, Sichuan University, Chengdu, China
- Siru Liu
  - Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States
- Yu Tang
  - Xiangya School of Medicine, Central South University, Changsha, China
- Hao Yang
  - Information Center, West China Hospital, Sichuan University, Chengdu, China
- Jialin Liu
  - Department of Medical Informatics, West China Medical School, Sichuan University, Chengdu, China
  - Information Center, West China Hospital, Sichuan University, Chengdu, China
5. Vinod DN, Prabaharan SRS. Elucidation of infection asperity of CT scan images of COVID-19 positive cases: A Machine Learning perspective. Sci Afr 2023; 20:e01681. [PMID: 37192886] [PMCID: PMC10150416] [DOI: 10.1016/j.sciaf.2023.e01681]
Abstract
Owing to the profoundly contagious nature of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection, an enormous number of individuals wait in line for Computed Tomography (CT) scan assessment, which overburdens medical practitioners and radiologists, adversely influences the patient's remedy and diagnosis, and hampers containment of the epidemic. Medical facilities like intensive care systems and mechanical ventilators are strained by the highly infectious disease. It is therefore imperative to characterize patients according to their asperity (severity) levels. This article presents a novel implementation of a threshold-based image segmentation technique and a random forest classifier for identifying the asperity of COVID-19 contamination. With the help of the image segmentation model and the machine learning classifier, COVID-19 individuals can be identified and classified into three asperity classes, namely early, progressive, and advanced, with an accuracy of 95.5% using a chest CT scan image database. Experimental outcomes on an adequately large number of CT scan images demonstrate the adequacy of the machine learning mechanism developed and recommended to identify coronavirus severity.
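A minimal, hypothetical sketch of the general pattern described here (intensity-threshold features from a segmented lung region feeding a random forest) is shown below; the HU window, feature set, masks, and labels are invented placeholders rather than the authors' settings.

```python
# Hedged sketch: threshold-derived features + random forest severity classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def lesion_features(ct_slice, lung_mask, lo=-700, hi=-250):
    """Fraction and spread of lung-pixel HU values inside a hypothetical lesion window."""
    lung = ct_slice[lung_mask > 0]
    in_window = (lung > lo) & (lung < hi)
    return [in_window.mean(), lung.mean(), lung.std()]

rng = np.random.default_rng(0)
slices = rng.normal(-650, 200, size=(120, 128, 128))      # synthetic HU-valued CT slices
masks = np.ones_like(slices)                               # synthetic lung masks (all-lung)
X = np.array([lesion_features(s, m) for s, m in zip(slices, masks)])
y = rng.integers(0, 3, size=120)                           # 0=early, 1=progressive, 2=advanced (assumed)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```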
Affiliation(s)
- Dasari Naga Vinod
  - Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamilnadu 600062, India
- S R S Prabaharan
  - Sathyabama Centre for Advanced Studies, Sathyabama Institute of Science and Technology, Rajiv Gandhi Salai, Chennai, Tamilnadu 600119, India
6. Murugappan M, Bourisly AK, Prakash NB, Sumithra MG, Acharya UR. Automated semantic lung segmentation in chest CT images using deep neural network. Neural Comput Appl 2023; 35:15343-15364. [PMID: 37273912] [PMCID: PMC10088735] [DOI: 10.1007/s00521-023-08407-1]
Abstract
Lung segmentation algorithms play a significant role in segmenting the infected regions in the lungs. This work aims to develop a computationally efficient and robust deep learning model for lung segmentation using chest computed tomography (CT) images with DeepLabV3+ networks for two-class (background and lung field) and four-class (ground-glass opacities, background, consolidation, and lung field) segmentation. In this work, we investigate the performance of the DeepLabV3+ network with five pretrained backbones: Xception, ResNet-18, Inception-ResNet-v2, MobileNet-v2, and ResNet-50. A publicly available COVID-19 database that contains 750 chest CT images and corresponding pixel-labeled images is used to develop the deep learning model. The segmentation performance has been assessed using five measures: Intersection over Union (IoU), weighted IoU, balanced F1 score, pixel accuracy, and global accuracy. The experimental results confirm that the DeepLabV3+ network with ResNet-18 and a batch size of 8 has higher performance for two-class segmentation, while the DeepLabV3+ network coupled with ResNet-50 and a batch size of 16 yields better results for four-class segmentation compared to the other pretrained networks. Moreover, a ResNet with fewer layers is highly adequate for developing a more robust lung segmentation network with lower computational complexity than the conventional DeepLabV3+ network with Xception. The present work proposes a unified DeepLabV3+ network to automatically delineate the two and four different regions in CT images of COVID-19 patients. The developed automated segmentation model can be further developed into a clinical diagnosis system for COVID-19 and assist clinicians in providing an accurate second-opinion COVID-19 diagnosis.
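The IoU and Dice measures cited above can be computed per class directly from predicted and reference label maps; the snippet below is a generic evaluation sketch with synthetic maps, not the paper's code.

```python
# Hedged sketch: per-class IoU and Dice for a multi-class segmentation map.
import numpy as np

def iou_and_dice(pred, target, num_classes):
    scores = {}
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        denom = p.sum() + t.sum()
        scores[c] = {
            "IoU": inter / union if union else np.nan,
            "Dice": 2 * inter / denom if denom else np.nan,
        }
    return scores

# Synthetic 4-class maps: background, lung field, ground-glass opacity, consolidation.
pred = np.random.randint(0, 4, size=(256, 256))
target = np.random.randint(0, 4, size=(256, 256))
print(iou_and_dice(pred, target, num_classes=4))
```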
Affiliation(s)
- M. Murugappan
  - Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha, Kuwait
  - Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Science, Technology, and Advanced Studies, Chennai, India
  - Centre of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, 02600 Perlis, Malaysia
- Ali K. Bourisly
  - Department of Physiology, Kuwait University, Kuwait City, Kuwait
- N. B. Prakash
  - Department of Electrical and Electronics Engineering, National Engineering College, Kovilpatti, Tamil Nadu, India
- M. G. Sumithra
  - Department of Biomedical Engineering, Dr. N. G. P. Institute of Technology, Coimbatore, Tamilnadu, India
- U. Rajendra Acharya
  - Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore, Singapore
  - Department of Biomedical Engineering, School of Science and Technology, Singapore School of Social Sciences, Singapore, Singapore
  - Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan
7. Chen Y, Feng L, Zheng C, Zhou T, Liu L, Liu P, Chen Y. LDANet: Automatic lung parenchyma segmentation from CT images. Comput Biol Med 2023; 155:106659. [PMID: 36791550] [DOI: 10.1016/j.compbiomed.2023.106659]
Abstract
Automatic segmentation of the lung parenchyma from computed tomography (CT) images is helpful for the subsequent diagnosis and treatment of patients. In this paper, based on a deep learning algorithm, a lung dense attention network (LDANet) is proposed with two mechanisms: residual spatial attention (RSA) and gated channel attention (GCA). RSA is utilized to weight the spatial information of the lung parenchyma and suppress feature activation in irrelevant regions, while the weights of each channel are adaptively calibrated using GCA to implicitly predict potential key features. Then, a dual attention guidance module (DAGM) is designed to maximize the integration of the advantages of both mechanisms. In addition, LDANet introduces a lightweight dense block (LDB) that reuses feature information and a positioned transpose block (PTB) that realizes accurate positioning and gradually restores the image resolution until the predicted segmentation map is generated. Experiments are conducted on two public datasets, LIDC-IDRI and COVID-19 CT Segmentation, on which LDANet achieves Dice similarity coefficient values of 0.98430 and 0.98319, respectively, outperforming a state-of-the-art lung segmentation model. Additionally, the effectiveness of the main components of LDANet is demonstrated through ablation experiments.
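As a generic analogue of the gated channel attention idea described above (adaptively recalibrating channel weights), the sketch below shows a squeeze-and-excitation-style gate; it is illustrative only and not the LDANet implementation, and all shapes are assumed.

```python
# Hedged sketch: squeeze-and-excitation-style channel gate (generic channel attention).
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))               # global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                         # recalibrate each channel

feat = torch.randn(2, 64, 32, 32)
print(ChannelGate(64)(feat).shape)           # torch.Size([2, 64, 32, 32])
```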
Affiliation(s)
- Ying Chen
  - School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Longfeng Feng
  - School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Cheng Zheng
  - School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Taohui Zhou
  - School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Lan Liu
  - Department of Medical Imaging, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Pengfei Liu
  - Department of Medical Imaging, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Yi Chen
  - Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University, Wenzhou, 325035, PR China
8. Deb SD, Jha RK, Kumar R, Tripathi PS, Talera Y, Kumar M. CoVSeverity-Net: an efficient deep learning model for COVID-19 severity estimation from Chest X-Ray images. Res Biomed Eng 2023. [PMCID: PMC9901380] [DOI: 10.1007/s42600-022-00254-8]
Abstract
Purpose COVID-19 is not going anywhere and is slowly becoming a part of our life. The World Health Organization declared it a pandemic in 2020, and it has affected all of us in many ways. Several deep learning techniques have been developed to detect COVID-19 from Chest X-Ray images. COVID-19 infection severity scoring can aid in establishing the optimum course of treatment and care for a positive patient, as not all COVID-19 positive patients require special medical attention. Still, very few works have been reported that estimate the severity of the disease from Chest X-Ray images. The unavailability of a large-scale dataset might be a reason. Methods We propose CoVSeverity-Net, a deep learning-based architecture for predicting the severity of COVID-19 from Chest X-ray images. CoVSeverity-Net is trained on a public COVID-19 dataset, curated by experienced radiologists for severity estimation. For that, a large publicly available dataset is collected and divided into three levels of severity, namely Mild, Moderate, and Severe. Results An accuracy of 85.71% is reported. Conducting 5-fold cross-validation, we obtained an accuracy of 87.82 ± 6.25%. Similarly, conducting 10-fold cross-validation, we obtained an accuracy of 91.26 ± 3.42%. The results were better when compared with other state-of-the-art architectures. Conclusion We strongly believe that this study has a high chance of reducing the workload of overworked front-line radiologists, speeding up patient diagnosis and treatment, and easing pandemic control. Future work would be to train a novel deep learning-based architecture on a larger dataset for severity estimation.
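The "mean ± std over k folds" figures reported above are typically obtained as in the generic sketch below, shown here with a stand-in classifier and synthetic features rather than CoVSeverity-Net; the class labels and feature shapes are assumptions.

```python
# Hedged sketch: k-fold cross-validated accuracy reported as mean ± std.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))               # placeholder image features
y = rng.integers(0, 3, size=300)             # 0=Mild, 1=Moderate, 2=Severe (assumed labels)

for k in (5, 10):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="accuracy")
    print(f"{k}-fold accuracy: {acc.mean() * 100:.2f} ± {acc.std() * 100:.2f} %")
```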
Affiliation(s)
- Sagar Deep Deb
  - Department of Electrical Engineering, Indian Institute of Technology Patna, Patna, 801103 India
- Rajib Kumar Jha
  - Department of Electrical Engineering, Indian Institute of Technology Patna, Patna, 801103 India
- Rajnish Kumar
  - Department of Paediatrics, Netaji Subhas Medical College & Hospital, Patna, 801106 India
- Prem S. Tripathi
  - Department of Radiodiagnosis, Mahatma Gandhi Memorial Government Medical College, Indore, 452001 India
- Yash Talera
  - Department of Radiodiagnosis, Mahatma Gandhi Memorial Government Medical College, Indore, 452001 India
- Manish Kumar
  - Patna Medical College and Hospital, Bihar, 800001 India
9. Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599] [DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL) that is capable to simultaneously accomplish at least two tasks has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review the representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, in the meanwhile, there are performance gaps in some tasks, and accordingly we perceive the open challenges and the perspective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate further research efforts in high demand to escalate the performance of current models.
Affiliation(s)
- Yan Zhao
  - Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
  - School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
  - Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
  - School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
  - State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
10. Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737] [PMCID: PMC9753459] [DOI: 10.1016/j.media.2022.102722]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis with the benefits of the graph convolution network's superiority to tackle cross-granularity relationships. Experimental results on three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, which outperforms existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
Affiliation(s)
- Yanda Meng
  - Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Joshua Bridge
  - Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Cliff Addison
  - Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Manhui Wang
  - Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Stu Franks
  - Alces Flight Limited, Bicester, United Kingdom
- Maria Mackey
  - Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Steve Messenger
  - Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Renrong Sun
  - Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
- Thomas Fitzmaurice
  - Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Caroline McCann
  - Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
- Qiang Li
  - The Affiliated People's Hospital of Ningbo University, Ningbo, China
- Yitian Zhao
  - The Affiliated People's Hospital of Ningbo University, Ningbo, China
  - Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China
- Yalin Zheng
  - Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
  - Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom
11. Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust. PLoS One 2022; 17:e0278487. [PMID: 36548288] [PMCID: PMC9778629] [DOI: 10.1371/journal.pone.0278487]
Abstract
Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task. Achievements in this respect might enlighten future efforts for the containment of other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify Covid-19 patients from different medical and non-medical data. AI-based researchers have also been trying to contribute to this area, mostly by providing novel approaches to automated systems using convolutional neural networks (CNNs) and deep neural networks (DNNs) for Covid-19 detection and diagnosis. Due to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based studies have proposed various DL and TL models for Covid-19 detection and infected region segmentation from chest medical images like X-rays or CT images. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated, and tested using CT images of Covid patients collected from two different sources. The web application provides a simple, user-friendly interface to process the CT images from various resources using the chosen models, thresholds, and other parameters to generate the decisions on detection and segmentation. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision values. The U-Net model outperformed the other models with more than 98% accuracy.
12. Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. Appl Intell 2022. [DOI: 10.1007/s10489-022-04114-x]
13. Thaljaoui A, Khediri SE, Benmohamed E, Alabdulatif A, Alourani A. Integrated Bayesian and association-rules methods for autonomously orienting COVID-19 patients. Med Biol Eng Comput 2022; 60:3475-3496. [PMID: 36205834] [PMCID: PMC9540074] [DOI: 10.1007/s11517-022-02677-y]
Abstract
The coronavirus infection continues to spread rapidly worldwide, having a devastating impact on the health of the global population. To fight against COVID-19, we propose a novel autonomous decision-making process that combines two modules in order to support the decision-maker: (1) a Bayesian Network-based data-analysis module, which is used to specify the severity of coronavirus symptoms and classify cases as mild, moderate, or severe, and (2) an autonomous decision-making module based on association rule mining. The latter allows the autonomous generation of an adequate decision based on the FP-growth algorithm and the distance between objects. To build the Bayesian Network model, we propose a novel data-driven method, the MIGT-SL algorithm, that effectively learns the network's structure. The experiments are performed on a pre-processed discrete dataset. The proposed algorithm correctly recovers 74%, 87.5%, and 100% of the original structure of the ALARM, ASIA, and CANCER networks, respectively. The proposed Bayesian model performs well in terms of accuracy, with 96.15% and 94.77% for binary and multi-class classification, respectively. The developed decision-making model is evaluated according to its utility in solving the decision problem, and its accuracy in proposing an adequate decision is about 97.80%.
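The FP-growth step mentioned above can be prototyped as in the sketch below, which uses the mlxtend library on a tiny one-hot symptom table; the column names, thresholds, and data are invented and do not reproduce the authors' pipeline.

```python
# Hedged sketch: frequent itemsets and association rules via FP-growth (mlxtend).
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Toy one-hot encoded records; each row is a patient, each column a symptom/outcome flag.
records = pd.DataFrame(
    [[1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 1, 1]],
    columns=["fever", "cough", "dyspnea", "severe_case"],
).astype(bool)

itemsets = fpgrowth(records, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```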
Affiliation(s)
- Adel Thaljaoui
  - Department of Computer Science and Information, College of Science at Zulfi, Majmaah University, Al-Majmaah, 11952 Saudi Arabia
- Salim El Khediri
  - Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
  - Department of Computer Sciences, Faculty of Sciences of Gafsa, University of Gafsa, Gafsa, Tunisia
- Emna Benmohamed
  - Department of Computer Sciences, Faculty of Sciences of Gafsa, University of Gafsa, Gafsa, Tunisia
  - Research Groups in Intelligent Machines, University of Sfax, National School of Engineers (ENIS), BP 1173, 3038 Sfax, Tunisia
- Abdulatif Alabdulatif
  - Department of Computer Sciences, College of Computer, Qassim University, Buraidah, Saudi Arabia
- Abdullah Alourani
  - Department of Computer Science and Information, College of Science at Zulfi, Majmaah University, Al-Majmaah, 11952 Saudi Arabia
14. Wang C, Wang X, Wang Z, Zhu W, Hu R. COVID-19 contact tracking by group activity trajectory recovery over camera networks. Pattern Recognit 2022; 132:108908. [PMID: 35873066] [PMCID: PMC9290376] [DOI: 10.1016/j.patcog.2022.108908]
Abstract
Contact tracking plays an important role in the epidemiological investigation of COVID-19 and can effectively reduce the spread of the epidemic. As an excellent alternative for contact tracking, mobile phone location-based methods are widely used for locating and tracking contacts. However, the inaccurate positioning algorithms that are widely used in contact tracking lead to inaccurate follow-up of contacts. Aiming to achieve accurate contact tracking for the COVID-19 contact group, we extend the analysis of GPS data by combining it with video surveillance data and address a novel task named group activity trajectory recovery. Meanwhile, a new dataset called GATR-GPS is constructed to simulate a realistic scenario of COVID-19 contact tracking, and a coordinated optimization algorithm with a spatio-temporal constraint table is further proposed to realize efficient recovery of pedestrian trajectories. Extensive experiments are performed on the newly collected dataset and two commonly used existing person re-identification datasets, and the results demonstrate that our method achieves competitive results compared to state-of-the-art methods.
Affiliation(s)
- Chao Wang
  - National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
  - Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China
- XiaoChen Wang
  - National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
- Zhongyuan Wang
  - National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
- WenQian Zhu
  - National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
  - Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China
- Ruimin Hu
  - National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China
  - Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China
  - Collaborative Innovation Center of Geospatial Technology, Wuhan 430079, China
15. COVID-19 Infection Segmentation and Severity Assessment Using a Self-Supervised Learning Approach. Diagnostics (Basel) 2022; 12:1805. [PMID: 35892518] [PMCID: PMC9332359] [DOI: 10.3390/diagnostics12081805]
Abstract
Background: Automated segmentation of COVID-19 infection lesions and the assessment of the severity of the infections are critical in COVID-19 diagnosis and treatment. Based on a large amount of annotated data, deep learning approaches have been widely used in COVID-19 medical image analysis. However, the number of medical image samples is generally huge, and it is challenging to obtain enough annotated medical images for training a deep CNN model. Methods: To address these challenges, we propose a novel self-supervised deep learning method for automated segmentation of COVID-19 infection lesions and assessing the severity of infection, which can reduce the dependence on the annotation of the training samples. In the proposed method, first, a large amount of unlabeled data is used to pre-train an encoder-decoder model to learn rotation-dependent and rotation-invariant features. Then, a small amount of labeled data is used to fine-tune the pre-trained encoder-decoder for COVID-19 severity classification and lesion segmentation. Results: The proposed method was tested on two public COVID-19 CT datasets and one self-built dataset. Accuracy, precision, recall, and F1-score were used to measure classification performance, and the Dice coefficient was used to measure segmentation performance. For COVID-19 severity classification, the proposed method outperformed other unsupervised feature learning methods by about 7.16% in accuracy. For segmentation, when the amount of labeled data was 100%, the Dice value of the proposed method was 5.58% higher than that of U-Net; in 70% of the cases, our method was 8.02% higher than U-Net; in 30% of the cases, our method was 11.88% higher than U-Net; and in 10% of the cases, our method was 16.88% higher than U-Net. Conclusions: The proposed method provides better classification and segmentation performance under limited labeled data than other methods.
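A common form of the rotation-based pretext task referred to above is sketched below with a toy encoder: the network is trained, without labels, to predict which of four rotations was applied to each image. The encoder, shapes, and optimizer settings are assumptions, not the paper's architecture.

```python
# Hedged sketch: rotation-prediction pretext task for self-supervised pre-training.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # tiny stand-in encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rot_head = nn.Linear(32, 4)                    # predict rotation of 0/90/180/270 degrees
opt = torch.optim.Adam(list(encoder.parameters()) + list(rot_head.parameters()), lr=1e-3)

images = torch.randn(8, 1, 64, 64)             # unlabeled CT patches (random stand-ins)
k = torch.randint(0, 4, (images.size(0),))     # sample one rotation index per image
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(images, k)])

loss = nn.functional.cross_entropy(rot_head(encoder(rotated)), k)
opt.zero_grad(); loss.backward(); opt.step()   # one pretext-task update
```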
16. Chandra TB, Singh BK, Jain D. Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification. Comput Methods Programs Biomed 2022; 222:106947. [PMID: 35749885] [PMCID: PMC9403875] [DOI: 10.1016/j.cmpb.2022.106947]
Abstract
BACKGROUND AND OBJECTIVES Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders like tuberculosis (TB), pneumonia, coronavirus disease (COVID-19), etc. The radiomic features associated with different disease manifestations assist in detection, localization, and grading the severity of infected lung regions. The majority of the existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, the existing deep learning approaches use class activation maps and saliency maps, which generate only a rough localization. This study aims to generate a compact disease boundary and an infection map, and to grade the infection severity, using the proposed multistage superpixel classification-based disease localization and severity assessment framework. METHODS The proposed method uses a simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. Initially, the different radiomic texture and proposed shape features are extracted and combined to train different benchmark classifiers in a multistage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. The performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and Holm and Nemenyi post-hoc procedures. RESULTS The proposed multistage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, and AUC = 0.853 for Stage-II using the calibration dataset, and ACC = 93.41%, FM = 95.32%, AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, AUC = 0.795 for Stage-II using the validation dataset. The model also demonstrated an average Jaccard Index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589. CONCLUSIONS The classification results obtained using the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the good agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical tests justified the significance of the obtained results.
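The SLIC superpixel decomposition that underpins this framework is available in scikit-image; the sketch below shows superpixel generation plus simple per-superpixel intensity features of the kind a multistage classifier could consume. The parameters, image, and features are illustrative assumptions only.

```python
# Hedged sketch: SLIC superpixels + per-superpixel features on a synthetic grayscale image.
import numpy as np
from skimage.segmentation import slic

cxr = np.random.rand(256, 256)                           # stand-in grayscale chest X-ray
labels = slic(cxr, n_segments=200, compactness=0.1, channel_axis=None)
# (older scikit-image versions use multichannel=False instead of channel_axis=None)

features = []
for sp in np.unique(labels):
    region = cxr[labels == sp]
    features.append([region.mean(), region.std(), region.size])  # intensity + size features
features = np.asarray(features)                          # one row per superpixel, ready for a classifier
print(features.shape)
```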
Affiliation(s)
- Tej Bahadur Chandra
  - Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India
- Bikesh Kumar Singh
  - Department of Biomedical Engineering, National Institute of Technology Raipur, Chhattisgarh, India
- Deepak Jain
  - Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
17. Karthik R, Menaka R, Hariharan M, Won D. CT-based severity assessment for COVID-19 using weakly supervised non-local CNN. Appl Soft Comput 2022; 121:108765. [PMID: 35370523] [PMCID: PMC8962065] [DOI: 10.1016/j.asoc.2022.108765]
Abstract
Evaluating patient criticality is the foremost step in administering appropriate COVID-19 treatment protocols. Learning an Artificial Intelligence (AI) model from clinical data for automatic risk-stratification enables accelerated response to patients displaying critical indicators. Chest CT manifestations including ground-glass opacities and consolidations are a reliable indicator for prognostic studies and show variability with patient condition. To this end, we propose a novel attention framework to estimate COVID-19 severity as a regression score from a weakly annotated CT scan dataset. It takes a non-locality approach that correlates features across different parts and spatial scales of the 3D scan. An explicit guidance mechanism from limited infection labeling drives attention refinement and feature modulation. The resulting encoded representation is further enriched through cross-channel attention. The attention model also infuses global contextual awareness into the deep voxel features by querying the base CT scan to mine relevant features. Consequently, it learns to effectively localize its focus region and chisel out the infection precisely. Experimental validation on the MosMed dataset shows that the proposed architecture has significant potential in augmenting existing methods as it achieved a 0.84 R-squared score and 0.133 mean absolute difference.
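The "non-locality" idea referenced above is commonly realized with a non-local (self-attention) block that correlates every spatial position of a feature volume with every other; the sketch below is a standard generic version of such a block, not the paper's attention framework, and the channel counts are arbitrary.

```python
# Hedged sketch: non-local (self-attention) block over a 3D feature map.
import torch
import torch.nn as nn

class NonLocalBlock3D(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv3d(channels, inter, 1)
        self.phi = nn.Conv3d(channels, inter, 1)
        self.g = nn.Conv3d(channels, inter, 1)
        self.out = nn.Conv3d(inter, channels, 1)

    def forward(self, x):                                  # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)       # (B, DHW, C')
        k = self.phi(x).flatten(2)                         # (B, C', DHW)
        v = self.g(x).flatten(2).transpose(1, 2)           # (B, DHW, C')
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return x + self.out(y)                             # residual connection

print(NonLocalBlock3D(32)(torch.randn(1, 32, 8, 16, 16)).shape)
```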
Affiliation(s)
- R Karthik
  - Centre for Cyber Physical Systems & School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- R Menaka
  - Centre for Cyber Physical Systems & School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- M Hariharan
  - Cisco Systems India Pvt Ltd, Bangalore, India
- Daehan Won
  - System Sciences and Industrial Engineering, Binghamton University, NY, USA
18. Karthik R, Menaka R, Hariharan M, Won D. Contour-enhanced attention CNN for CT-based COVID-19 segmentation. Pattern Recognit 2022; 125:108538. [PMID: 35068591] [PMCID: PMC8767763] [DOI: 10.1016/j.patcog.2022.108538]
Abstract
Accurate detection of COVID-19 is one of the challenging research topics in today's healthcare sector to control the coronavirus pandemic. Automatic data-powered insights for COVID-19 localization from medical imaging modality like chest CT scan tremendously augment clinical care assistance. In this research, a Contour-aware Attention Decoder CNN has been proposed to precisely segment COVID-19 infected tissues in a very effective way. It introduces a novel attention scheme to extract boundary, shape cues from CT contours and leverage these features in refining the infected areas. For every decoded pixel, the attention module harvests contextual information in its spatial neighborhood from the contour feature maps. As a result of incorporating such rich structural details into decoding via dense attention, the CNN is able to capture even intricate morphological details. The decoder is also augmented with a Cross Context Attention Fusion Upsampling to robustly reconstruct deep semantic features back to high-resolution segmentation map. It employs a novel pixel-precise attention model that draws relevant encoder features to aid in effective upsampling. The proposed CNN was evaluated on 3D scans from MosMedData and Jun Ma benchmarked datasets. It achieved state-of-the-art performance with a high dice similarity coefficient of 85.43% and a recall of 88.10%.
Affiliation(s)
- R Karthik
  - Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
- R Menaka
  - Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
- Hariharan M
  - School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
- Daehan Won
  - System Sciences and Industrial Engineering, Binghamton University, United States
19. Hu H, Shen L, Guan Q, Li X, Zhou Q, Ruan S. Deep co-supervision and attention fusion strategy for automatic COVID-19 lung infection segmentation on CT images. Pattern Recognit 2022; 124:108452. [PMID: 34848897] [PMCID: PMC8612757] [DOI: 10.1016/j.patcog.2021.108452]
Abstract
Due to the irregular shapes, various sizes, and indistinguishable boundaries between normal and infected tissues, it is still a challenging task to accurately segment the infected lesions of COVID-19 on CT images. In this paper, a novel segmentation scheme is proposed for COVID-19 infections by enhancing supervised information and fusing multi-scale feature maps of different levels based on an encoder-decoder architecture. To this end, a deep collaborative supervision (co-supervision) scheme is proposed to guide the network in learning edge and semantic features. More specifically, an Edge Supervised Module (ESM) is first designed to highlight low-level boundary features by incorporating edge supervision into the initial stage of down-sampling. Meanwhile, an Auxiliary Semantic Supervised Module (ASSM) is proposed to strengthen high-level semantic information by integrating mask supervision into the later stages. An Attention Fusion Module (AFM) is then developed to fuse multi-scale feature maps of different levels using an attention mechanism that reduces the semantic gaps between high-level and low-level feature maps. Finally, the effectiveness of the proposed scheme is demonstrated on four different COVID-19 CT datasets. The results show that the three proposed modules are all promising. Based on the baseline (ResUnet), using ESM, ASSM, or AFM alone increases the Dice metric by 1.12%, 1.95%, and 1.63%, respectively, on our dataset, while integrating the three modules together raises it by 3.97%. Compared with existing approaches on various datasets, the proposed method obtains better segmentation performance in the main metrics and achieves the best generalization and comprehensive performance.
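The co-supervision strategy above amounts to adding weighted auxiliary losses on edge and intermediate semantic outputs alongside the main segmentation loss; the sketch below shows that loss composition in generic form, with made-up weights and binary targets rather than the paper's exact formulation.

```python
# Hedged sketch: main segmentation loss + weighted edge and auxiliary semantic losses.
import torch
import torch.nn.functional as F

def co_supervision_loss(seg_logits, edge_logits, aux_logits,
                        seg_gt, edge_gt, w_edge=0.5, w_aux=0.5):
    """All inputs are (B, 1, H, W); edge_gt can be derived from seg_gt with an edge filter."""
    l_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    l_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_gt)   # low-level edge supervision
    l_aux = F.binary_cross_entropy_with_logits(aux_logits, seg_gt)      # high-level semantic supervision
    return l_seg + w_edge * l_edge + w_aux * l_aux

b, h, w = 2, 64, 64
loss = co_supervision_loss(torch.randn(b, 1, h, w), torch.randn(b, 1, h, w), torch.randn(b, 1, h, w),
                           torch.randint(0, 2, (b, 1, h, w)).float(),
                           torch.randint(0, 2, (b, 1, h, w)).float())
print(loss.item())
```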
Affiliation(s)
- Haigen Hu
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
  - Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Leizhao Shen
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
  - Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Qiu Guan
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
  - Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Xiaoxin Li
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
  - Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Qianwei Zhou
  - College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
  - Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, PR China
- Su Ruan
  - University of Rouen Normandy, LITIS EA 4108, Rouen 76183, France
20. Yousefzadeh M, Zolghadri M, Hasanpour M, Salimi F, Jafari R, Vaziri Bozorg M, Haseli S, Mahmoudi Aqeel Abadi A, Naseri S, Ay M, Nazem-Zadeh MR. Statistical analysis of COVID-19 infection severity in lung lobes from chest CT. Inform Med Unlocked 2022; 30:100935. [PMID: 35382230] [PMCID: PMC8970631] [DOI: 10.1016/j.imu.2022.100935]
Abstract
Detection of the COVID-19 virus is possible through reverse transcription-polymerase chain reaction (RT-PCR) kits and computed tomography (CT) images of the lungs. Diagnosis via CT images is faster than the RT-PCR method. In addition to its low false-negative rate, CT is also used for prognosis, determining the severity of the disease and the proposed treatment method. In this study, we estimated a probability density function (PDF) to examine the infections caused by the virus. We collected 232 chest CT scans of suspected patients and had them labeled by two radiologists into 6 classes, including a healthy class and 5 classes of different infection severity. To segment the lung lobes, we used a pre-trained U-Net model with an average Dice similarity coefficient (DSC) greater than 0.96. First, we extracted the PDF to grade the infection of each lobe and selected five specific thresholds as a feature vector. We then fed this feature vector to a support vector machine (SVM) model and made the final prediction of the infection severity. Using the t-test statistic, we calculated the p-value at different pixel thresholds and reported the significant differences in the pixel values. In most cases, the p-value was less than 0.05. Our model was developed on roughly labeled data without any manual segmentation and estimated lung infection involvement with an area under the curve (AUC) in the range of [0.64, 0.87]. The introduced model can be used to generate a systematic automated report for individual patients infected by COVID-19.
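The threshold-based density features plus SVM stage can be sketched as below; the HU thresholds, class count, and synthetic data are placeholders chosen for illustration and are not the study's actual values.

```python
# Hedged sketch: intensity-distribution threshold features per lobe + SVM severity prediction.
import numpy as np
from sklearn.svm import SVC

def density_features(lobe_pixels, thresholds=(-800, -600, -400, -200, 0)):
    """Fraction of lobe voxels above each HU threshold, i.e. samples of the empirical density tail."""
    lobe_pixels = np.asarray(lobe_pixels, dtype=float)
    return np.array([(lobe_pixels > t).mean() for t in thresholds])

rng = np.random.default_rng(0)
X = np.stack([density_features(rng.normal(-650, 150, size=5000)) for _ in range(60)])
y = rng.integers(0, 6, size=60)              # 6 classes: healthy + 5 severity grades (as in the study)
svm = SVC(kernel="rbf").fit(X, y)
print(svm.predict(X[:3]))
```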
21. Instance Importance-Aware Graph Convolutional Network for 3D Medical Diagnosis. Med Image Anal 2022; 78:102421. [DOI: 10.1016/j.media.2022.102421]
22. Wang C, Lu X, Wang W. A theoretical analysis based on causal inference and single-instance learning. Appl Intell 2022; 52:13902-13915. [PMID: 35250175] [PMCID: PMC8884416] [DOI: 10.1007/s10489-022-03193-0]
Abstract
Although using single-instance learning methods to solve multi-instance problems has achieved excellent performance in many tasks, the reasons for this success still lack a rigorous theoretical explanation. In particular, the potential relation between the number of causal factors (also called causal instances) in a bag and the model performance is not transparent. The goal of our study is to use the causal relationship between instances and bags to enhance the interpretability of multi-instance learning. First, we provide a lower bound on the number of instances required to determine causal factors in a real multi-instance learning task. Then, we provide a lower bound on the single-instance learning loss function when testing instances and training instances follow the same distribution and extend this conclusion to the situation where the distribution changes. Thus, theoretically, we demonstrate that the number of causal factors in the bag is an important parameter that affects the performance of the model when using single-instance learning methods to solve multi-instance learning problems. Finally, combining with a specific classification task, we experimentally validate our theoretical analysis.
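For readers unfamiliar with the setup analysed above, the sketch below shows the standard single-instance treatment of a multi-instance problem: each instance inherits its bag's label for training, and a bag is scored by aggregating instance predictions. This is a generic toy example, not the authors' experimental code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_bag(positive, n=10):
    """Toy bag: positive bags contain at least one 'causal' instance drawn
    from a shifted distribution."""
    X = rng.normal(0, 1, size=(n, 5))
    if positive:
        k = rng.integers(1, 4)                  # number of causal instances
        X[:k] += 2.0                            # causal factors live in a shifted region
    return X

bags = [make_bag(i % 2 == 1) for i in range(40)]
bag_labels = np.array([i % 2 for i in range(40)])

# Single-instance training: every instance takes its bag's label.
X_inst = np.vstack(bags)
y_inst = np.repeat(bag_labels, [len(b) for b in bags])
clf = LogisticRegression(max_iter=1000).fit(X_inst, y_inst)

# Bag prediction: aggregate instance scores (max rule).
bag_scores = np.array([clf.predict_proba(b)[:, 1].max() for b in bags])
print("bag accuracy:", ((bag_scores > 0.5) == bag_labels).mean())
```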
Collapse
Affiliation(s)
- Chao Wang
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
| | - Xuantao Lu
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
| | - Wei Wang
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
| |
Collapse
|
23
|
Hassan H, Ren Z, Zhao H, Huang S, Li D, Xiang S, Kang Y, Chen S, Huang B. Review and classification of AI-enabled COVID-19 CT imaging models based on computer vision tasks. Comput Biol Med 2022; 141:105123. [PMID: 34953356 PMCID: PMC8684223 DOI: 10.1016/j.compbiomed.2021.105123] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 12/03/2021] [Accepted: 12/03/2021] [Indexed: 01/12/2023]
Abstract
This article presents a systematic overview of artificial intelligence (AI) and computer vision strategies for diagnosing the coronavirus disease of 2019 (COVID-19) using computed tomography (CT) medical images. We analyzed the previous review works and found that all of them ignored classifying and categorizing COVID-19 literature based on computer vision tasks, such as classification, segmentation, and detection. Most of the COVID-19 CT diagnosis methods use segmentation and classification tasks together. Moreover, most of the review articles are diverse and cover CT as well as X-ray images. Therefore, we focused on COVID-19 diagnostic methods based on CT images. Well-known search engines and databases such as Google, Google Scholar, Kaggle, Baidu, IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus were utilized to collect relevant studies. After deep analysis, we collected 114 studies and reported highly enriched information for each selected study. According to our analysis, AI and computer vision have substantial potential for rapid COVID-19 diagnosis, as they could significantly assist in automating the diagnosis process. Accurate and efficient models will have real-time clinical implications, though further research is still required. Categorization of the literature based on computer vision tasks could be helpful for future research; therefore, this review article will provide a good foundation for conducting such research.
Collapse
Affiliation(s)
- Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
| | - Zhaoyu Ren
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Huishi Zhao
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Shoujin Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Dan Li
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Shaohua Xiang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Yan Kang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Medical Device Innovation Research Center, Shenzhen Technology University, Shenzhen, China
| | - Sifan Chen
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China; Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
| | - Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.
| |
Collapse
|
24
|
Hasaninasab M, Khansari M. Efficient COVID-19 testing via contextual model based compressive sensing. PATTERN RECOGNITION 2022; 122:108253. [PMID: 34413547 PMCID: PMC8362654 DOI: 10.1016/j.patcog.2021.108253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 08/11/2021] [Accepted: 08/11/2021] [Indexed: 06/13/2023]
Abstract
The COVID-19 pandemic is threatening billions of lives all over the world. As of March 6, 2021, COVID-19 had been confirmed in 115,653,459 people worldwide. It also has a devastating effect on businesses and social activities. Since there is still no definite cure for this disease, extensive testing is the most critical issue in determining the trend of illness, appropriate medical treatment, and social distancing policies. Besides, testing more people in a shorter time helps to contain the contagion. PCR-based methods are the most popular tests and take about an hour to produce a result, which makes the number of tests highly limited and, consequently, hurts the efficiency of pandemic control. In this paper, we propose a new approach to identify affected individuals with a considerably reduced number of tests. Intuitively, saving time and resources is the main advantage of our approach. We use contextual information to build a graph-based model to be used in model-based compressive sensing (CS). Our proposed model requires fewer tests than traditional testing methods and even group testing. We embed contextual information such as age, underlying disease, symptoms (i.e., cough, fever, fatigue, loss of consciousness), and social contacts into a graph-based model. This model is used in model-based CS to minimize the required number of tests. We take advantage of Discrete Signal Processing on Graphs (DSPG) to generate the model. Our contextual model makes CS more efficient in both the number of samples and the recovery quality. Moreover, it can be applied in cases where group testing is not applicable due to its severe dependency on sparsity. Experimental results show that the overall testing speed (individuals per test ratio) increases more than 15 times compared to individual testing, with an error of less than 5%, which is dramatically lower than that of traditional compressive sensing.
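To make the compressive-sensing idea concrete, here is a minimal sketch of pooled testing recovered with an l1 solver: a sparse infection vector is reconstructed from far fewer pooled measurements than individuals. The graph-based contextual model of the paper is not reproduced; the pooling matrix, threshold and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)

n_people, n_tests, n_infected = 100, 30, 3      # far fewer tests than individuals

x_true = np.zeros(n_people)
x_true[rng.choice(n_people, n_infected, replace=False)] = 1.0   # sparse infection vector

# Random pooling design: each test mixes samples from roughly half the population.
A = rng.integers(0, 2, size=(n_tests, n_people)).astype(float)
y = A @ x_true                                   # pooled (noise-free) measurements

# l1-regularised recovery; threshold the estimate to flag likely positives.
est = Lasso(alpha=0.05, positive=True, max_iter=10000).fit(A, y).coef_
flagged = np.where(est > 0.3)[0]
print("true positives:", np.where(x_true == 1)[0], "flagged:", flagged)
```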
Collapse
Affiliation(s)
- Mehdi Hasaninasab
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | - Mohammad Khansari
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| |
Collapse
|
25
|
Deshpande G, Batliner A, Schuller BW. AI-Based human audio processing for COVID-19: A comprehensive overview. PATTERN RECOGNITION 2022; 122:108289. [PMID: 34483372 PMCID: PMC8404390 DOI: 10.1016/j.patcog.2021.108289] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 08/24/2021] [Accepted: 08/29/2021] [Indexed: 06/02/2023]
Abstract
The Coronavirus (COVID-19) pandemic impelled several research efforts, from collecting COVID-19 patients' data to screening them for virus detection. Some COVID-19 symptoms are related to the functioning of the respiratory system that influences speech production; this suggests research on identifying markers of COVID-19 in speech and other human-generated audio signals. In this article, we give an overview of research on human audio signals using 'Artificial Intelligence' techniques to screen, diagnose, and monitor COVID-19 and to spread awareness about it. This overview will be useful for developing automated systems that can help in the context of COVID-19, using non-obtrusive and easy-to-use bio-signals conveyed in human non-speech and speech audio productions.
Collapse
Affiliation(s)
- Gauri Deshpande
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- TCS Research Pune, India
| | - Anton Batliner
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
| | - Björn W Schuller
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- GLAM - Group on Language, Audio, & Music, Imperial College London, UK
| |
Collapse
|
26
|
Ter-Sarkisov A. One Shot Model For The Prediction of COVID-19 And Lesions Segmentation In Chest CT Scans Through The Affinity Among Lesion Mask Features. Appl Soft Comput 2022; 116:108261. [PMID: 34924896 PMCID: PMC8668605 DOI: 10.1016/j.asoc.2021.108261] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 11/09/2021] [Accepted: 11/27/2021] [Indexed: 01/15/2023]
Abstract
We present a novel framework that integrates segmentation of lesion masks and prediction of COVID-19 in chest CT scans in one shot. In order to classify the whole input image, we introduce a type of association among lesion mask features extracted from the scan slice that we refer to as affinities. First, we map mask features to the affinity space by training an affinity matrix. Next, we map them back into the feature space through a trainable affinity vector. Finally, this feature representation is used for the classification of the whole input scan slice. We achieve a 93.55% COVID-19 sensitivity, 96.93% common pneumonia sensitivity, 99.37% true negative rate and 97.37% F1-score on the test split of the CNCB-NCOV dataset with 21,192 chest CT scan slices. We also achieve a 0.4240 mean average precision on the lesion segmentation task. All source code, models and results are publicly available on https://github.com/AlexTS1980/COVID-Affinity-Model.
Collapse
Affiliation(s)
- Aram Ter-Sarkisov
- CitAI Research Center, Department of Computer Science, City University of London, United Kingdom
| |
Collapse
|
27
|
Alzahrani A, Bhuiyan MAA, Akhter F. Detecting COVID-19 Pneumonia over Fuzzy Image Enhancement on Computed Tomography Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1043299. [PMID: 35087599 PMCID: PMC8789426 DOI: 10.1155/2022/1043299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Revised: 10/30/2021] [Accepted: 12/01/2021] [Indexed: 11/30/2022]
Abstract
COVID-19 is the worst pandemic that has hit the globe in recent history, causing an increase in deaths. As a result of this pandemic, a number of research interests emerged in several fields such as medicine, health informatics, medical imaging, artificial intelligence and social sciences. Lung infection or pneumonia is a regular complication of COVID-19, and Reverse Transcription Polymerase Chain Reaction (RT-PCR) and computed tomography (CT) have played important roles in diagnosing the disease. This research proposes an image enhancement method employing the fuzzy expected value to improve image quality for the detection of COVID-19 pneumonia. The principal objective of this research is to detect COVID-19 in patients using CT scan images collected from different sources, which include patients suffering from pneumonia and healthy people. The method is based on fuzzy histogram equalization and improves image contrast using the fuzzy normalized histogram of the image. The effectiveness of the algorithm has been demonstrated over several experiments on different features of lung CT images of COVID-19 patients, such as Ground-Glass Opacity (GGO), crazy paving, and consolidation. Experimental investigations indicate that among the 254 patients, 81.89% had features on both lungs, 9.5% on the left lung, and 10.24% on the right lung. The predominantly affected lobe was the right lower lobe (79.53%).
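The following is a rough, generic sketch of fuzzy contrast enhancement in the spirit described above (pixel memberships stretched around a fuzzy expected value); the membership function and the stretching rule here are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def fuzzy_enhance(img, beta=1.5):
    """Loose sketch of fuzzy contrast enhancement: map gray levels to [0, 1]
    memberships, stretch them around the fuzzy expected value (membership-weighted
    mean gray level), then map back to the original range."""
    img = img.astype(float)
    g_min, g_max = img.min(), img.max()
    mu = (img - g_min) / (g_max - g_min + 1e-8)          # fuzzy membership of each pixel
    fev = (mu * img).sum() / (mu.sum() + 1e-8)            # fuzzy expected value of gray level
    fev_mu = (fev - g_min) / (g_max - g_min + 1e-8)       # its membership
    # Sigmoid-like stretch around the fuzzy expected value.
    enhanced_mu = 1.0 / (1.0 + np.exp(-beta * (mu - fev_mu) * 10))
    return enhanced_mu * (g_max - g_min) + g_min

ct_slice = np.random.randint(0, 255, size=(64, 64))
print(fuzzy_enhance(ct_slice).round(1)[:2, :4])
```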
Collapse
Affiliation(s)
- Ali Alzahrani
- Department of Computer Engineering, King Faisal University, Hofuf 31982, Saudi Arabia
| | - Md. Al-Amin Bhuiyan
- Department of Computer Engineering, King Faisal University, Hofuf 31982, Saudi Arabia
| | - Fahima Akhter
- College of Applied Medical Sciences, King Faisal University, Hofuf 31982, Saudi Arabia
| |
Collapse
|
28
|
Algarni AD, El-Shafai W, El Banby GM, Abd El-Samie FE, Soliman NF. An Efficient CNN-Based Hybrid Classification and Segmentation Approach for COVID-19 Detection. COMPUTERS, MATERIALS & CONTINUA 2022; 70:4393-4410. [DOI: 10.32604/cmc.2022.020265] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 06/29/2021] [Indexed: 09/02/2023]
|
29
|
Li W, Cao Y, Yu K, Cai Y, Huang F, Yang M, Xie W. Pulmonary lesion subtypes recognition of COVID-19 from radiomics data with three-dimensional texture characterization in computed tomography images. Biomed Eng Online 2021; 20:123. [PMID: 34865622 PMCID: PMC8645296 DOI: 10.1186/s12938-021-00961-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 11/19/2021] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND The COVID-19 disease is putting unprecedented pressure on the global healthcare system. CT (computed tomography) examination, as an auxiliary diagnostic method, can help clinicians quickly locate COVID-19 lesions once a patient has been screened by PCR test. Furthermore, lesion subtype classification plays a critical role in the consequent treatment decision. Identifying the subtypes of lesions accurately can help doctors discover changes in lesions in time and better assess the severity of COVID-19. METHOD The four most typical lesion subtypes of COVID-19 are discussed in this paper: GGO (ground-glass opacity), cord, solid and subsolid. A computer-aided diagnosis approach for lesion subtypes is proposed. The radiomics data of lesions are segmented from COVID-19 patients' CT images with diagnosis and lesion annotations by radiologists. Three-dimensional texture descriptors are then applied to the volume data of lesions, together with shape and first-order features. The massive feature data are selected by the HAFS (hybrid adaptive feature selection) algorithm while a classification model is trained at the same time. The classifier is used to predict lesion subtypes as side decision information for radiologists. RESULTS There are 3734 lesions extracted from the dataset of 319 patients, and 189 radiomics features are obtained finally. The random forest classifier is trained with data augmentation because the number of lesions per subtype is imbalanced in the initial dataset. The experimental results show that the accuracy for the four subtypes of lesions is (93.06%, 96.84%, 99.58%, and 94.30%), the recall is (95.52%, 91.58%, 95.80% and 80.75%) and the f-score is (93.84%, 92.37%, 95.47%, and 84.42%). CONCLUSION The three-dimensional radiomics features used in this paper can better express the high-level information of COVID-19 lesions in CT slices. The HAFS method aggregates the results of multiple feature selection algorithms and intersects them with traditional methods to filter out redundant features more accurately. After selection, the subtype of a COVID-19 lesion can be judged by feeding the features into the RF (random forest) model, which can help clinicians more accurately identify the subtypes of COVID-19 lesions and provides help for further research.
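A minimal sketch of the radiomics-plus-random-forest idea follows, with hand-rolled first-order features standing in for the 189 radiomics features and the HAFS selection step omitted; the feature list, subtype labels and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

def simple_radiomics(volume, mask):
    """Hand-rolled stand-ins for first-order/shape features; real 3D texture
    descriptors (GLCM, GLRLM, ...) would come from a radiomics library."""
    vox = volume[mask > 0]
    return np.array([vox.mean(), vox.std(), np.percentile(vox, 10),
                     np.percentile(vox, 90), float(mask.sum())])

# Toy lesion volumes with 4 hypothetical subtypes (GGO, cord, solid, subsolid).
X, y = [], []
for label in range(4):
    for _ in range(30):
        vol = rng.normal(-600 + 150 * label, 50, size=(16, 16, 16))
        mask = (rng.random((16, 16, 16)) > 0.5).astype(int)
        X.append(simple_radiomics(vol, mask))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```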
Collapse
Affiliation(s)
- Wei Li
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Ministry of Education, Shenyang, China
| | - Yangyong Cao
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Kun Yu
- Biomedical and Information Engineering School, Northeastern University, Shenyang, China
| | - Yibo Cai
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Feng Huang
- Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China
| | - Minglei Yang
- Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China
| | - Weidong Xie
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
| |
Collapse
|
30
|
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior success of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities, including X-ray, computed tomography (CT) and ultrasound (US), combined with AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Collapse
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore;
| | - Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia;
| | - Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India;
| | - Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia;
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
| | - Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
| | - Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA;
| | - U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore;
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
| |
Collapse
|
31
|
Zhao S, Li Z, Chen Y, Zhao W, Xie X, Liu J, Zhao D, Li Y. SCOAT-Net: A novel network for segmenting COVID-19 lung opacification from CT images. PATTERN RECOGNITION 2021; 119:108109. [PMID: 34127870 PMCID: PMC8189738 DOI: 10.1016/j.patcog.2021.108109] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 05/07/2021] [Accepted: 06/09/2021] [Indexed: 02/05/2023]
Abstract
Automatic segmentation of lung opacification from computed tomography (CT) images shows excellent potential for quickly and accurately quantifying the infection of Coronavirus disease 2019 (COVID-19) and judging disease development and treatment response. However, some challenges still exist, including the complexity and variability of the opacity regions, the small difference between infected and healthy tissues, and the noise of CT images. Due to limited medical resources, it is impractical to obtain a large amount of data in a short time, which further hinders the training of deep learning models. To address these challenges, we propose a novel spatial- and channel-wise coarse-to-fine attention network (SCOAT-Net), inspired by the biological vision mechanism, for the segmentation of COVID-19 lung opacification from CT images. With UNet++ as the basic structure, our SCOAT-Net introduces specially designed spatial-wise and channel-wise attention modules, which serve to collaboratively boost the attention learning of the network and extract the efficient features of the infected opacification regions at the pixel and channel levels. Experiments show that our proposed SCOAT-Net achieves better results compared to several state-of-the-art image segmentation networks and has acceptable generalization ability.
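As a generic illustration of spatial- and channel-wise attention (not the published SCOAT-Net modules), a CBAM-style PyTorch block might look like the sketch below; the channel counts, reduction ratio and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel-wise + spatial-wise attention block, in the spirit of
    the attention modules described above."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_mlp(x)                             # channel-wise re-weighting
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial_conv(torch.cat([avg, mx], 1))   # spatial re-weighting

feat = torch.randn(2, 32, 64, 64)                               # e.g. a UNet++ feature map
print(ChannelSpatialAttention(32)(feat).shape)
```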
Collapse
Affiliation(s)
- Shixuan Zhao
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Zhidan Li
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Yang Chen
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
| | - Xingzhi Xie
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
| | - Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha, Hunan, China
- Department of Radiology Quality Control Center, Changsha, Hunan, China
| | - Di Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Yongjie Li
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
32
|
Luengo J, Sánchez-Tarragó D, Prati RC, Herrera F. Multiple instance classification: Bag noise filtering for negative instance noise cleaning. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.07.076] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
33
|
Said AB, Erradi A, Aly HA, Mohamed A. Predicting COVID-19 cases using bidirectional LSTM on multivariate time series. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2021; 28:56043-56052. [PMID: 34043172 PMCID: PMC8155803 DOI: 10.1007/s11356-021-14286-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Accepted: 05/03/2021] [Indexed: 05/05/2023]
Abstract
To assist policymakers in making adequate decisions to stop the spread of the COVID-19 pandemic, accurate forecasting of the disease propagation is of paramount importance. This paper presents a deep learning approach to forecast the cumulative number of COVID-19 cases using a bidirectional Long Short-Term Memory (Bi-LSTM) network applied to multivariate time series. Unlike other forecasting techniques, our proposed approach first groups the countries having similar demographic and socioeconomic aspects and health sector indicators using the K-means clustering algorithm. The cumulative case data of the clustered countries, enriched with data related to the lockdown measures, are fed to the bidirectional LSTM to train the forecasting model. We validate the effectiveness of the proposed approach by studying the disease outbreak in Qatar and the proposed model's predictions from December 1 until December 31, 2020. The quantitative evaluation shows that the proposed technique outperforms state-of-the-art forecasting approaches.
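A minimal PyTorch sketch of the core forecasting component (a bidirectional LSTM over multivariate windows) is shown below; the clustering step is omitted, and the feature set, window length and training loop are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    """Minimal bidirectional LSTM mapping a window of multivariate features
    (cases, lockdown indicators, ...) to the next day's cumulative case count."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # use the last time step's representation

model = BiLSTMForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

window = torch.randn(8, 14, 4)             # batch of 8 two-week windows (toy data)
target = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(window), target)
loss.backward(); optimizer.step()
print(float(loss))
```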
Collapse
Affiliation(s)
- Ahmed Ben Said
- Computer Science and Engineering Department, College of Engineering, Qatar University, 2713 Doha, Qatar
| | - Abdelkarim Erradi
- Computer Science and Engineering Department, College of Engineering, Qatar University, 2713 Doha, Qatar
| | - Hussein Ahmed Aly
- Computer Science and Engineering Department, College of Engineering, Qatar University, 2713 Doha, Qatar
| | - Abdelmonem Mohamed
- Computer Science and Engineering Department, College of Engineering, Qatar University, 2713 Doha, Qatar
| |
Collapse
|
34
|
Bougourzi F, Distante C, Ouafi A, Dornaika F, Hadid A, Taleb-Ahmed A. Per-COVID-19: A Benchmark Dataset for COVID-19 Percentage Estimation from CT-Scans. J Imaging 2021; 7:jimaging7090189. [PMID: 34564115 PMCID: PMC8468956 DOI: 10.3390/jimaging7090189] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 09/14/2021] [Accepted: 09/14/2021] [Indexed: 11/24/2022] Open
Abstract
COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic. In fact, many methods have been used to recognize COVID-19 infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scans, and Computed Tomography scans (CT-scans). In addition to recognizing COVID-19 infection, CT scans can provide more important information about the evolution of this disease and its severity. With the extensive number of COVID-19 infections, estimating the COVID-19 percentage can help intensive care units free up resuscitation beds for critical cases and follow other protocols for less severe cases. In this paper, we introduce a COVID-19 percentage estimation dataset from CT-scans, where the labeling process was accomplished by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResneXt-50, Densenet-161, and Inception-v3. For the three CNN architectures, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained using X-ray data). The evaluated approaches achieved promising results on the estimation of COVID-19 infection. Inception-v3 using the Dynamic Huber loss function and models pretrained on X-ray data achieved the best performance for slice-level results: 0.9365, 5.10, and 9.25 for the Pearson Correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. The same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively, for subject-level results. These results prove that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient's state.
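To illustrate the regression setup, here is a hedged sketch of a CNN backbone trained with a Huber loss whose delta shrinks over training, as a rough stand-in for the paper's Dynamic Huber loss; the backbone (a ResNet-18 rather than one of the three architectures evaluated), the delta schedule and the toy data are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Regression setup: a CNN backbone with a single output for the infection percentage.
backbone = models.resnet18(weights=None)            # ImageNet or X-ray weights would be loaded here
backbone.fc = nn.Linear(backbone.fc.in_features, 1)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def dynamic_huber(pred, target, delta):
    """Huber loss whose delta is scheduled over training (rough stand-in for
    the paper's 'Dynamic Huber' loss)."""
    return nn.functional.huber_loss(pred, target, delta=delta)

images = torch.randn(4, 3, 224, 224)                 # toy CT slices
percentages = torch.rand(4, 1) * 100                 # ground-truth infection percentage

for epoch, delta in enumerate([10.0, 5.0, 1.0]):      # shrink delta as training progresses
    loss = dynamic_huber(backbone(images), percentages, delta)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(epoch, float(loss))
```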
Collapse
Affiliation(s)
- Fares Bougourzi
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy;
| | - Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy;
- Correspondence: ; Tel.: +39-0832-1975300
| | - Abdelkrim Ouafi
- Laboratory of LESIA, University of Biskra, Biskra 7000, Algeria;
| | - Fadi Dornaika
- University of the Basque Country UPV/EHU, 20018 San Sebastian, Spain;
- IKERBASQUE, Basque Foundation for Science, 48009 Bilbao, Spain
| | - Abdenour Hadid
- University Polytechnique Hauts-de-France, University Lille, CNRS, Centrale Lille, UMR 8520-IEMN, F-59313 Valenciennes, France; (A.H.); (A.T.-A.)
| | - Abdelmalik Taleb-Ahmed
- University Polytechnique Hauts-de-France, University Lille, CNRS, Centrale Lille, UMR 8520-IEMN, F-59313 Valenciennes, France; (A.H.); (A.T.-A.)
| |
Collapse
|
35
|
Zhang F. Application of machine learning in CT images and X-rays of COVID-19 pneumonia. Medicine (Baltimore) 2021; 100:e26855. [PMID: 34516488 PMCID: PMC8428739 DOI: 10.1097/md.0000000000026855] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 07/18/2021] [Accepted: 07/20/2021] [Indexed: 01/05/2023] Open
Abstract
Coronavirus disease (COVID-19) has spread worldwide. X-ray and computed tomography (CT) are two technologies widely used in image acquisition, segmentation, diagnosis, and evaluation. Artificial intelligence can accurately segment infected parts in X-ray and CT images, assist doctors in improving diagnosis efficiency, and facilitate the subsequent assessment of the severity of the patient's infection. A medical assistant platform based on machine learning can help radiologists make clinical decisions and help in screening, diagnosis, and treatment. By providing scientific methods for image recognition, segmentation, and evaluation, we summarize the latest developments in the application of artificial intelligence to COVID-19 lung imaging and provide guidance and inspiration to researchers and doctors who are fighting the COVID-19 virus.
Collapse
|
36
|
Amin J, Anjum MA, Sharif M, Rehman A, Saba T, Zahra R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc Res Tech 2021; 85:385-397. [PMID: 34435702 PMCID: PMC8646237 DOI: 10.1002/jemt.23913] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 07/10/2021] [Accepted: 08/11/2021] [Indexed: 01/19/2023]
Abstract
The detection of viral RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID-19, as per the World Health Organization. COVID-19 manifests a different morphological structure compared to healthy images on computed tomography (CT). COVID-19 diagnosis at an early stage can aid in the timely cure of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images by using DeepLabv3 and ResNet-18. In Phase III, segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model having two stacked autoencoders (SAE) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and other public benchmark datasets with different scanners/mediums. The proposed method achieved a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
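A compact sketch of a stacked sparse autoencoder with a softmax-style classifier on top is given below; the layer sizes, the sparsity penalty (a simple L1 term on the codes) and the greedy pre-training loop are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One sparse autoencoder layer; an L1 penalty on the code stands in for
    the sparsity constraint."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, code_dim)
        self.dec = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = torch.sigmoid(self.enc(x))
        return code, self.dec(code)

def pretrain(ae, data, epochs=50, l1=1e-3):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        code, recon = ae(data)
        loss = nn.functional.mse_loss(recon, data) + l1 * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

# Toy features from segmented lesions (e.g. flattened patches).
x = torch.rand(128, 64)
ae1 = pretrain(SparseAE(64, 32), x)
code1, _ = ae1(x)
ae2 = pretrain(SparseAE(32, 16), code1.detach())
code2, _ = ae2(code1.detach())

# Softmax-style classifier on top of the stacked codes (COVID-19 vs. healthy).
clf = nn.Linear(16, 2)
print(clf(code2.detach()).shape)
```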
Collapse
Affiliation(s)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
| | - Muhammad Almas Anjum
- Dean of University, National University of Technology (NUTECH), Islamabad, Pakistan
| | - Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad Wah Campus, Wah Cantt, Pakistan
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
| | - Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
| | - Rida Zahra
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
| |
Collapse
|
37
|
Jin Q, Cui H, Sun C, Meng Z, Wei L, Su R. Domain adaptation based self-correction model for COVID-19 infection segmentation in CT images. EXPERT SYSTEMS WITH APPLICATIONS 2021; 176:114848. [PMID: 33746369 PMCID: PMC7954643 DOI: 10.1016/j.eswa.2021.114848] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 01/29/2021] [Accepted: 03/02/2021] [Indexed: 05/03/2023]
Abstract
The capability of generalization to unseen domains is crucial for deep learning models when considering real-world scenarios. However, current available medical image datasets, such as those for COVID-19 CT images, have large variations of infections and domain shift problems. To address this issue, we propose a prior knowledge driven domain adaptation and a dual-domain enhanced self-correction learning scheme. Based on the novel learning scheme, a domain adaptation based self-correction model (DASC-Net) is proposed for COVID-19 infection segmentation on CT images. DASC-Net consists of a novel attention and feature domain enhanced domain adaptation model (AFD-DA) to solve the domain shifts and a self-correction learning process to refine segmentation results. The innovations in AFD-DA include an image-level activation feature extractor with attention to lung abnormalities and a multi-level discrimination module for hierarchical feature domain alignment. The proposed self-correction learning process adaptively aggregates the learned model and corresponding pseudo labels for the propagation of aligned source and target domain information to alleviate the overfitting to noises caused by pseudo labels. Extensive experiments over three publicly available COVID-19 CT datasets demonstrate that DASC-Net consistently outperforms state-of-the-art segmentation, domain shift, and coronavirus infection segmentation methods. Ablation analysis further shows the effectiveness of the major components in our model. The DASC-Net enriches the theory of domain adaptation and self-correction learning in medical imaging and can be generalized to multi-site COVID-19 infection segmentation on CT images for clinical deployment.
Collapse
Affiliation(s)
- Qiangguo Jin
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
- CSIRO Data61, Sydney, Australia
| | - Hui Cui
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
| | | | - Zhaopeng Meng
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Tianjin University of Traditional Chinese Medicine, Tianjin, China
| | - Leyi Wei
- School of Software, Shandong University, Shandong, China
| | - Ran Su
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
| |
Collapse
|
38
|
Özcan ANŞ, Aslan K. Diagnostic accuracy of sagittal TSE-T2W, variable flip angle 3D TSE-T2W and high-resolution 3D heavily T2W sequences for the stenosis of two localizations: the cerebral aqueduct and the superior medullary velum. Curr Med Imaging 2021; 17:1432-1438. [PMID: 34365953 DOI: 10.2174/1573405617666210806123720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 04/07/2021] [Accepted: 05/03/2021] [Indexed: 11/22/2022]
Abstract
OBJECTIVES This study aimed to investigate the accuracy of conventional sagittal turbo spin echo T2-weighted (Sag TSE-T2W), variable flip angle 3D TSE (VFA-3D-TSE) and high-resolution 3D heavily T2W (HR-3D-HT2W) sequences in the diagnosis of primary aqueductal stenosis (PAS) and superior medullary velum stenosis (SMV-S), and the effect of stenosis localization on diagnosis. METHODS Seventy-seven patients were included in the study. The diagnostic accuracy of the HR-3D-HT2W, Sag TSE-T2W and VFA-3D-TSE sequences was classified into three grades by two experienced neuroradiologists: grade 0 (the sequence has no diagnostic ability), grade 1 (the sequence diagnoses stenosis but does not show focal stenosis itself or membrane formation), and grade 2 (the sequence makes a definitive diagnosis of stenosis and shows focal stenosis itself or membrane formation). Stenosis localizations were divided into three groups: cerebral aqueduct (CA), superior medullary velum (SMV) and SMV+CA. In the statistical analysis, the grades of the sequences were first compared without differentiation based on localization. Then, the effect of localization on diagnosis was determined by comparing the grades for individual localizations. RESULTS In the sequence comparison, grade 0 was not detected in the VFA-3D-TSE and HR-3D-HT2W sequences, and these sequences diagnosed all cases. On the other hand, grade 0 was detected in 25.4% of cases with the Sag TSE-T2W sequence (P<0.05). Grade 1 was detected by VFA-3D-TSE in 23% of the cases, while grade 1 was detected by HR-3D-HT2W in only one case (12.5%), and the difference was statistically significant (P<0.05). When the sequences were examined according to localization, the rate of grade 0 in the Sag TSE-T2W sequence was statistically significantly higher for the SMV localization (33.3%) compared to CA (66.7%) and SMV+CA (0%) (P<0.05). Localization had no effect on diagnosis using the other sequences. CONCLUSION In our study, we found that the VFA-3D-TSE and HR-3D-HT2W sequences were successful in the diagnosis of PAS and SMV-S, contrary to the Sag TSE-T2W sequence.
Collapse
Affiliation(s)
| | - Kerim Aslan
- Samsun Ondokuz Mayıs University, Department of Radiology, Samsun, Turkey
| |
Collapse
|
39
|
Acar E, Şahin E, Yılmaz İ. Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images. Neural Comput Appl 2021; 33:17589-17609. [PMID: 34345118 PMCID: PMC8321007 DOI: 10.1007/s00521-021-06344-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Accepted: 07/18/2021] [Indexed: 12/12/2022]
Abstract
COVID-19 has caused a pandemic crisis that threatens the world in many areas, especially in public health. For the diagnosis of COVID-19, computed tomography has a prognostic role in the early diagnosis of COVID-19 as it provides both rapid and accurate results. This is crucial to assist clinicians in making decisions for rapid isolation and appropriate patient treatment. Therefore, many researchers have shown that the accuracy of COVID-19 patient detection from chest CT images using various deep learning systems is extremely optimistic. Deep learning networks such as convolutional neural networks (CNNs) require substantial training data. One of the biggest problems for researchers is accessing a significant amount of training data. In this work, we combine methods such as segmentation, data augmentation and generative adversarial networks (GAN) to increase the effectiveness of deep learning models. We propose a method that generates synthetic chest CT images using the GAN method from a limited number of CT images. We test the performance of experiments (with and without GAN) on internal and external datasets. When the CNN is trained on real images and synthetic images, a slight increase in accuracy and other results is observed in the internal dataset, but between 3% and 9% in the external dataset. It is promising according to the performance results that the proposed method will accelerate the detection of COVID-19 and lead to more robust systems.
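As a rough sketch of the GAN-based augmentation idea, the following minimal generator/discriminator update could produce synthetic patches that are then mixed into CNN training; the fully connected architectures, patch size and hyper-parameters here are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch for synthesising small CT-like patches (illustrative sizes only).
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 64 * 64) * 2 - 1              # stand-in for real CT patches in [-1, 1]

# Discriminator step: real patches vs. generated patches.
fake = G(torch.randn(16, 100)).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake = G(torch.randn(16, 100))
g_loss = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```

The generated patches would then be appended to the real training set before fitting the downstream classifier.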
Collapse
Affiliation(s)
- Erdi Acar
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| | - Engin Şahin
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| | - İhsan Yılmaz
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| |
Collapse
|
40
|
Li Z, Zhao S, Chen Y, Luo F, Kang Z, Cai S, Zhao W, Liu J, Zhao D, Li Y. A deep-learning-based framework for severity assessment of COVID-19 with CT images. EXPERT SYSTEMS WITH APPLICATIONS 2021; 185:115616. [PMID: 34334965 PMCID: PMC8314790 DOI: 10.1016/j.eswa.2021.115616] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Revised: 06/03/2021] [Accepted: 07/12/2021] [Indexed: 02/05/2023]
Abstract
Millions of COVID-19-positive patients are suffering from the pandemic around the world, and a critical step in their management and treatment is severity assessment, which is quite challenging with limited medical resources. Currently, several artificial intelligence systems have been developed for severity assessment. However, imprecise severity assessment and insufficient data are still obstacles. To address these issues, we proposed a novel deep-learning-based framework for fine-grained severity assessment using 3D CT scans, by jointly performing lung segmentation and lesion segmentation. The main innovations in the proposed framework include: 1) decomposing the 3D CT scan into multi-view slices to reduce the complexity of the 3D model, and 2) integrating prior knowledge (dual-Siamese channels and clinical metadata) into our model to improve model performance. We evaluated the proposed method on 1301 CT scans of 449 COVID-19 cases collected by us; our method achieved an accuracy of 86.7% for four-way classification, with sensitivities of 92%, 78%, 95%, and 89% for the four stages. Moreover, an ablation study demonstrated the effectiveness of the major components in our model. This indicates that our method may contribute a potential solution to severity assessment of COVID-19 patients using CT images and clinical metadata.
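A small sketch of the multi-view decomposition step (evenly spaced axial, coronal and sagittal slices from one 3D scan) might look like this; the number of slices per view is an assumption.

```python
import numpy as np

def multi_view_slices(volume, n_per_view=8):
    """Decompose a 3D CT volume into evenly spaced axial, coronal and sagittal
    slices so that a 2D model can process them jointly."""
    views = []
    for axis in range(3):                                    # axial, coronal, sagittal
        idx = np.linspace(0, volume.shape[axis] - 1, n_per_view).astype(int)
        views.append(np.take(volume, idx, axis=axis))
    return views

ct = np.random.rand(128, 256, 256)                           # toy CT volume
for view in multi_view_slices(ct):
    print(view.shape)
```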
Collapse
Affiliation(s)
- Zhidan Li
- MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| | - Shixuan Zhao
- MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| | - Yang Chen
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Fuya Luo
- MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| | - Zhiqing Kang
- MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| | - Shengping Cai
- Department of Radiology, Wuhan Red Cross Hospital, Wuhan, China
| | - Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Di Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Yongjie Li
- MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
41
|
Müller D, Soto-Rey I, Kramer F. Robust chest CT image segmentation of COVID-19 lung infection based on limited data. INFORMATICS IN MEDICINE UNLOCKED 2021; 25:100681. [PMID: 34337140 PMCID: PMC8313817 DOI: 10.1016/j.imu.2021.100681] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 07/12/2021] [Accepted: 07/25/2021] [Indexed: 12/17/2022] Open
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare. For quantitative assessment and disease monitoring, medical imaging like computed tomography offers great potential as an alternative to RT-PCR methods. For this reason, automated image segmentation is highly desired as clinical decision support. However, publicly available COVID-19 imaging data are limited, which leads to overfitting of traditional approaches. METHODS To address this problem, we propose an innovative automated segmentation pipeline for COVID-19 infected regions, which is able to handle small datasets by utilizing them as variant databases. Our method focuses on on-the-fly generation of unique and random image patches for training by performing several preprocessing methods and exploiting extensive data augmentation. For further reduction of the overfitting risk, we implemented a standard 3D U-Net architecture instead of new or computationally complex neural network architectures. RESULTS Through a k-fold cross-validation using 20 CT scans for training and validation, we were able to develop a highly accurate as well as robust segmentation model for lungs and COVID-19 infected regions without overfitting on limited data. We performed an in-detail analysis and discussion on the robustness of our pipeline through a sensitivity analysis based on the cross-validation and the impact of the applied preprocessing techniques on model generalizability. Our method achieved Dice similarity coefficients for COVID-19 infection between predicted and radiologist-annotated segmentations of 0.804 on validation and 0.661 on a separate testing set consisting of 100 patients. CONCLUSIONS We demonstrated that the proposed method outperforms related approaches, advances the state-of-the-art for COVID-19 segmentation and improves robust medical image analysis based on limited data.
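The on-the-fly patch idea can be illustrated with a simple generator that yields random, lightly augmented 3D patches; the patch size and the specific augmentations are assumptions, not the pipeline's actual preprocessing.

```python
import numpy as np

def random_patch_generator(volume, mask, patch_size=(64, 64, 64), rng=None):
    """Yield endlessly many random, lightly augmented 3D patches from one CT scan,
    so that every training sample the network sees is (almost) unique."""
    if rng is None:
        rng = np.random.default_rng()
    while True:
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        img, lab = volume[sl].copy(), mask[sl].copy()
        if rng.random() < 0.5:                                # random flip
            axis = int(rng.integers(0, 3))
            img, lab = np.flip(img, axis), np.flip(lab, axis)
        img = img + rng.normal(0, 0.01, img.shape)            # mild intensity noise
        yield img, lab

ct = np.random.rand(128, 256, 256)                            # toy scan and toy mask
seg = (np.random.rand(128, 256, 256) > 0.95).astype(np.uint8)
patch, label = next(random_patch_generator(ct, seg))
print(patch.shape, label.sum())
```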
Collapse
Affiliation(s)
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
| | - Iñaki Soto-Rey
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
| | - Frank Kramer
- IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
| |
Collapse
|
42
|
Irmak E. COVID-19 disease severity assessment using CNN model. IET IMAGE PROCESSING 2021; 15:1814-1824. [PMID: 34230837 PMCID: PMC8251482 DOI: 10.1049/ipr2.12153] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/16/2021] [Accepted: 02/21/2021] [Indexed: 05/14/2023]
Abstract
Due to the highly infectious nature of the novel coronavirus (COVID-19) disease, an excessive number of patients waits in line for chest X-ray examination, which overloads clinicians and radiologists and negatively affects patient treatment, prognosis and control of the pandemic. Given that clinical facilities such as intensive care units and mechanical ventilators are very limited in the face of this highly contagious disease, it becomes quite important to classify patients according to their severity levels. This paper presents a novel implementation of a convolutional neural network (CNN) approach for COVID-19 disease severity classification (assessment). An automated CNN model is designed and proposed to divide COVID-19 patients into four severity classes, namely mild, moderate, severe, and critical, with an average accuracy of 95.52%, using chest X-ray images as input. Experimental results on a sufficiently large number of chest X-ray images demonstrate the effectiveness of the CNN model produced with the proposed framework. To the best of the author's knowledge, this is the first COVID-19 disease severity assessment study with four stages (mild vs. moderate vs. severe vs. critical) using a sufficiently large X-ray image dataset and a CNN whose hyper-parameters are almost all automatically tuned by a grid search optimiser.
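A minimal sketch of grid-search hyper-parameter tuning is shown below; `train_and_score` is a hypothetical stand-in for training the CNN on one configuration and returning validation accuracy, and the grid values are assumptions.

```python
import itertools

def train_and_score(lr, n_filters, dropout):
    """Hypothetical stand-in: train the CNN with these hyper-parameters on the
    training split and return validation accuracy."""
    return 0.9 - abs(lr - 1e-3) * 10 - abs(dropout - 0.3)     # dummy score for illustration

grid = {
    "lr": [1e-2, 1e-3, 1e-4],
    "n_filters": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
}

best = max(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=lambda params: train_and_score(**params),
)
print("best hyper-parameters:", best)
```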
Collapse
Affiliation(s)
- Emrah Irmak
- Electrical-Electronics Engineering Department, Alanya Alaaddin Keykubat University, Alanya, Antalya, Turkey
| |
Collapse
|
43
|
Moezzi M, Shirbandi K, Shahvandi HK, Arjmand B, Rahim F. The diagnostic accuracy of Artificial Intelligence-Assisted CT imaging in COVID-19 disease: A systematic review and meta-analysis. INFORMATICS IN MEDICINE UNLOCKED 2021; 24:100591. [PMID: 33977119 PMCID: PMC8099790 DOI: 10.1016/j.imu.2021.100591] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Revised: 04/17/2021] [Accepted: 04/29/2021] [Indexed: 01/08/2023] Open
Abstract
Artificial intelligence (AI) systems have become critical in support of decision-making. This systematic review summarizes all the data currently available on the accuracy of AI-assisted CT-scan prediction for COVID-19. The ISI Web of Science, Cochrane Library, PubMed, Scopus, CINAHL, Science Direct, PROSPERO, and EMBASE were systematically searched. We used the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool to assess the quality and potential bias of all included studies. A hierarchical summary receiver-operating characteristic (HSROC) curve and a summary receiver-operating characteristic (SROC) curve were implemented. The area under the curve (AUC) was computed to determine diagnostic accuracy. Finally, 36 studies (a total of 39,246 images) were selected for inclusion in the final meta-analysis. The pooled sensitivity for AI was 0.90 (95% CI, 0.90–0.91), specificity was 0.91 (95% CI, 0.90–0.92) and the AUC was 0.96 (95% CI, 0.91–0.98). For deep learning (DL) methods, the pooled sensitivity was 0.90 (95% CI, 0.90–0.91), specificity was 0.88 (95% CI, 0.87–0.88) and the AUC was 0.96 (95% CI, 0.93–0.97). In the case of machine learning (ML), the pooled sensitivity was 0.90 (95% CI, 0.90–0.91), specificity was 0.95 (95% CI, 0.94–0.95) and the AUC was 0.97 (95% CI, 0.96–0.99). AI is useful in identifying signs of lung involvement in COVID-19 patients. More prospective real-time trials are required to confirm AI's role in fast and accurate COVID-19 diagnosis, due to the possible selection bias and retrospective nature of currently available studies.
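For intuition only, the snippet below pools per-study sensitivities with a fixed-effect inverse-variance average on the logit scale; the review itself fits bivariate/HSROC models, which are more involved, and the study counts here are made up.

```python
import numpy as np

# Per-study (true positive, false negative) counts -- made-up numbers for illustration.
tp = np.array([90, 45, 120, 60])
fn = np.array([10, 5, 15, 4])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                          # approximate variance of the logit
w = 1 / var                                    # inverse-variance weights (fixed-effect)

pooled_logit = (w * logit).sum() / w.sum()
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity ~ {pooled_sens:.3f}")
```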
Collapse
Affiliation(s)
- Meisam Moezzi
- Department of Emergency Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
| | - Kiarash Shirbandi
- International Affairs Department (IAD), Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
| | - Hassan Kiani Shahvandi
- Allied Health Science, School of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
| | - Babak Arjmand
- Research Assistant Professor of Applied Cellular Sciences (By Research), Cellular and Molecular Institute, Endocrinology and Metabolism Research Institute, Tehran University of Medical Sciences, Tehran, Iran
| | - Fakher Rahim
- Health Research Institute, Thalassemia and Hemoglobinopathies Research Centre, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
| |
Collapse
|
44
|
Li C, Yang Y, Liang H, Wu B. Transfer learning for establishment of recognition of COVID-19 on CT imaging using small-sized training datasets. Knowl Based Syst 2021; 218:106849. [PMID: 33584016 PMCID: PMC7866884 DOI: 10.1016/j.knosys.2021.106849] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2020] [Revised: 01/27/2021] [Accepted: 02/02/2021] [Indexed: 01/08/2023]
Abstract
The coronavirus disease, called COVID-19, has been spreading fast worldwide since the end of 2019 and has become a globally challenging pandemic. By 27 May 2020, it had infected more than 5.6 million individuals throughout the world and resulted in more than 348,145 deaths. CT image-based classification techniques have been tried by hospitals for the identification of COVID-19 from CT imaging, aiming to minimize the possibility of virus transmission and alleviate the burden on clinicians and radiologists. Early diagnosis of COVID-19 not only prevents the disease from spreading further but also allows a more reasonable allocation of limited medical resources. Therefore, CT images play an essential role in identifying cases of COVID-19 that are in great need of intensive clinical care. Unfortunately, the current public health emergency has caused great difficulties in collecting a large set of precise data for training neural networks. To tackle this challenge, our first thought is transfer learning, a technique that aims to transfer the knowledge from one or more source tasks to a target task when the latter has fewer training data. Since the training data are relatively limited, a transfer-learning-based DenseNet-121 approach for the identification of COVID-19 is established. The proposed method is inspired by the previous work of CheXNet for identifying common pneumonia, which was trained using the large ChestX-ray14 dataset containing 112,120 frontal chest X-rays of 14 different chest diseases (including pneumonia) that are individually labeled, and achieved good performance. Therefore, CheXNet was used as the pre-trained network for the target task (COVID-19 classification) by fine-tuning the network weights on the small-sized dataset of the target task. Finally, we evaluated our proposed method on the COVID-19-CT dataset. Experimentally, our method achieves state-of-the-art performance for accuracy (ACC) and F1-score. The quantitative indicators show that the proposed method, using only one GPU, reaches the best performance, up to 0.87 and 0.86, respectively, compared with some widely used and recent deep learning methods, which is helpful for COVID-19 diagnosis and patient triage. The codes used in this manuscript are publicly available on GitHub at (https://github.com/lichun0503/CT-Classification).
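A hedged sketch of the DenseNet-121 fine-tuning recipe follows: build the architecture, swap the classifier head for two classes, and update only the last block. Loading the actual CheXNet weights is omitted, and the freezing scheme, hyper-parameters and toy data are assumptions rather than the published training protocol.

```python
import torch
import torch.nn as nn
from torchvision import models

# Build DenseNet-121; in the paper the weights come from a CheXNet-style
# pretraining, here we simply instantiate the architecture as a sketch.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)   # COVID-19 vs. non-COVID

# Optionally freeze early blocks and fine-tune only the last dense block + classifier.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("features.denseblock4", "features.norm5", "classifier"))

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
loss = criterion(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
```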
Collapse
Affiliation(s)
- Chun Li
- School of Science, Harbin Institute of Technology, Shenzhen, 518055, China
| | - Yunyun Yang
- School of Science, Harbin Institute of Technology, Shenzhen, 518055, China
| | - Hui Liang
- School of Science, Harbin Institute of Technology, Shenzhen, 518055, China
| | - Boying Wu
- Department of Mathematics, Harbin Institute of Technology, Harbin, 150006, China
| |
Collapse
|
45
|
Xue W, Cao C, Liu J, Duan Y, Cao H, Wang J, Tao X, Chen Z, Wu M, Zhang J, Sun H, Jin Y, Yang X, Huang R, Xiang F, Song Y, You M, Zhang W, Jiang L, Zhang Z, Kong S, Tian Y, Zhang L, Ni D, Xie M. Modality alignment contrastive learning for severity assessment of COVID-19 from lung ultrasound and clinical information. Med Image Anal 2021; 69:101975. [PMID: 33550007 PMCID: PMC7817458 DOI: 10.1016/j.media.2021.101975] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 12/17/2020] [Accepted: 01/15/2021] [Indexed: 02/06/2023]
Abstract
The outbreak of COVID-19 around the world has put great pressure on health care systems, and many efforts have been devoted to artificial intelligence (AI)-based analysis of CT and chest X-ray images to help alleviate the shortage of radiologists and improve diagnostic efficiency. However, only a few works focus on AI-based lung ultrasound (LUS) analysis despite its significant role in COVID-19. In this work, we propose a novel method for severity assessment of COVID-19 patients from LUS and clinical information. Great challenges exist regarding the heterogeneous data, multi-modality information, and highly nonlinear mapping. To overcome these challenges, we first propose a dual-level supervised multiple instance learning module (DSA-MIL) to effectively combine the zone-level representations into patient-level representations. Then a novel modality alignment contrastive learning module (MA-CLR) is presented to combine representations of the two modalities, LUS and clinical information, by matching the two spaces while keeping the discriminative features. To train the nonlinear mapping, a staged representation transfer (SRT) strategy is introduced to maximally leverage the semantic and discriminative information in the training data. We trained the model on LUS data from 233 patients and validated it on 80 patients. Our method effectively combines the two modalities and achieves an accuracy of 75.0% for four-level patient severity assessment and 87.5% for binary severe/non-severe identification. In addition, our method provides interpretation of the severity assessment by grading each lung zone (with an accuracy of 85.28%) and identifying its pathological patterns. The method has great potential in real clinical practice for COVID-19 patients, especially pregnant women and children, for progress monitoring, prognosis stratification, and patient management.
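The modality-alignment idea can be sketched with a generic symmetric contrastive (InfoNCE-style) loss that pulls paired ultrasound and clinical embeddings together; this is a minimal stand-in inspired by MA-CLR, not the paper's exact module, and the projection sizes and temperature are assumed values.

```python
# Hedged sketch of contrastive modality alignment between lung-ultrasound (LUS)
# features and clinical features of the same patient.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAlignment(nn.Module):
    def __init__(self, lus_dim=512, clin_dim=32, shared_dim=128, temperature=0.1):
        super().__init__()
        self.proj_lus = nn.Linear(lus_dim, shared_dim)    # image-branch projection
        self.proj_clin = nn.Linear(clin_dim, shared_dim)  # clinical-branch projection
        self.temperature = temperature

    def forward(self, lus_feat, clin_feat):
        # Project both modalities into a shared space and L2-normalize.
        z_img = F.normalize(self.proj_lus(lus_feat), dim=-1)
        z_cln = F.normalize(self.proj_clin(clin_feat), dim=-1)
        # Cosine-similarity logits between every image/clinical pair in the batch.
        logits = z_img @ z_cln.t() / self.temperature
        targets = torch.arange(lus_feat.size(0))  # the i-th pair belongs together
        # Symmetric loss: align image-to-clinical and clinical-to-image.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

align = ModalityAlignment()
loss = align(torch.randn(8, 512), torch.randn(8, 32))  # dummy patient batch
```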
Collapse
Affiliation(s)
- Wufeng Xue
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China.
| | - Chunyan Cao
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Jie Liu
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Yilian Duan
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Haiyan Cao
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Jian Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
| | - Xumin Tao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
| | - Zejian Chen
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
| | - Meng Wu
- Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
| | - Jinxiang Zhang
- Department of Emergency Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Hui Sun
- Department of Endocrinology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yang Jin
- Department of Respiratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Xin Yang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
| | - Ruobing Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
| | - Feixiang Xiang
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Yue Song
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Manjie You
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Wen Zhang
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Lili Jiang
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Ziming Zhang
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Shuangshuang Kong
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Ying Tian
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Li Zhang
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China
| | - Dong Ni
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China.
| | - Mingxing Xie
- Department of Ultrasound, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, China.
| |
Collapse
|
46
|
Mondal MRH, Bharati S, Podder P. Diagnosis of COVID-19 Using Machine Learning and Deep Learning: A Review. Curr Med Imaging 2021; 17:1403-1418. [PMID: 34259149 DOI: 10.2174/1573405617666210713113439] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 03/29/2021] [Accepted: 04/08/2021] [Indexed: 02/08/2023]
Abstract
BACKGROUND This paper provides a systematic review of the application of Artificial Intelligence (AI), in the form of Machine Learning (ML) and Deep Learning (DL) techniques, in fighting the effects of the novel coronavirus disease (COVID-19). OBJECTIVE & METHODS The objective is to perform a scoping review on AI for COVID-19 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search was performed for relevant studies published from 1 January 2020 to 27 March 2021. Out of 4050 research papers from reputable publishers, 440 articles were selected for full-text review based on the keywords AI, COVID-19, ML, forecasting, DL, X-ray, and Computed Tomography (CT). Finally, 52 articles were included in the result synthesis of this paper. As part of the review, different ML regression methods for predicting the numbers of confirmed cases and deaths were reviewed first. Secondly, a comprehensive survey was carried out on the use of ML in classifying COVID-19 patients. Thirdly, different medical imaging datasets were compared in terms of the number of images, the number of positive samples, and the number of classes. The different stages of diagnosis, including preprocessing, segmentation, and feature extraction, were also reviewed. Fourthly, the performance results of different research papers were compared to evaluate the effectiveness of DL methods on different datasets. RESULTS Results show that the residual neural network (ResNet-18) and the densely connected convolutional network (DenseNet-169) exhibit excellent classification accuracy for X-ray images, while DenseNet-201 has the maximum accuracy in classifying CT scan images. This indicates that ML and DL are useful tools for assisting researchers and medical professionals in predicting, screening, and detecting COVID-19. CONCLUSION Finally, this review highlights the existing challenges, including regulations, noisy data, data privacy, and the lack of reliable large datasets, and then provides future research directions for applying AI to the management of COVID-19.
Collapse
Affiliation(s)
| | - Subrato Bharati
- Institute of ICT, Bangladesh University of Engineering and Technology, Dhaka-1205, Bangladesh
| | - Prajoy Podder
- Institute of ICT, Bangladesh University of Engineering and Technology, Dhaka-1205, Bangladesh
| |
Collapse
|
47
|
AI-driven quantification, staging and outcome prediction of COVID-19 pneumonia. Med Image Anal 2020; 67:101860. [PMID: 33171345 PMCID: PMC7558247 DOI: 10.1016/j.media.2020.101860] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 08/24/2020] [Accepted: 09/29/2020] [Indexed: 12/11/2022]
Abstract
Coronavirus disease 2019 (COVID-19) emerged in 2019 and spread rapidly around the world. Computed tomography (CT) imaging has been proven to be an important tool for screening, disease quantification, and staging. The latter is of extreme importance for organizational anticipation (availability of intensive care unit beds, patient management planning) as well as for accelerating drug development through rapid, reproducible, and quantified assessment of treatment response. Although there are currently no specific guidelines for staging patients, CT is used together with some clinical and biological biomarkers. In this study, we collected a multi-center cohort and investigated the use of medical imaging and artificial intelligence for disease quantification, staging, and outcome prediction. Our approach relies on automatic deep learning-based disease quantification using an ensemble of architectures, and on a data-driven consensus for staging and outcome prediction that fuses imaging biomarkers with clinical and biological attributes. Highly promising results on multiple external/independent evaluation cohorts, as well as comparisons with expert human readers, demonstrate the potential of our approach.
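The quantification step can be pictured with a minimal sketch of ensemble-based lesion-burden estimation: several segmentation networks vote on each voxel and the averaged consensus within the lungs yields a disease-extent fraction. The toy models, the 0.5 threshold, and the tensor shapes are illustrative assumptions, not the study's actual ensemble.

```python
# Hedged sketch: soft-consensus ensemble quantification of COVID-19 lesion burden.
import torch
import torch.nn as nn

def ensemble_disease_extent(members, ct_volume, lung_mask, threshold=0.5):
    # Average sigmoid probability maps from all ensemble members (soft consensus).
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(ct_volume)) for m in members]).mean(dim=0)
    lesion = (probs > threshold) & (lung_mask > 0)
    # Disease extent = lesion voxels as a fraction of lung voxels.
    return lesion.sum().item() / max(lung_mask.sum().item(), 1)

# Two toy 3D "segmentation networks" standing in for the real architectures.
members = [nn.Conv3d(1, 1, kernel_size=3, padding=1) for _ in range(2)]
volume = torch.randn(1, 1, 16, 64, 64)   # (batch, channel, depth, H, W)
lungs = torch.ones(1, 1, 16, 64, 64)     # dummy lung mask
print(ensemble_disease_extent(members, volume, lungs))
```

Such a scalar extent score is the kind of imaging biomarker that can then be fused with clinical and biological attributes for staging and outcome prediction.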
Collapse
|
48
|
Mohammed A, Wang C, Zhao M, Ullah M, Naseem R, Wang H, Pedersen M, Cheikh FA. Weakly-Supervised Network for Detection of COVID-19 in Chest CT Scans. IEEE Access 2020; 8:155987-156000. [PMID: 34812352 PMCID: PMC8545309 DOI: 10.1109/access.2020.3018498] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 08/10/2020] [Indexed: 05/02/2023]
Abstract
Deep Learning-based chest Computed Tomography (CT) analysis has been proven to be effective and efficient for COVID-19 diagnosis. Existing deep learning approaches rely heavily on large labeled data sets, which are difficult to acquire in this pandemic situation. Therefore, weakly-supervised approaches are in demand. In this paper, we propose an end-to-end weakly-supervised COVID-19 detection approach, ResNext+, that requires only volume-level labels and can provide slice-level predictions. The proposed approach incorporates a lung segmentation mask as well as spatial and channel attention to extract spatial features. In addition, Long Short-Term Memory (LSTM) is used to capture the axial dependency of the slices. Moreover, a slice attention module is applied before the final fully connected layer to generate the slice-level prediction without additional supervision. An ablation study is conducted to show the effectiveness of the attention blocks and the segmentation mask block. Experimental results, obtained on publicly available datasets, show a precision of 81.9% and an F1 score of 81.4%. The closest state-of-the-art gives 76.7% precision and a 78.8% F1 score. The 5% improvement in precision and 3% in the F1 score demonstrate the effectiveness of the proposed method. It is worth noting that applying image enhancement approaches does not improve the performance of the proposed method and sometimes even harms the scores, although the enhanced images have better perceptual quality.
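The weak-supervision idea can be sketched as attention-weighted pooling over per-slice features: an LSTM adds axial context and a slice-attention layer aggregates the slices into one volume-level logit while exposing per-slice weights. This is a minimal sketch in the spirit of ResNext+, with assumed feature sizes, not the authors' network.

```python
# Hedged sketch: volume-level COVID-19 classification from slice features with
# LSTM axial context and slice attention (weak supervision from volume labels only).
import torch
import torch.nn as nn

class SliceAttentionClassifier(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores one attention weight per slice
        self.head = nn.Linear(2 * hidden, 1)   # volume-level COVID-19 logit

    def forward(self, slice_feats):             # slice_feats: (batch, n_slices, feat_dim)
        h, _ = self.lstm(slice_feats)            # axial dependency across slices
        weights = torch.softmax(self.attn(h), dim=1)   # (batch, n_slices, 1)
        volume_repr = (weights * h).sum(dim=1)   # attention-weighted pooling
        # The attention weights double as slice-level predictions without slice labels.
        return self.head(volume_repr), weights.squeeze(-1)

model = SliceAttentionClassifier()
logit, slice_weights = model(torch.randn(2, 40, 256))  # 2 dummy volumes, 40 slices each
```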
Collapse
Affiliation(s)
- Ahmed Mohammed
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Congcong Wang
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Meng Zhao
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Mohib Ullah
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Rabia Naseem
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Hao Wang
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Marius Pedersen
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| | - Faouzi Alaya Cheikh
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
| |
Collapse
|