1
Dang Y, Ma W, Luo X, Wang H. CAD-Unet: A capsule network-enhanced Unet architecture for accurate segmentation of COVID-19 lung infections from CT images. Med Image Anal 2025; 103:103583. [PMID: 40306203] [DOI: 10.1016/j.media.2025.103583]
Abstract
Since the outbreak of the COVID-19 pandemic in 2019, medical imaging has emerged as a primary modality for diagnosing COVID-19 pneumonia. In clinical settings, the segmentation of lung infections from computed tomography images enables rapid and accurate quantification and diagnosis of COVID-19. Segmentation of COVID-19 infections in the lungs poses a formidable challenge, primarily due to the indistinct boundaries and limited contrast presented by ground glass opacity manifestations. Moreover, the confounding similarity among infiltrates, lung tissues, and lung walls further complicates this segmentation task. To address these challenges, this paper introduces a novel deep network architecture, called CAD-Unet, for segmenting COVID-19 lung infections. In this architecture, capsule networks are incorporated into the existing Unet framework. Capsule networks represent a novel type of network architecture that differs from traditional convolutional neural networks. They utilize vectors for information transfer among capsules, facilitating the extraction of intricate lesion spatial information. Additionally, we design a capsule encoder path and establish a coupling path between the Unet encoder and the capsule encoder. This design maximizes the complementary advantages of both network structures while achieving efficient information fusion. Finally, extensive experiments are conducted on four publicly available datasets, encompassing both binary and multi-class segmentation tasks. The experimental results demonstrate the superior segmentation performance of the proposed model. The code has been released at: https://github.com/AmanoTooko-jie/CAD-Unet.
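As background for the capsule mechanism referenced above, the sketch below illustrates the generic squash non-linearity and routing-by-agreement used in capsule networks. It is a minimal illustration of the idea, not the CAD-Unet implementation; the tensor shapes and iteration count are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squash non-linearity: preserves vector orientation, maps its norm into (0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    # Generic routing-by-agreement.
    # u_hat: prediction vectors, shape (batch, in_caps, out_caps, out_dim).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                              # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # weighted sum over input capsules
        v = squash(s)                                        # (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v

# Toy usage: 8 input capsules routed to 4 output capsules of dimension 16.
u_hat = torch.randn(2, 8, 4, 16)
print(dynamic_routing(u_hat).shape)  # torch.Size([2, 4, 16])
```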
Affiliation(s)
- Yijie Dang: School of Information Engineering, Ningxia University, Yinchuan, 750021, Ningxia, China
- Weijun Ma: School of Information Engineering, Ningxia University, Yinchuan, 750021, Ningxia, China; Ningxia Key Laboratory of Artificial Intelligence and Information Security for Channeling Computing Resources from the East to the West, Ningxia University, Yinchuan, 750021, Ningxia, China
- Xiaohu Luo: School of Mathematics and Computer Science, Ningxia Normal University, Guyuan, 756099, China
- Huaizhu Wang: School of Advanced Interdisciplinary Studies, Ningxia University, Zhongwei, 755000, China
2
Chu Y, Wang J, Xiong Y, Gao Y, Liu X, Luo G, Gao X, Zhao M, Huang C, Qiu Z, Meng X. Point-annotation supervision for robust 3D pulmonary infection segmentation by CT-based cascading deep learning. Comput Biol Med 2025; 187:109760. [PMID: 39923589] [DOI: 10.1016/j.compbiomed.2025.109760]
Abstract
Infected region segmentation is crucial for pulmonary infection diagnosis, severity assessment, and monitoring treatment progression. High-performance segmentation methods rely heavily on fully annotated, large-scale training datasets. However, manual labeling for pulmonary infections demands substantial investments of time and labor. While weakly supervised learning can greatly reduce annotation efforts, previous developments have focused mainly on natural or medical images with distinct boundaries and consistent textures. These approaches are not applicable to pulmonary infection segmentation, which must contend with high topological and intensity variations, irregular and ambiguous boundaries, and poor contrast in 3D contexts. In this study, we propose a cascading point-annotation framework to segment pulmonary infections, enabling optimization on larger datasets and superior performance on external data. By comparing the representations of annotated points and unlabeled voxels, as well as establishing global uncertainty, we develop two regularization strategies to constrain the network to a more holistic lesion pattern understanding under sparse annotations. We further incorporate an enhancement module to improve global anatomical perception and adaptability to spatial anisotropy, alongside a texture-aware variational module to determine more regionally consistent boundaries based on common textures of infection. Experiments on a large dataset of 1,072 CT volumes demonstrate that our method outperforms state-of-the-art weakly supervised approaches by approximately 3%-6% in Dice score and is comparable to fully supervised methods on external datasets. Moreover, our approach demonstrates robust performance even when applied to an unseen infection subtype, Mycoplasma pneumoniae, which was not included in the training datasets. These results collectively underscore its rapid and promising applicability for emerging pulmonary infections.
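Since the reported gains are expressed in Dice score, a minimal reference implementation of the Dice similarity coefficient for binary 3D masks is sketched below; the toy volumes are illustrative only.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks (any shape, e.g. 3D CT volumes).
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy check on two overlapping 3D masks.
a = np.zeros((4, 8, 8), dtype=bool); a[:, 2:6, 2:6] = True
b = np.zeros((4, 8, 8), dtype=bool); b[:, 3:7, 3:7] = True
print(dice_coefficient(a, b))  # ~0.5625
```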
Affiliation(s)
- Yuetan Chu: Center of Excellence for Smart Health (KCSH), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Jianpeng Wang: The Department of Critical Care Medicine, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yaxin Xiong: The Department of Critical Care Medicine, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yuan Gao: The Department of Critical Care Medicine, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xin Liu: The Department of Prosthodontics, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Gongning Luo: Center of Excellence for Smart Health (KCSH), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Xin Gao: Center of Excellence for Smart Health (KCSH), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Mingyan Zhao: The Department of Critical Care Medicine, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chao Huang: Ningbo Institute of Information Technology Application, Chinese Academy of Sciences (CAS), Ningbo, China
- Zhaowen Qiu: College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Xianglin Meng: The Department of Critical Care Medicine, First Affiliated Hospital of Harbin Medical University, Harbin, China; The Cancer Institute and Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
3
Oliveira ADS, Costa MGF, Costa JPGF, Costa Filho CFF. Comparing Different Data Partitioning Strategies for Segmenting Areas Affected by COVID-19 in CT Scans. Diagnostics (Basel) 2024; 14:2791. [PMID: 39767152] [PMCID: PMC11674714] [DOI: 10.3390/diagnostics14242791]
Abstract
BACKGROUND/OBJECTIVES According to the World Health Organization, the gold standard for diagnosing COVID-19 is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. However, to confirm the diagnosis in patients who have negative results but still show symptoms, imaging tests, especially computed tomography (CT), are used. In this study, using convolutional neural networks and both manual and automatic lung segmentation, we compared: (1) the performance of automatic segmentation of COVID-19 areas under two data-partitioning strategies, the CT-scan strategy and the slice strategy; (2) the performance of automatic COVID-19 segmentation against the interobserver agreement between two groups of radiologists; and (3) the performance in estimating the area affected by COVID-19. METHODS Two datasets and two deep neural network architectures are used to evaluate the automatic segmentation of lungs and COVID-19 areas. The performance of the U-Net architecture is compared with the performance of a new architecture proposed by the research group. RESULTS With automatic lung segmentation, the Dice metrics for the segmentation of the COVID-19 area were 73.01 ± 9.47% and 84.66 ± 5.41% for the CT-scan strategy and slice strategy, respectively. With manual lung segmentation, the Dice metrics for the automatic segmentation of COVID-19 were 74.47 ± 9.94% and 85.35 ± 5.41% for the CT-scan and the slice strategy, respectively. CONCLUSIONS The main conclusions were as follows: COVID-19 segmentation was slightly better for the slice strategy than for the CT-scan strategy; and a comparison of the performance of automatic COVID-19 segmentation with the interobserver agreement, in a group of 7 CT scans, revealed no statistically significant difference for any metric.
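The two data-partitioning strategies compared above can be illustrated with a generic scikit-learn sketch: slice-level splitting shuffles individual slices, while CT-scan-level splitting keeps all slices of a scan in the same subset. The scan and slice counts below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

# Hypothetical index: 10 CT scans, each contributing 50 slices.
slice_ids = np.arange(10 * 50)
scan_ids = np.repeat(np.arange(10), 50)          # which CT scan each slice belongs to

# "Slice strategy": slices are shuffled freely, so one scan can appear in both sets.
train_sl, test_sl = train_test_split(slice_ids, test_size=0.2, random_state=0)

# "CT-scan strategy": all slices of a scan stay together (no scan-level leakage).
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_sc, test_sc = next(splitter.split(slice_ids, groups=scan_ids))

print(len(set(scan_ids[train_sl]) & set(scan_ids[test_sl])))  # > 0: scans shared across sets
print(len(set(scan_ids[train_sc]) & set(scan_ids[test_sc])))  # 0: scans are disjoint
```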
Affiliation(s)
- Anne de Souza Oliveira: R&D Center in Electronic and Information Technology, Federal University of Amazonas, Manaus 69077-000, Brazil
- Marly Guimarães Fernandes Costa: R&D Center in Electronic and Information Technology, Federal University of Amazonas, Manaus 69077-000, Brazil
4
Alshemaimri BK. Novel Deep CNNs Explore Regions, Boundaries, and Residual Learning for COVID-19 Infection Analysis in Lung CT. Tomography 2024; 10:1205-1221. [PMID: 39195726] [PMCID: PMC11359787] [DOI: 10.3390/tomography10080091]
Abstract
COVID-19 poses a global health crisis, necessitating precise diagnostic methods for timely containment. However, accurately delineating COVID-19-affected regions in lung CT scans is challenging due to contrast variations and significant texture diversity. In this regard, this study introduces a novel two-stage classification and segmentation CNN approach for COVID-19 lung radiological pattern analysis. A novel Residual-BRNet is developed to integrate boundary and regional operations with residual learning, capturing key COVID-19 radiological homogeneous regions, texture variations, and structural contrast patterns in the classification stage. Subsequently, CT images classified as infected undergo lesion segmentation using the newly proposed RESeg segmentation CNN in the second stage. The RESeg leverages both average- and max-pooling implementations to simultaneously learn region homogeneity and boundary-related patterns. Furthermore, novel pixel attention (PA) blocks are integrated into RESeg to effectively address mildly COVID-19-infected regions. The evaluation of the proposed Residual-BRNet CNN in the classification stage demonstrates promising performance metrics, achieving an accuracy of 97.97%, F1-score of 98.01%, sensitivity of 98.42%, and MCC of 96.81%. Meanwhile, PA-RESeg in the segmentation phase achieves optimal segmentation performance with an IoU score of 98.43% and a Dice similarity score of 95.96% for the lesion region. The framework's effectiveness in detecting and segmenting COVID-19 lesions highlights its potential for clinical applications.
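The pixel attention (PA) idea mentioned above can be illustrated with a generic gating block that re-weights every spatial location of a feature map; this is a minimal sketch of the general mechanism, not the exact PA block proposed in the paper, and the channel count is an assumption.

```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    # Generic pixel attention: a 1x1 conv produces a per-pixel gate in (0, 1)
    # that re-weights the feature map, emphasising faint (e.g. mildly infected) regions.
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

x = torch.randn(1, 64, 32, 32)
print(PixelAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```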
Affiliation(s)
- Bader Khalid Alshemaimri: Software Engineering Department, College of Computing and Information Sciences, King Saud University, Riyadh 11671, Saudi Arabia
5
Kumar S, Bhowmik B. Automated Segmentation of COVID-19 Infected Lungs via Modified U-Net Model. 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT) 2024:1-7. [DOI: 10.1109/icccnt61001.2024.10724997]
Affiliation(s)
- Sunil Kumar: National Institute of Technology, Surathkal, Maharshi Patanjali CPS Lab BRICS Laboratory, Department of Computer Science and Engineering, Mangalore, Karnataka, Bharat, 575025
- Biswajit Bhowmik: National Institute of Technology, Surathkal, Maharshi Patanjali CPS Lab BRICS Laboratory, Department of Computer Science and Engineering, Mangalore, Karnataka, Bharat, 575025
6
Bougourzi F, Dornaika F, Distante C, Taleb-Ahmed A. D-TrAttUnet: Toward hybrid CNN-transformer architecture for generic and subtle segmentation in medical images. Comput Biol Med 2024; 176:108590. [PMID: 38763066] [DOI: 10.1016/j.compbiomed.2024.108590]
Abstract
Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complicated diseases increase and the number of cases rises, the role of machine-based imaging analysis has become indispensable. It serves as both a tool and an assistant to medical experts, providing valuable insights and guidance. A particularly challenging task in this area is lesion segmentation, a task that is challenging even for experienced radiologists. The complexity of this task highlights the urgent need for robust machine learning approaches to support medical staff. In response, we present our novel solution: the D-TrAttUnet architecture. This framework is based on the observation that different diseases often target specific organs. Our architecture includes an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The Dual-Decoder configuration uses two identical decoders, each with attention gates. This allows the model to simultaneously segment lesions and organs and integrate their segmentation losses. To validate our approach, we performed evaluations on the Covid-19 and Bone Metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder in the segmentation of glands and nuclei. The results confirmed the superiority of our approach, especially in Covid-19 infections and the segmentation of bone metastases. In addition, the hybrid encoder showed exceptional performance in the segmentation of glands and nuclei, solidifying its role in modern medical image analysis.
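One way to picture the dual-decoder training described above is a joint objective that sums a segmentation loss per decoder (lesion and organ). The sketch below uses a common BCE-plus-Dice formulation with illustrative weights; the paper's actual loss terms and weighting may differ.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss for binary segmentation; target is a float mask of the same shape.
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def joint_loss(lesion_logits, organ_logits, lesion_gt, organ_gt, w_lesion=1.0, w_organ=1.0):
    # Sum of per-decoder segmentation losses (BCE + Dice each); weights are illustrative.
    l_lesion = bce(lesion_logits, lesion_gt) + dice_loss(lesion_logits, lesion_gt)
    l_organ = bce(organ_logits, organ_gt) + dice_loss(organ_logits, organ_gt)
    return w_lesion * l_lesion + w_organ * l_organ

# Toy usage with random logits and masks.
ll, ol = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
lg, og = torch.randint(0, 2, (2, 1, 64, 64)).float(), torch.randint(0, 2, (2, 1, 64, 64)).float()
print(joint_loss(ll, ol, lg, og).item())
```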
Affiliation(s)
- Fares Bougourzi: Junia, UMR 8520, CNRS, Centrale Lille, University of Polytechnique Hauts-de-France, 59000 Lille, France
- Fadi Dornaika: University of the Basque Country UPV/EHU, San Sebastian, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
- Cosimo Distante: Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy
- Abdelmalik Taleb-Ahmed: Université Polytechnique Hauts-de-France, Université de Lille, CNRS, Valenciennes, 59313, Hauts-de-France, France
7
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Transfer learning for anatomical structure segmentation in otorhinolaryngology microsurgery. Int J Med Robot 2024; 20:e2634. [PMID: 38767083] [DOI: 10.1002/rcs.2634]
Abstract
BACKGROUND Reducing the annotation burden is an active and meaningful area of artificial intelligence (AI) research. METHODS Multiple datasets for the segmentation of two landmarks were constructed based on 41 257 labelled images and 6 different microsurgical scenarios. These datasets were trained using the multi-stage transfer learning (TL) methodology. RESULTS The multi-stage TL enhanced segmentation performance over baseline (mIOU 0.6892 vs. 0.8869). Besides, Convolutional Neural Networks (CNNs) achieved a robust performance (mIOU 0.8917 vs. 0.8603) even when the training dataset size was reduced from 90% (30 078 images) to 10% (3342 images). When directly applying the weight from one certain surgical scenario to recognise the same target in images of other scenarios without training, CNNs still obtained an optimal mIOU of 0.6190 ± 0.0789. CONCLUSIONS Model performance can be improved with TL in datasets with reduced size and increased complexity. It is feasible for data-based domain adaptation among different microsurgical fields.
Affiliation(s)
- Xin Ding: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yu Huang: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yang Zhao: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Xu Tian: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Guodong Feng: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Zhiqiang Gao: Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
8
Zhang Z, Wen Y, Zhang X, Ma Q. CI-UNet: melding convnext and cross-dimensional attention for robust medical image segmentation. Biomed Eng Lett 2024; 14:341-353. [PMID: 38374903] [PMCID: PMC10874369] [DOI: 10.1007/s13534-023-00341-4]
Abstract
Deep learning-based methods have recently shown great promise in medical image segmentation tasks. However, CNN-based frameworks struggle with inadequate long-range spatial dependency capture, whereas Transformers suffer from computational inefficiency and necessitate substantial volumes of labeled data for effective training. To tackle these issues, this paper introduces CI-UNet, a novel architecture that utilizes ConvNeXt as its encoder, combining computational efficiency with strong feature extraction capabilities. Moreover, an advanced attention mechanism is proposed to capture intricate cross-dimensional interactions and global context. Extensive experiments on two segmentation datasets, namely BCSD and CT2USforKidneySeg, confirm the excellent performance of the proposed CI-UNet compared with other segmentation methods.
Affiliation(s)
- Zhuo Zhang: School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387, China
- Yihan Wen: International School of Information Science and Engineering, Dalian University of Technology, Dalian, 116620, Liaoning, China
- Xiaochen Zhang: Tianjin Cerebral Vascular and Neural Degenerative Disease Key Laboratory, Tianjin Huanhu Hospital, Tianjin, 300350, China
- Quanfeng Ma: Tianjin Cerebral Vascular and Neural Degenerative Disease Key Laboratory, Tianjin Huanhu Hospital, Tianjin, 300350, China
9
Wang J, Yang X, Jia X, Xue W, Chen R, Chen Y, Zhu X, Liu L, Cao Y, Zhou J, Ni D, Gu N. Thyroid ultrasound diagnosis improvement via multi-view self-supervised learning and two-stage pre-training. Comput Biol Med 2024; 171:108087. [PMID: 38364658] [DOI: 10.1016/j.compbiomed.2024.108087]
Abstract
Thyroid nodule classification and segmentation in ultrasound images are crucial for computer-aided diagnosis; however, they face limitations owing to insufficient labeled data. In this study, we proposed a multi-view contrastive self-supervised method to improve thyroid nodule classification and segmentation performance with limited manual labels. Our method aligns the transverse and longitudinal views of the same nodule, thereby enabling the model to focus more on the nodule area. We designed an adaptive loss function that eliminates the limitations of the paired data. Additionally, we adopted a two-stage pre-training to exploit the pre-training on ImageNet and thyroid ultrasound images. Extensive experiments were conducted on a large-scale dataset collected from multiple centers. The results showed that the proposed method significantly improves nodule classification and segmentation performance with limited manual labels and outperforms state-of-the-art self-supervised methods. The two-stage pre-training also significantly exceeded ImageNet pre-training.
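The multi-view alignment described above can be illustrated with a generic InfoNCE-style contrastive loss over paired transverse/longitudinal embeddings. This is a standard formulation, not the paper's adaptive loss; the temperature and embedding size are assumptions.

```python
import torch
import torch.nn.functional as F

def paired_view_infonce(z_a, z_b, temperature=0.1):
    # Pulls together embeddings of the two views of the same nodule and pushes apart
    # embeddings of different nodules in the batch.
    # z_a, z_b: (batch, dim) embeddings of the transverse / longitudinal views.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                     # (batch, batch) cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)    # matching pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

print(paired_view_infonce(torch.randn(8, 128), torch.randn(8, 128)).item())
```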
Affiliation(s)
- Jian Wang: Key Laboratory for Bio-Electromagnetic Environment and Advanced Medical Theranostics, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, 211166, China
- Xin Yang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Xiaohong Jia: Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, 200025, China
- Wufeng Xue: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Rusi Chen: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Yanlin Chen: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Xiliang Zhu: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Lian Liu: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Yan Cao: Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, 518051, China
- Jianqiao Zhou: Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, 200025, China
- Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518073, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, 518073, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, 518073, China
- Ning Gu: Key Laboratory for Bio-Electromagnetic Environment and Advanced Medical Theranostics, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, 211166, China; Cardiovascular Disease Research Center, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Medical School, Nanjing University, Nanjing, 210093, China
10
Murmu A, Kumar P. GIFNet: an effective global infection feature network for automatic COVID-19 lung lesions segmentation. Med Biol Eng Comput 2024. [PMID: 38308670] [DOI: 10.1007/s11517-024-03024-z]
Abstract
The ongoing COronaVIrus Disease 2019 (COVID-19) pandemic, caused by the SARS-CoV-2 virus, spread worldwide in early 2019, bringing about an existential health catastrophe. Automatic segmentation of infected lungs from COVID-19 X-ray and computed tomography (CT) images helps to generate a quantitative approach for treatment and diagnosis. The multi-class information about the infected lung is often obtained from the patient's CT dataset. However, the main challenge is the extensive range of infected features and the lack of contrast between infected and normal areas. To resolve these issues, a novel Global Infection Feature Network (GIFNet)-based Unet with ResNet50 model is proposed for segmenting the locations of COVID-19 lung infections. The Unet layers are used to extract features from input images and select the region of interest (ROI), with the ResNet50 backbone used to speed up training. Moreover, integrating the pooling layer into the atrous spatial pyramid pooling (ASPP) mechanism in the bottleneck helps with better feature selection and handles scale variation during training. Furthermore, the partial differential equation (PDE) approach is used to enhance the image quality and intensity values for particular ROI boundary edges in the COVID-19 images. The proposed scheme has been validated on two datasets, namely the SARS-CoV-2 CT scan and COVIDx-19, for infected lung segmentation (ILS). The experimental findings have been subjected to a comprehensive analysis using various evaluation metrics, including accuracy (ACC), area under curve (AUC), recall (REC), specificity (SPE), dice similarity coefficient (DSC), mean absolute error (MAE), precision (PRE), and mean squared error (MSE), to ensure rigorous validation. The results demonstrate the superior performance of the proposed system compared to the state-of-the-art (SOTA) segmentation models on both X-ray and CT datasets.
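For reference, a minimal atrous spatial pyramid pooling (ASPP) block of the kind mentioned above is sketched below; the dilation rates and channel sizes are illustrative assumptions rather than the GIFNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    # Minimal ASPP: parallel dilated convolutions plus a global-pooling branch,
    # concatenated and fused by a 1x1 conv to capture lesions at different scales.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.pool_branch = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))
        self.fuse = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool_branch(x), size=x.shape[2:], mode="bilinear",
                               align_corners=False)
        return self.fuse(torch.cat(feats + [pooled], dim=1))

print(ASPP(256, 64)(torch.randn(1, 256, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```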
Affiliation(s)
- Anita Murmu: Computer Science and Engineering Department, National Institute of Technology Patna, Ashok Rajpath, Patna, Bihar, 800005, India
- Piyush Kumar: Computer Science and Engineering Department, National Institute of Technology Patna, Ashok Rajpath, Patna, Bihar, 800005, India
11
Amgothu S, Koppu S. COVID-19 prediction using Caviar Squirrel Jellyfish Search Optimization technique in fog-cloud based architecture. PLoS One 2023; 18:e0295599. [PMID: 38127990] [PMCID: PMC10735048] [DOI: 10.1371/journal.pone.0295599]
Abstract
During the COVID-19 pandemic, patients approach the hospital for treatment, yet because of extremely long queues a patient may receive care only after waiting for more than an hour. Wearable devices directly capture and store preliminary patient data, and storing these data requires large storage capacity in hospitals, which makes data handling more complex. To bridge this gap, a potent fog-cloud-based scheme for COVID-19 prediction, named Caviar Squirrel Jellyfish Search Optimization (CSJSO), is established. Here, CSJSO is the amalgamation of the CAViar Squirrel Search Algorithm (CSSA) and Jellyfish Search Optimization (JSO), where CSSA blends the Conditional Autoregressive Value-at-Risk (CAViar) model with the Squirrel Search Algorithm (SSA). This architecture comprises the healthcare IoT sensor layer, the fog layer and the cloud layer. In the healthcare IoT sensor layer, the routing process with the collection of patient health condition data is carried out. In the fog layer, COVID-19 detection is performed by employing a Deep Neuro Fuzzy Network (DNFN) trained by the proposed Remora Namib Beetle JSO (RNBJSO). Here, RNBJSO is the combination of Namib Beetle Optimization (NBO), the Remora Optimization Algorithm (ROA) and Jellyfish Search Optimization (JSO). Finally, in the cloud layer, the detection of COVID-19 is performed employing a Deep Long Short Term Memory (Deep LSTM) network trained utilizing the proposed CSJSO. The evaluation measures for CSJSO_Deep LSTM on database-1, Mean Squared Error (MSE) and Root Mean Squared Error (RMSE), reached 0.062 and 0.252, respectively, for confirmed cases. On database-2, the accuracy, sensitivity and specificity achieved were 0.925, 0.928 and 0.925 on the K-set.
Affiliation(s)
- Shanthi Amgothu: School of Computer Science Engineering and Information Systems, Vellore, India
- Srinivas Koppu: School of Computer Science Engineering and Information Systems, Vellore, India
12
Buongiorno R, Del Corso G, Germanese D, Colligiani L, Python L, Romei C, Colantonio S. Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models. J Imaging 2023; 9:283. [PMID: 38132701] [PMCID: PMC10744014] [DOI: 10.3390/jimaging9120283]
Abstract
Imaging plays a key role in the clinical management of Coronavirus disease 2019 (COVID-19), as the imaging findings reflect the pathological process in the lungs. The visual analysis of High-Resolution Computed Tomography of the chest allows for the differentiation of parenchymal abnormalities of COVID-19, which must be detected and quantified to obtain accurate disease stratification and prognosis. However, visual assessment and quantification represent a time-consuming task for radiologists. In this regard, tools for semi-automatic segmentation, such as those based on Convolutional Neural Networks, can facilitate the detection of pathological lesions by delineating their contour. In this work, we compared four state-of-the-art Convolutional Neural Networks based on the encoder-decoder paradigm for the binary segmentation of COVID-19 infections after training and testing them on 90 HRCT volumetric scans of patients diagnosed with COVID-19 collected from the database of the Pisa University Hospital. More precisely, we started from a basic model, the well-known UNet, then we added an attention mechanism to obtain an Attention-UNet, and finally we employed a recurrence paradigm to create a Recurrent-Residual UNet (R2-UNet). In the latter case, we also added attention gates to the decoding path of an R2-UNet, thus designing an R2-Attention UNet so as to make the feature representation and accumulation more effective. We compared them to understand both which mechanism leads a neural model to the best performance for this task and what constitutes a good compromise between the amount of data, time, and computational resources required. We set up a five-fold cross-validation and assessed the strengths and limitations of these models by evaluating the performances in terms of Dice score, Precision, and Recall defined both on 2D images and on the entire 3D volume. From the results of the analysis, it can be concluded that Attention-UNet outperforms the other models by achieving the best 2D Dice score of 81.93% on the test set. Additionally, we conducted statistical analysis to assess the performance differences among the models. Our findings suggest that integrating the recurrence mechanism within the UNet architecture leads to a decline in the model's effectiveness for our particular application.
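The attention gates added to the decoding path can be illustrated with the standard additive attention gate from the Attention-UNet literature; the sketch below is generic and assumes the gating signal has already been resized to the skip feature's resolution.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    # Additive attention gate: the decoder's gating signal suppresses irrelevant
    # skip-connection features before they are concatenated in the decoder.
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, g):
        # x: skip features; g: gating signal already at x's spatial size.
        attn = self.psi(self.relu(self.theta(x) + self.phi(g)))
        return x * attn

x, g = torch.randn(1, 64, 32, 32), torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```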
Affiliation(s)
- Rossana Buongiorno: Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Giulio Del Corso: Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Danila Germanese: Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Leonardo Colligiani: Department of Translational Research, Academic Radiology, University of Pisa, 56124 Pisa, PI, Italy
- Lorenzo Python: 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, PI, Italy
- Chiara Romei: 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, PI, Italy
- Sara Colantonio: Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
13
Khan SH, Alahmadi TJ, Alsahfi T, Alsadhan AA, Mazroa AA, Alkahtani HK, Albanyan A, Sakr HA. COVID-19 infection analysis framework using novel boosted CNNs and radiological images. Sci Rep 2023; 13:21837. [PMID: 38071373] [PMCID: PMC10710448] [DOI: 10.1038/s41598-023-49218-7]
Abstract
COVID-19, a novel pathogen that emerged in late 2019, has the potential to cause pneumonia with unique variants upon infection. Hence, the development of efficient diagnostic systems is crucial in accurately identifying infected patients and effectively mitigating the spread of the disease. However, the task poses several challenges because of the limited availability of labeled data, distortion and complexity in image representation, and variations in contrast and texture. Therefore, a novel two-phase analysis framework has been developed to scrutinize the subtle irregularities associated with COVID-19 contamination. A new Convolutional Neural Network-based STM-BRNet is developed, which integrates the Split-Transform-Merge (STM) block and Feature map enrichment (FME) techniques in the first phase. The STM block captures boundary and regional-specific features essential for detecting COVID-19 infectious CT slices. Additionally, by incorporating the FME and Transfer Learning (TL) concept into the STM blocks, multiple enhanced channels are generated to effectively capture minute variations in illumination and texture specific to COVID-19-infected images. Moreover, residual multipath learning is used to improve the learning capacity of STM-BRNet and progressively increase the feature representation by boosting at a high level through TL. In the second phase of the analysis, the COVID-19 CT scans are processed using the newly developed SA-CB-BRSeg segmentation CNN to accurately delineate infection in the images. The SA-CB-BRSeg method utilizes a unique approach that combines smooth and heterogeneous processes in both the encoder and decoder. These operations are structured to effectively capture COVID-19 patterns, including region-homogeneous areas, texture variation, and borders. By incorporating these techniques, the SA-CB-BRSeg method demonstrates its ability to accurately analyze and segment COVID-19 related data. Furthermore, the SA-CB-BRSeg model incorporates the novel concept of CB in the decoder, where additional channels are combined using TL to enhance the learning of low contrast regions. The developed STM-BRNet and SA-CB-BRSeg models achieve impressive results, with an accuracy of 98.01%, recall of 98.12%, F-score of 98.11%, Dice similarity of 96.396%, and IoU of 98.85%. The proposed framework will alleviate the workload and enhance the radiologist's decision-making capacity in identifying the infected region of COVID-19 and evaluating the severity stages of the disease.
Affiliation(s)
- Saddam Hussain Khan: Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat, 19060, Pakistan
- Tahani Jaser Alahmadi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Tariq Alsahfi: Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Abeer Abdullah Alsadhan: Computer Science Department, Applied College, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Alanoud Al Mazroa: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Hend Khalid Alkahtani: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Abdullah Albanyan: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hesham A Sakr: Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
14
Lyu F, Ye M, Yip TCF, Wong GLH, Yuen PC. Local Style Transfer via Latent Space Manipulation for Cross-Disease Lesion Segmentation. IEEE J Biomed Health Inform 2023; PP:273-284. [PMID: 37883256] [DOI: 10.1109/jbhi.2023.3327726]
Abstract
Automatic lesion segmentation is important for assisting doctors in the diagnostic process. Recent deep learning approaches heavily rely on large-scale datasets, which are difficult to obtain in many clinical applications. Leveraging external labelled datasets is an effective solution to tackle the problem of insufficient training data. In this paper, we propose a new framework, namely LatenTrans, to utilize existing datasets for boosting the performance of lesion segmentation in extremely low data regimes. LatenTrans translates non-target lesions into target-like lesions and expands the training dataset with target-like data for better performance. Images are first projected to the latent space via aligned style-based generative models, and rich lesion semantics are encoded using the latent codes. A novel consistency-aware latent code manipulation module is proposed to enable high-quality local style transfer from non-target lesions to target-like lesions while preserving other parts. Moreover, we propose a new metric, Normalized Latent Distance, to solve the question of how to select an adequate one from various existing datasets for knowledge transfer. Extensive experiments are conducted on segmenting lung and brain lesions, and the experimental results demonstrate that our proposed LatenTrans is superior to existing methods for cross-disease lesion segmentation.
15
Du P, Niu X, Li X, Ying C, Zhou Y, He C, Lv S, Liu X, Du W, Wu W. Automatically transferring supervised targets method for segmenting lung lesion regions with CT imaging. BMC Bioinformatics 2023; 24:332. [PMID: 37667214] [PMCID: PMC10478337] [DOI: 10.1186/s12859-023-05435-5]
Abstract
BACKGROUND To present an approach that autonomously identifies and selects an optimal supervision target in order to enhance learning efficiency when segmenting infected regions of the lung from chest computed tomography images. We designed a semi-supervised dual-branch framework for training, where the training set consisted of limited expert-annotated data and a large amount of coarsely annotated data that was automatically segmented based on HU (Hounsfield unit) values; these data were used to train both the strong and weak branches. In addition, we employed the Lovász scoring method to automatically switch the supervision target in the weak branch and select the optimal target as the supervision object for training. This method can use noisy labels for rapid localization during the early stages of training, and gradually use more accurate targets for supervised training as training progresses. This approach can utilize a large number of samples that do not require manual annotation, and as training iterates, the noisy supervision targets become closer and closer to the finely annotated data, which significantly improves the accuracy of the final model. RESULTS The proposed dual-branch deep learning network based on semi-supervision together with cost-effective samples achieved mean Dice similarity coefficients (DSC) of 83.56 ± 12.10 and 82.67 ± 8.04 on our internal and external test benchmarks, respectively. Through experimental comparison, the DSC value of the proposed algorithm was improved by 13.54% and 2.02% on the internal benchmark and by 13.37% and 2.13% on the external benchmark compared with U-Net without extra sample assistance and with the mean-teacher algorithm, respectively. CONCLUSION The cost-effective pseudolabeled samples assisted the training of DL models and achieved much better results compared with traditional DL models with manually labeled samples only. Furthermore, our method also achieved the best performance compared with other up-to-date dual-branch structures.
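The coarse annotation step based on HU values can be pictured as a simple intensity window applied inside a lung mask, as in the NumPy sketch below; the specific HU window is an illustrative guess, not the thresholds used in the study.

```python
import numpy as np

def coarse_lesion_mask(ct_hu, lung_mask, hu_low=-750, hu_high=-300):
    # Rough pseudo-label: voxels inside the lung whose intensity falls in a window typical
    # of ground-glass opacity / consolidation. The HU window here is an assumption.
    ct_hu = np.asarray(ct_hu, dtype=np.float32)
    return (ct_hu >= hu_low) & (ct_hu <= hu_high) & lung_mask.astype(bool)

# Toy volume: mostly air (-1000 HU) with a denser patch inside the lung mask.
vol = np.full((8, 64, 64), -1000.0)
lungs = np.zeros(vol.shape, dtype=bool); lungs[:, 8:56, 8:56] = True
vol[3:5, 20:30, 20:30] = -500.0
print(coarse_lesion_mask(vol, lungs).sum())  # 200 voxels flagged
```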
Affiliation(s)
- Peng Du: Hangzhou AiSmartIoT Co., Ltd., Hangzhou, Zhejiang, China
- Xiaofeng Niu: Artificial Intelligence Lab, Hangzhou AiSmartVision Co., Ltd., Hangzhou, Zhejiang, China
- Xukun Li: Artificial Intelligence Lab, Hangzhou AiSmartVision Co., Ltd., Hangzhou, Zhejiang, China
- Chiqing Ying: State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, School of Medicine, Zhejiang University, 79 QingChun Road, Hangzhou, 310003, Zhejiang, China
- Yukun Zhou: Artificial Intelligence Lab, Hangzhou AiSmartVision Co., Ltd., Hangzhou, Zhejiang, China
- Chang He: State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, School of Medicine, Zhejiang University, 79 QingChun Road, Hangzhou, 310003, Zhejiang, China
- Shuangzhi Lv: Department of Radiology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Xiaoli Liu: State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, School of Medicine, Zhejiang University, 79 QingChun Road, Hangzhou, 310003, Zhejiang, China
- Weibo Du: State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, School of Medicine, Zhejiang University, 79 QingChun Road, Hangzhou, 310003, Zhejiang, China
- Wei Wu: State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, National Clinical Research Center for Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, School of Medicine, Zhejiang University, 79 QingChun Road, Hangzhou, 310003, Zhejiang, China
16
Fu Y, Xue P, Zhang Z, Dong E. PKA2-Net: Prior Knowledge-Based Active Attention Network for Accurate Pneumonia Diagnosis on Chest X-Ray Images. IEEE J Biomed Health Inform 2023; 27:3513-3524. [PMID: 37058372] [DOI: 10.1109/jbhi.2023.3267057]
Abstract
To accurately diagnose pneumonia patients on a limited annotated chest X-ray image dataset, a prior knowledge-based active attention network (PKA2-Net) was constructed. The PKA2-Net uses an improved ResNet as the backbone network and consists of residual blocks, novel subject enhancement and background suppression (SEBS) blocks, and candidate template generators, where the template generators are designed to generate candidate templates for characterizing the importance of different spatial locations in feature maps. The core of PKA2-Net is the SEBS block, which is proposed based on the prior knowledge that highlighting distinctive features and suppressing irrelevant features can improve the recognition effect. The purpose of the SEBS block is to generate active attention features without any high-level features and enhance the ability of the model to localize lung lesions. In the SEBS block, first, a series of candidate templates T with different spatial energy distributions is generated, and the controllability of the energy distribution in T enables active attention features to maintain the continuity and integrity of the feature space distributions. Second, Top-n templates are selected from T according to certain learning rules, and these are then processed by a convolution layer to generate supervision information that guides the inputs of the SEBS block to form active attention features. We evaluated PKA2-Net on the binary classification problem of identifying pneumonia versus healthy controls on a dataset containing 5856 chest X-ray images (ChestXRay2017). The results showed that our method can achieve 97.63% accuracy and 0.9872 sensitivity.
17
Shanthi A, Koppu S. Remora Namib Beetle Optimization Enabled Deep Learning for Severity of COVID-19 Lung Infection Identification and Classification Using CT Images. Sensors (Basel) 2023; 23:s23115316. [PMID: 37300043] [DOI: 10.3390/s23115316]
Abstract
Coronavirus disease 2019 (COVID-19) has caused a critical outbreak affecting both females and males worldwide. Automatic lung infection detection from medical imaging modalities offers great potential for improving the treatment of patients with COVID-19. COVID-19 detection from lung CT images is a rapid way of diagnosing patients. However, identifying infected tissue and segmenting it from CT images poses several challenges. Therefore, efficient techniques termed Remora Namib Beetle Optimization_Deep Quantum Neural Network (RNBO_DQNN) and RNBO_Deep Neuro Fuzzy Network (RNBO_DNFN) are introduced for the identification and classification of COVID-19 lung infection. Here, the pre-processing of lung CT images is performed utilizing an adaptive Wiener filter, whereas lung lobe segmentation is performed employing the Pyramid Scene Parsing Network (PSP-Net). Afterwards, features are extracted for the classification phase. In the first level of classification, DQNN is utilized, tuned by RNBO. Furthermore, RNBO is designed by merging the Remora Optimization Algorithm (ROA) and Namib Beetle Optimization (NBO). If an image is classified as COVID-19, a second-level classification is executed using DNFN for further classification. Additionally, DNFN is also trained by employing the newly proposed RNBO. Furthermore, the devised RNBO_DNFN achieved a maximum testing accuracy of 89.4%, with TNR and TPR of 89.5% and 87.5%, respectively.
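The adaptive Wiener filtering pre-processing step can be reproduced generically with SciPy, as sketched below; the window size is an illustrative choice rather than the paper's setting.

```python
import numpy as np
from scipy.signal import wiener

def denoise_ct_slice(slice_2d, window=5):
    # Adaptive Wiener filtering of a single CT slice; the window size is an assumption.
    slice_2d = np.asarray(slice_2d, dtype=np.float64)
    return wiener(slice_2d, mysize=window)

# Toy noisy slice.
noisy = np.random.normal(loc=1.0, scale=0.1, size=(128, 128))
print(denoise_ct_slice(noisy).shape)  # (128, 128)
```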
Affiliation(s)
- Amgothu Shanthi: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Srinivas Koppu: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
18
Qiao P, Li H, Song G, Han H, Gao Z, Tian Y, Liang Y, Li X, Zhou SK, Chen J. Semi-Supervised CT Lesion Segmentation Using Uncertainty-Based Data Pairing and SwapMix. IEEE Trans Med Imaging 2023; 42:1546-1562. [PMID: 37015649] [DOI: 10.1109/tmi.2022.3232572]
Abstract
Semi-supervised learning (SSL) methods show their powerful performance to deal with the issue of data shortage in the field of medical image segmentation. However, existing SSL methods still suffer from the problem of unreliable predictions on unannotated data due to the lack of manual annotations for them. In this paper, we propose an unreliability-diluted consistency training (UDiCT) mechanism to dilute the unreliability in SSL by assembling reliable annotated data into unreliable unannotated data. Specifically, we first propose an uncertainty-based data pairing module to pair annotated data with unannotated data based on a complementary uncertainty pairing rule, which avoids two hard samples being paired off. Secondly, we develop SwapMix, a mixed sample data augmentation method, to integrate annotated data into unannotated data for training our model in a low-unreliability manner. Finally, UDiCT is trained by minimizing a supervised loss and an unreliability-diluted consistency loss, which makes our model robust to diverse backgrounds. Extensive experiments on three chest CT datasets show the effectiveness of our method for semi-supervised CT lesion segmentation.
19
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. Comput Methods Programs Biomed 2023; 231:107408. [PMID: 36805279] [DOI: 10.1016/j.cmpb.2023.107408]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have been used for medical imaging for a long time, but they did not achieve their full potential in the past because of insufficient computing power and scarcity of training data. In recent years, we have seen substantial growth in DL networks because of improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of the segmentation target, and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system is based on two significant technical inventions. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network, for skin lesion segmentation using broad receptive fields and spatial edge attention feature fusion. We examine the trained model's generalization capabilities on skin lesion segmentation to quantify these two inventions. We cross-examine the model using two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION Numerous experiments reveal that our system produces excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
Affiliation(s)
- Meghana Karri: Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- Chandra Sekhara Rao Annavarapu: Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- U Rajendra Acharya: Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
20
Ding W, Abdel-Basset M, Hawash H, Pedrycz W. MIC-Net: A deep network for cross-site segmentation of COVID-19 infection in the fog-assisted IoMT. Inf Sci (N Y) 2023; 623:20-39. [PMID: 36532157] [PMCID: PMC9745980] [DOI: 10.1016/j.ins.2022.12.017]
Abstract
The automatic segmentation of COVID-19 pneumonia from a computerized tomography (CT) scan has become a major interest for scholars in developing a powerful diagnostic framework in the Internet of Medical Things (IoMT). Federated deep learning (FDL) is considered a promising approach for efficient and cooperative training from multi-institutional image data. However, non-independent and identically distributed (non-IID) data from health care remain a significant challenge, limiting the applicability of FDL in the real world. The variability in features incurred by different scanning protocols, scanners, or acquisition parameters produces a learning drift phenomenon during training, which impairs both the training speed and segmentation performance of the model. This paper proposes a novel FDL approach for reliable and efficient multi-institutional COVID-19 segmentation, called MIC-Net. MIC-Net consists of three main building modules: the down-sampler, the context enrichment (CE) module, and the up-sampler. The down-sampler was designed to effectively learn both local and global representations from input CT scans by combining the advantages of lightweight convolutional and attention modules. The context enrichment (CE) module is introduced to enable the network to capture the contextual representation that can be later exploited to enrich the semantic knowledge of the up-sampler through skip connections. To further tackle the inter-site heterogeneity within the model, the approach uses adaptive and switchable normalization (ASN) to adaptively choose the best normalization strategy according to the underlying data. A novel federated periodic selection protocol (FED-PCS) is proposed to fairly select the training participants according to their resource state, data quality, and local model loss. The results of an experimental evaluation of MIC-Net on three publicly available data sets show its robust performance, with an average Dice score of 88.90% and an average surface Dice of 87.53%.
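As a point of reference for the federated training discussed above, the sketch below shows plain weighted FedAvg aggregation of client parameters; MIC-Net's FED-PCS selection protocol and ASN normalization are not reproduced here, and the client weights are illustrative.

```python
import copy
import torch

def fedavg(client_state_dicts, client_weights):
    # Weighted FedAvg aggregation of client model parameters. The weights could reflect
    # dataset size or, in a selection protocol, resource/data-quality scores.
    total = float(sum(client_weights))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key].float()
                       for sd, w in zip(client_state_dicts, client_weights)) / total
    return avg

# Toy usage with two tiny "clients" sharing the same architecture.
net = torch.nn.Linear(4, 2)
clients = [copy.deepcopy(net).state_dict(), copy.deepcopy(net).state_dict()]
global_state = fedavg(clients, client_weights=[100, 300])
net.load_state_dict(global_state)
```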
Collapse
Affiliation(s)
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, China
- Faculty of Data Science, City University of Macau, Macau, China
| | | | | | - Witold Pedrycz
- Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6R 2V4, Canada
| |
Collapse
|
21
|
PDAtt-Unet: Pyramid Dual-Decoder Attention Unet for Covid-19 infection segmentation from CT-scans. Med Image Anal 2023; 86:102797. [PMID: 36966605 PMCID: PMC10027962 DOI: 10.1016/j.media.2023.102797] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 01/10/2023] [Accepted: 03/08/2023] [Indexed: 03/23/2023]
Abstract
Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyse this disease. Indeed, CT-scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT-scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet aims to exploit the input pyramids to preserve spatial awareness in all of the encoder layers. On the other hand, DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome the blurry segmentation of boundary pixels of Covid-19 infection, we propose a hybrid loss function. The proposed architectures were tested on four datasets with two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve the performance of Att-Unet in segmenting Covid-19 infections. Moreover, the combined architecture PDAtt-Unet led to further improvement. To compare with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods. Moreover, PDEAtt-Unet is able to overcome various challenges in segmenting Covid-19 infections across the four datasets and two evaluation scenarios.
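Since the abstract mentions a hybrid loss designed for blurry infection boundaries, the following Python sketch shows a generic hybrid of binary cross-entropy and soft Dice, which is a common recipe for this trade-off; it is not the exact loss formulation proposed for PDEAtt-Unet.

    import torch
    import torch.nn.functional as F

    def hybrid_loss(logits, target, alpha=0.5, eps=1e-6):
        """Weighted sum of BCE (pixel-wise fidelity) and soft Dice (region overlap)."""
        bce = F.binary_cross_entropy_with_logits(logits, target)
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum()
        dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
        return alpha * bce + (1 - alpha) * dice

    # Example with random tensors (batch of 2, one channel, 64x64 masks):
    # loss = hybrid_loss(torch.randn(2, 1, 64, 64),
    #                    torch.randint(0, 2, (2, 1, 64, 64)).float())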
Collapse
|
22
|
Lyu F, Ye M, Carlsen JF, Erleben K, Darkner S, Yuen PC. Pseudo-Label Guided Image Synthesis for Semi-Supervised COVID-19 Pneumonia Infection Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:797-809. [PMID: 36288236 DOI: 10.1109/tmi.2022.3217501] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Coronavirus disease 2019 (COVID-19) has become a severe global pandemic. Accurate pneumonia infection segmentation is important for assisting doctors in diagnosing COVID-19. Deep learning-based methods can be developed for automatic segmentation, but the lack of large-scale well-annotated COVID-19 training datasets may hinder their performance. Semi-supervised segmentation is a promising solution which explores large amounts of unlabelled data, while most existing methods focus on pseudo-label refinement. In this paper, we propose a new perspective on semi-supervised learning for COVID-19 pneumonia infection segmentation, namely pseudo-label guided image synthesis. The main idea is to keep the pseudo-labels and synthesize new images to match them. The synthetic image has the same COVID-19 infected regions as indicated in the pseudo-label, and the reference style extracted from the style code pool is added to make it more realistic. We introduce two representative methods by incorporating the synthetic images into model training, including single-stage Synthesis-Assisted Cross Pseudo Supervision (SA-CPS) and multi-stage Synthesis-Assisted Self-Training (SA-ST), which can work individually as well as cooperatively. Synthesis-assisted methods expand the training data with high-quality synthetic data, thus improving the segmentation performance. Extensive experiments on two COVID-19 CT datasets for segmenting the infections demonstrate our method is superior to existing schemes for semi-supervised segmentation, and achieves the state-of-the-art performance on both datasets. Code is available at: https://github.com/FeiLyu/SASSL.
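As a concrete anchor for the cross pseudo supervision component mentioned above (two segmentation networks supervising each other with hardened pseudo-labels on unlabelled scans), the Python sketch below shows only the core loss term for the binary case; the synthesis-assisted parts of SA-CPS and SA-ST are omitted, and this is not the released SASSL code.

    import torch
    import torch.nn.functional as F

    def cross_pseudo_supervision(logits_a, logits_b):
        """Each network's hardened prediction acts as a pseudo-label for the other."""
        pseudo_a = (torch.sigmoid(logits_a) > 0.5).float().detach()
        pseudo_b = (torch.sigmoid(logits_b) > 0.5).float().detach()
        loss_a = F.binary_cross_entropy_with_logits(logits_a, pseudo_b)
        loss_b = F.binary_cross_entropy_with_logits(logits_b, pseudo_a)
        return loss_a + loss_b

    # On unlabelled CT slices:
    # total_loss = supervised_loss + lambda_u * cross_pseudo_supervision(net_a(x_u), net_b(x_u))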
Collapse
|
23
|
Wu X, Gao P, Zhang P, Shang Y, He B, Zhang L, Jiang J, Hui H, Tian J. Cross-domain knowledge transfer based parallel-cascaded multi-scale attention network for limited view reconstruction in projection magnetic particle imaging. Comput Biol Med 2023; 158:106809. [PMID: 37004433 DOI: 10.1016/j.compbiomed.2023.106809] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 02/20/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023]
Abstract
Projection magnetic particle imaging (MPI) can significantly improve the temporal resolution of three-dimensional (3D) imaging compared to traditional point-by-point scanning. However, the dense view of projections required for tomographic reconstruction limits the scope of temporal resolution optimization. The solution to this problem in computed tomography (CT) is to use limited-view projections (sparse view or limited angle) for reconstruction, which can be divided into two categories: completing the limited-view sinogram, and image post-processing to remove the streaking artifacts caused by insufficient projections. Benefiting from large-scale CT datasets, both categories of deep learning-based methods have achieved tremendous progress; yet, there is a data scarcity limitation in MPI. We propose a cross-domain knowledge transfer learning strategy that can transfer the prior knowledge of the limited view learned by the model in CT to MPI, which can help reduce the network requirements for real MPI data. In addition, the size of the imaging target affects the scale of the streaking artifacts caused by insufficient projections. Therefore, we propose a parallel-cascaded multi-scale attention module that allows the network to adaptively identify streaking artifacts at different scales. The proposed method was evaluated on real phantom and in vivo mouse data, and it significantly outperformed several advanced limited-view methods. The streaking artifacts caused by an insufficient number of projections can be overcome using the proposed method.
Collapse
Affiliation(s)
- Xiangjun Wu
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
| | - Pengli Gao
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
| | - Peng Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| | - Yaxin Shang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| | - Bingxi He
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
| | - Liwen Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Jingying Jiang
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China.
| | - Hui Hui
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
| | - Jie Tian
- School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital, Jinan University, Zhuhai, China.
| |
Collapse
|
24
|
Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737 PMCID: PMC9753459 DOI: 10.1016/j.media.2022.102722] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 10/17/2022] [Accepted: 12/02/2022] [Indexed: 12/23/2022]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's superiority in tackling cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, and outperforms existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
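The slice-selection step described above can be pictured with a small Python sketch that scores each slice by predictive uncertainty over repeated stochastic forward passes and keeps the most reliable ones. The Monte Carlo dropout criterion and all names here are stand-ins for illustration only, not the UC-MIL implementation.

    import torch

    def select_reliable_slices(model, slices, n_passes=8, k=16):
        """Estimate per-slice uncertainty with Monte Carlo dropout and keep the
        k slices with the lowest predictive variance."""
        model.train()  # keep dropout active so repeated passes differ
        with torch.no_grad():
            probs = torch.stack(
                [torch.sigmoid(model(slices)) for _ in range(n_passes)]
            )                                            # (n_passes, n_slices, ...)
        uncertainty = probs.var(dim=0).reshape(slices.shape[0], -1).mean(dim=1)
        keep = torch.argsort(uncertainty)[:k]            # most confident slices
        return slices[keep]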
Collapse
Affiliation(s)
- Yanda Meng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
| | - Joshua Bridge
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
| | - Cliff Addison
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
| | - Manhui Wang
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
| | | | - Stu Franks
- Alces Flight Limited, Bicester, United Kingdom
| | - Maria Mackey
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
| | - Steve Messenger
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
| | - Renrong Sun
- Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
| | - Thomas Fitzmaurice
- Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
| | - Caroline McCann
- Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
| | - Qiang Li
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Yitian Zhao
- The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China.
| | - Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom.
| |
Collapse
|
25
|
Ji GP, Fan DP, Chou YC, Dai D, Liniger A, Van Gool L. Deep Gradient Learning for Efficient Camouflaged Object Detection. MACHINE INTELLIGENCE RESEARCH 2023. [PMCID: PMC9831373 DOI: 10.1007/s11633-022-1365-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
This paper introduces the deep gradient network (DGNet), a novel deep framework that exploits object gradient supervision for camouflaged object detection (COD). It decouples the task into two connected branches, i.e., a context and a texture encoder. The essential connection is the gradient-induced transition, representing a soft grouping between context and texture features. Benefiting from the simple but efficient framework, DGNet outperforms existing state-of-the-art COD models by a large margin. Notably, our efficient version, DGNet-S, runs in real-time (80 fps) and achieves comparable results to the cutting-edge model JCSOD-CVPR21 with only 6.82% of the parameters. The application results also show that the proposed DGNet performs well in the polyp segmentation, defect detection, and transparent object segmentation tasks. The code will be made available at https://github.com/GewelsJI/DGNet.
Collapse
|
26
|
Interpretable Differential Diagnosis of Non-COVID Viral Pneumonia, Lung Opacity and COVID-19 Using Tuned Transfer Learning and Explainable AI. Healthcare (Basel) 2023; 11:healthcare11030410. [PMID: 36766986 PMCID: PMC9914430 DOI: 10.3390/healthcare11030410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/20/2023] [Accepted: 01/28/2023] [Indexed: 02/04/2023] Open
Abstract
The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of the COVID-19 infection might be highly beneficial to healthcare services and people around the world. Molecular or antigen testing along with radiology X-ray imaging is now utilized in clinics to diagnose COVID-19. Nonetheless, due to a spike in coronavirus cases and hospital doctors' overwhelming workload, developing an AI-based automatic COVID detection system with high accuracy has become imperative. On X-ray images, the diagnosis of COVID-19, non-COVID viral pneumonia, and other lung opacity can be challenging. This research utilized artificial intelligence (AI) to deliver high-accuracy automated COVID-19 detection from normal chest X-ray images. Further, this study extended to differentiating COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three distinct pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem to distinguish COVID-19 from normal X-ray images and obtained accuracies of 97.5%, 97.5%, and 93.3% for Xception, VGG19, and ResNet50, respectively. Later we focused on developing an efficient model for multi-class classification and obtained an accuracy of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's performances were identical, Xception proved to be more efficient with its higher precision, recall, and F1 scores. Finally, we employed explainable AI on each of the utilized models, which adds interpretability to our study. Furthermore, we conducted a comprehensive comparison of the models' explanations, and the study revealed that Xception is more precise in indicating the actual features that are responsible for a model's predictions. This addition of explainable AI will benefit medical professionals greatly, as they will get to visualize how a model makes its predictions and won't have to trust our developed machine-learning models blindly.
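One common way to obtain the kind of visual interpretability described above is Grad-CAM, sketched below in Python for a torchvision ResNet50. This is a generic illustration of class-activation mapping and may not match the explainable-AI tooling actually used in the study.

    import torch
    from torchvision import models

    def grad_cam(model, image, target_layer, class_idx):
        """Weight the target layer's activations by the gradient of the chosen
        class score, average over channels, and keep the positive part."""
        acts, grads = {}, {}
        h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        score = model(image)[0, class_idx]
        model.zero_grad()
        score.backward()
        h1.remove()
        h2.remove()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # channel importance
        cam = torch.relu((weights * acts["a"]).sum(dim=1))    # (1, H, W) heat map
        return cam / (cam.max() + 1e-8)

    # Example with a pretrained ResNet50 and a random stand-in image:
    # model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
    # cam = grad_cam(model, torch.randn(1, 3, 224, 224), model.layer4, class_idx=0)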
Collapse
|
27
|
Ma L, Song S, Guo L, Tan W, Xu L. COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2023; 33:6-17. [PMID: 36713026 PMCID: PMC9874448 DOI: 10.1002/ima.22819] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 09/13/2022] [Accepted: 10/02/2022] [Indexed: 06/18/2023]
Abstract
The coronavirus disease 2019 (COVID-19) epidemic has had devastating effects on personal health around the world. It is important to achieve accurate segmentation of pulmonary infection regions, which are an early indicator of disease. To solve this problem, a deep learning model, namely the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to address the complex foreground and large fluctuations of distribution in the datasets during training and to avoid gradient disappearance. An area loss function based on the falsely segmented regions was proposed to address the fuzzy boundary of the lesion area. The model was evaluated on the public dataset (COVID-19 Lung CT Lesion Segmentation Challenge-2020) and its performance was compared with those of classical models. Our method gains an advantage over other models in multiple metrics: for the Dice coefficient, specificity (Spe), and intersection over union (IoU), CAPA-ResUNet obtained 0.775, 0.972, and 0.646, respectively. The Dice coefficient of our model was 2.51% higher than that of the content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg.
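The pre-activated residual block named in this abstract follows the well-known pre-activation ordering (normalization and activation before each convolution). The Python sketch below is a generic version of such a block, not the authors' exact CAPA-ResUNet module.

    import torch
    import torch.nn as nn

    class PreActResidualBlock(nn.Module):
        """Pre-activation residual block: BN and ReLU are applied before each
        convolution, and the input is added back through a skip path."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.bn1 = nn.BatchNorm2d(in_ch)
            self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_ch)
            self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
            self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

        def forward(self, x):
            out = self.conv1(torch.relu(self.bn1(x)))
            out = self.conv2(torch.relu(self.bn2(out)))
            return out + self.skip(x)

    # Down-sampling usage:
    # y = PreActResidualBlock(64, 128, stride=2)(torch.randn(1, 64, 128, 128))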
Collapse
Affiliation(s)
- Lu Ma
- School of Science, Northeastern University, Shenyang, China
| | - Shuni Song
- Guangdong Peizheng College, Guangzhou, China
| | - Liting Guo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Wenjun Tan
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, China
| | - Lisheng Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, China
- Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, Liaoning, China
| |
Collapse
|
28
|
DeepPDT-Net: predicting the outcome of photodynamic therapy for chronic central serous chorioretinopathy using two-stage multimodal transfer learning. Sci Rep 2022; 12:18689. [PMID: 36333442 PMCID: PMC9636239 DOI: 10.1038/s41598-022-22984-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Accepted: 10/21/2022] [Indexed: 11/06/2022] Open
Abstract
Central serous chorioretinopathy (CSC), characterized by serous detachment of the macular retina, can cause permanent vision loss in its chronic course. Chronic CSC is generally treated with photodynamic therapy (PDT), which is costly and quite invasive, and the results are unpredictable. In a retrospective case-control study design, we developed a two-stage deep learning model to predict the 1-year outcome of PDT using initial multimodal clinical data. The training dataset included 166 eyes with chronic CSC and an additional learning dataset containing 745 healthy control eyes. A pre-trained ResNet50-based convolutional neural network was first trained with normal fundus photographs (FPs) to detect CSC and then adapted to predict CSC treatability through transfer learning. The domain-specific ResNet50 successfully predicted treatable and refractory CSC (accuracy, 83.9%). The other multimodal clinical data were then integrated with the FP deep features using XGBoost. The final combined model (DeepPDT-Net) outperformed the domain-specific ResNet50 (accuracy, 88.0%). The FP deep features had the greatest impact on DeepPDT-Net performance, followed by central foveal thickness and age. In conclusion, DeepPDT-Net could solve the PDT outcome prediction task, which is challenging even for retinal specialists. This two-stage strategy, adopting transfer learning and concatenating multimodal data, can overcome the clinical prediction obstacles arising from insufficient datasets.
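The second stage described above, fusing CNN-derived image features with tabular clinical variables in a gradient-boosted classifier, can be sketched as follows in Python. The feature arrays here are random placeholders and the hyperparameters are assumptions; this is not the study's DeepPDT-Net code.

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n_eyes = 166

    # Placeholders: deep features (e.g., embeddings of fundus photographs from a
    # CNN) and tabular clinical variables (e.g., central foveal thickness, age).
    deep_features = rng.normal(size=(n_eyes, 2048))
    clinical = rng.normal(size=(n_eyes, 2))
    labels = rng.integers(0, 2, size=n_eyes)      # treatable vs. refractory CSC

    # Concatenate modalities and fit a gradient-boosted classifier.
    X = np.concatenate([deep_features, clinical], axis=1)
    clf = XGBClassifier(n_estimators=200, max_depth=3)
    clf.fit(X, labels)
    risk = clf.predict_proba(X)[:, 1]             # predicted probability per eye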
Collapse
|
29
|
Yu Y, Tao Y, Guan H, Xiao S, Li F, Yu C, Liu Z, Li J. A multi-branch hierarchical attention network for medical target segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
30
|
Self-supervised region-aware segmentation of COVID-19 CT images using 3D GAN and contrastive learning. Comput Biol Med 2022; 149:106033. [PMID: 36041270 PMCID: PMC9419627 DOI: 10.1016/j.compbiomed.2022.106033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 07/23/2022] [Accepted: 08/20/2022] [Indexed: 11/20/2022]
Abstract
Medical image segmentation is a key initial step in several therapeutic applications. While most automatic segmentation models are supervised and require a well-annotated paired dataset, we introduce a novel annotation-free pipeline to perform segmentation of COVID-19 CT images. Our pipeline consists of three main subtasks: automatically generating a 3D pseudo-mask in self-supervised mode using a generative adversarial network (GAN), leveraging the quality of the pseudo-mask, and building a multi-objective segmentation model to predict lesions. Our proposed 3D GAN architecture removes infected regions from COVID-19 images and generates synthesized healthy images while keeping the 3D structure of the lung the same. A 3D pseudo-mask is then generated by subtracting the synthesized healthy images from the original COVID-19 CT images. We enhanced the pseudo-masks using a contrastive learning approach to build a region-aware segmentation model that focuses more on the infected area. The final segmentation model can be used to predict lesions in COVID-19 CT images without any manual annotation at the pixel level. We show that our approach outperforms the existing state-of-the-art unsupervised and weakly-supervised segmentation techniques on three datasets by a reasonable margin. Specifically, our method improves the segmentation results for CT images with low infection by increasing sensitivity by 20% and the Dice score by up to 4%. The proposed pipeline overcomes some of the major limitations of existing unsupervised segmentation approaches and opens up a novel horizon for different applications of medical image segmentation.
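The pseudo-mask step described above, subtracting a GAN-synthesized healthy volume from the original COVID-19 CT and keeping the residual, can be sketched in a few lines of Python. The GAN itself and the contrastive refinement are omitted, and the threshold value is an assumption.

    import numpy as np

    def pseudo_mask(covid_ct, synthesized_healthy, threshold=0.2):
        """Keep voxels whose intensity drop between the original scan and its
        synthesized healthy counterpart exceeds a threshold (both volumes
        assumed normalized to [0, 1])."""
        residual = covid_ct - synthesized_healthy
        return (residual > threshold).astype(np.uint8)

    # Toy example with random volumes standing in for real CT data:
    # mask = pseudo_mask(np.random.rand(64, 256, 256), np.random.rand(64, 256, 256))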
Collapse
|
31
|
Deep feature fusion classification network (DFFCNet): Towards accurate diagnosis of COVID-19 using chest X-rays images. Biomed Signal Process Control 2022; 76:103677. [PMID: 35432578 PMCID: PMC9005442 DOI: 10.1016/j.bspc.2022.103677] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 03/22/2022] [Accepted: 04/09/2022] [Indexed: 12/12/2022]
|
32
|
Wan J, Yue S, Ma J, Ma X. A coarse-to-fine full attention guided capsule network for medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103682] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
33
|
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106786. [PMID: 35398579 DOI: 10.1016/j.cmpb.2022.106786] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/28/2022] [Accepted: 03/29/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is abnormal tissue that may evolve into lung cancer, so it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Recently, computer-aided detection (CADe) systems based on convolutional neural network (CNN) architectures have been proven helpful for radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, data preprocessing, including background elimination, spacing normalization, and volume-of-interest (VOI) extraction, is conducted to remove the non-lung region, normalize the image spacing, and divide the LDCT image into numerous VOIs. The VOIs are then fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The proposed model is constructed by integrating 3-D YOLOv3 with the one-shot aggregation (OSA) module, the receptive field block (RFB), and a feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicated detections generated by the model. RESULTS In this study, the LUNA-16 dataset, composed of 1186 nodules from 888 LDCT scans, and the competition performance metric (CPM) are used to evaluate our CADe system. In the experimental results, the proposed system achieves a sensitivity of 0.962 at a false-positive rate of 8 per scan and a CPM value of 0.905. Moreover, according to the ablation study, the OSA module, RFB, and FFS do improve the detection performance. Furthermore, compared to other state-of-the-art (SOTA) models, our detection system also achieves higher performance. CONCLUSIONS In this study, a YOLO-based CADe system integrating additional modules and schemes is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications can significantly improve detection performance.
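The non-maximum suppression step mentioned in the METHODS can be illustrated with a short Python sketch for 3-D candidate boxes; the box format (z1, y1, x1, z2, y2, x2) and the IoU threshold are assumptions rather than the paper's settings.

    import numpy as np

    def iou_3d(a, b):
        """Intersection over union of two 3-D boxes given as (z1, y1, x1, z2, y2, x2)."""
        lo = np.maximum(a[:3], b[:3])
        hi = np.minimum(a[3:], b[3:])
        inter = np.prod(np.clip(hi - lo, 0, None))
        vol = lambda box: np.prod(box[3:] - box[:3])
        return inter / (vol(a) + vol(b) - inter + 1e-8)

    def nms_3d(boxes, scores, iou_thr=0.3):
        """Keep the highest-scoring candidates and drop overlapping duplicates."""
        order = np.argsort(scores)[::-1]
        keep = []
        while len(order):
            i, order = order[0], order[1:]
            keep.append(i)
            order = np.array([j for j in order if iou_3d(boxes[i], boxes[j]) < iou_thr])
        return keep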
Collapse
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
| | - Ping-Ru Chou
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
| | - Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| | - Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan.
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan.
| |
Collapse
|
34
|
Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B. Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6566982. [PMID: 35422980 PMCID: PMC9002904 DOI: 10.1155/2022/6566982] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 02/23/2022] [Accepted: 02/28/2022] [Indexed: 11/23/2022]
Abstract
The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide had tested positive for COVID-19 and 5.48 million people had died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach to identify ground-glass opacity (GGO), the region of interest in CT images caused by coronavirus, using a modified structure of the Unet model to classify the region of interest at the pixel level. The difficulty is that the GGO often appears indistinguishable from healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipeline. This gives the model a strong capacity to segment the GGO in COVID-19 cases, and the proposed model is referred to as "convUnet". The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained for accuracy, recall, precision, Dice coefficient, F1 score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than that obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
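The metrics reported above are standard pixel-level measures; for reference, the Python sketch below computes them from a binary prediction and ground-truth mask using their usual definitions (note that Dice equals F1 for binary masks). This reproduces textbook formulas, not the authors' evaluation code.

    import numpy as np

    def segmentation_metrics(pred, gt):
        """Pixel-level metrics for binary masks (arrays of 0/1)."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        tn = np.logical_and(~pred, ~gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        return {
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "precision": tp / (tp + fp + 1e-8),
            "recall": tp / (tp + fn + 1e-8),
            "dice": 2 * tp / (2 * tp + fp + fn + 1e-8),  # identical to F1 here
            "iou": tp / (tp + fp + fn + 1e-8),
        }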
Collapse
Affiliation(s)
- Sania Shamim
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
| | - Mazhar Javed Awan
- Department of Software Engineering, University of Management and Technology, Lahore, Pakistan
| | - Azlan Mohd Zain
- School of Computing, UTM Big Data Centre, Universiti Teknologi Malaysia, Skudai 81310, Johor, Malaysia
| | - Usman Naseem
- School of Computer Science, The University of Sydney, Sydney, Australia
| | - Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, 11, Ramadi 31001, Iraq
| | | |
Collapse
|
35
|
Khan A, Garner R, Rocca ML, Salehi S, Duncan D. A Novel Threshold-Based Segmentation Method for Quantification of COVID-19 Lung Abnormalities. SIGNAL, IMAGE AND VIDEO PROCESSING 2022; 17:907-914. [PMID: 35371333 PMCID: PMC8958480 DOI: 10.1007/s11760-022-02183-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 11/23/2021] [Accepted: 02/17/2022] [Indexed: 06/14/2023]
Abstract
Since December 2019, the novel coronavirus disease 2019 (COVID-19) has claimed the lives of more than 3.75 million people worldwide. Consequently, methods for accurate COVID-19 diagnosis and classification are necessary to facilitate rapid patient care and terminate viral spread. Lung infection segmentations are useful to identify unique infection patterns that may support rapid diagnosis, severity assessment, and patient prognosis prediction, but manual segmentations are time-consuming and depend on radiologic expertise. Deep learning-based methods have been explored to reduce the burdens of segmentation; however, their accuracies are limited due to the lack of large, publicly available annotated datasets that are required to establish ground truths. For these reasons, we propose a semi-automatic, threshold-based segmentation method to generate region of interest (ROI) segmentations of infection visible on lung computed tomography (CT) scans. Infection masks are then used to calculate the percentage of lung abnormality (PLA) to determine COVID-19 severity and to analyze the disease progression in follow-up CTs. Compared with other COVID-19 ROI segmentation methods, on average, the proposed method achieved improved precision (47.49%) and specificity (98.40%) scores. Furthermore, the proposed method generated PLAs with a difference of ±3.89% from the ground-truth PLAs. The improved ROI segmentation results suggest that the proposed method has potential to assist radiologists in assessing infection severity and analyzing disease progression in follow-up CTs.
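The threshold idea behind this method can be pictured with a short Python sketch that keeps voxels inside a lung mask whose Hounsfield units fall in an assumed abnormality window and reports the percentage of lung abnormality (PLA). The HU bounds below are illustrative assumptions, not the paper's tuned values.

    import numpy as np

    def percent_lung_abnormality(ct_hu, lung_mask, lo=-700, hi=-250):
        """Threshold-based ROI: voxels inside the lung mask whose HU falls in an
        assumed ground-glass/consolidation window; PLA is the ROI volume as a
        percentage of total lung volume."""
        roi = lung_mask & (ct_hu >= lo) & (ct_hu <= hi)
        pla = 100.0 * roi.sum() / max(lung_mask.sum(), 1)
        return roi, pla

    # Toy example with a random volume standing in for a CT scan:
    # ct = np.random.randint(-1000, 100, size=(64, 128, 128))
    # lungs = np.ones_like(ct, dtype=bool)
    # roi, pla = percent_lung_abnormality(ct, lungs)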
Collapse
Affiliation(s)
- Azrin Khan
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA USA
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA USA
| | - Rachael Garner
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA USA
| | - Marianna La Rocca
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA USA
- Dipartimento Interateneo di Fisica, Universitá degli Studi di Bari Aldo Moro, Bari, Italy
| | - Sana Salehi
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA USA
| | - Dominique Duncan
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA USA
| |
Collapse
|
36
|
Gillman AG, Lunardo F, Prinable J, Belous G, Nicolson A, Min H, Terhorst A, Dowling JA. Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review. Phys Eng Sci Med 2022; 45:13-29. [PMID: 34919204 PMCID: PMC8678975 DOI: 10.1007/s13246-021-01093-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 12/13/2021] [Indexed: 12/31/2022]
Abstract
OBJECTIVES To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 diseases using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case. To conduct a scoping review into the authors publishing such work. METHODS The Scopus database was queried and studies were screened for article type, and minimum source normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk-of-bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test. FINDINGS Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The ratio of exclusion for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, area under the receiver operating curve (AUC) = 0.924 ± 0.074 [0.810,0.991] and accuracy = 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC = 0.836 ± 0.126 [0.605,0.980] and accuracy = 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified. INTERPRETATION A majority of papers focus on the less clinically impactful diagnosis task, contrasted with prognosis, with a significant portion performing a clinically unnecessary task of differentiating COVID-19 from healthy. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
Collapse
Affiliation(s)
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia.
| | - Febrio Lunardo
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- College of Science and Engineering, James Cook University, Australian Tropical Science Innovation Precinct, Townsville, QLD, 4814, Australia
| | - Joseph Prinable
- ACRF Image X Institute, University of Sydney, Level 2, Biomedical Building (C81), 1 Central Ave, Australian Technology Park, Eveleigh, Sydney, NSW, 2015, Australia
| | - Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Aaron Nicolson
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Hang Min
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Andrew Terhorst
- Data61, Commonwealth Scientific and Industrial Research Organisation, College Road, Sandy Bay, Hobart, TAS, 7005, Australia
| | - Jason A Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| |
Collapse
|
37
|
Enshaei N, Oikonomou A, Rafiee MJ, Afshar P, Heidarian S, Mohammadi A, Plataniotis KN, Naderkhani F. COVID-rate: an automated framework for segmentation of COVID-19 lesions from chest CT images. Sci Rep 2022; 12:3212. [PMID: 35217712 PMCID: PMC8881477 DOI: 10.1038/s41598-022-06854-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 01/21/2022] [Indexed: 11/09/2022] Open
Abstract
Novel Coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest Computed Tomography (CT) images can help determine the disease stage, efficiently allocate limited healthcare resources, and make informed treatment decisions. During the pandemic era, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists have become expensive and prone to error, which raises an urgent quest to develop practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients that have been annotated by an expert radiologist. Second, a Deep Neural Network (DNN)-based framework is proposed, referred to as COVID-Rate, that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. Performance of the proposed COVID-Rate framework is evaluated through several experiments based on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training set and test set and improve model generalization. The enhanced results show a dice score of 0.8069 and specificity and sensitivity of 0.9969 and 0.8354, respectively. Furthermore, the results indicate that the COVID-Rate model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capabilities of the COVID-Rate model to CT images obtained from a different scanner.
Collapse
Affiliation(s)
- Nastaran Enshaei
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
| | - Anastasia Oikonomou
- Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada.
| | - Moezedin Javad Rafiee
- Department of Medicine and Diagnostic Radiology, McGill University, Montreal, QC, Canada
| | - Parnian Afshar
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
| | - Shahin Heidarian
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
| | - Arash Mohammadi
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
| | | | - Farnoosh Naderkhani
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
| |
Collapse
|
38
|
Kaur J, Kaur P. Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2021; 29:2351-2382. [PMID: 34690493 PMCID: PMC8525064 DOI: 10.1007/s11831-021-09667-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 10/01/2021] [Indexed: 06/13/2023]
Abstract
Since December 2019, the outbreak of coronavirus disease (COVID-19) has caused numerous deaths and affected every aspect of individual health. COVID-19 has been designated a pandemic by the World Health Organization. The circumstances placed serious strain on every country worldwide, particularly on health arrangements, and demanded time-consuming responses. The number of positive COVID-19 cases increased globally every day. The quantity of accessible diagnostic kits is restricted because of complications in detecting the existence of the illness. Fast and correct diagnosis of COVID-19 is a timely requirement for the prevention and control of the pandemic through suitable isolation and medicinal treatment. The significance of the present work is to outline deep learning techniques with medical imaging, covering outbreak prediction, indications of virus transmission, detection and treatment aspects, and vaccine availability with remedy research. Abundant medical imaging resources, such as X-rays, computed tomography scans, and magnetic resonance imaging, allow deep learning to formulate high-quality methods to fight against the COVID-19 pandemic. The review presents a comprehensive idea of deep learning and its related applications in healthcare received over the past decade. Finally, some issues and confrontations in controlling the health crisis and outbreaks have been introduced. The progress in technology has contributed to improving individuals' lives. The problems faced by radiologists during medical imaging and the deep learning approaches for diagnosing COVID-19 infections are also discussed.
Collapse
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
| | - Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
| |
Collapse
|