1
Xiang Z, Mao Q, Wang J, Tian Y, Zhang Y, Wang W. Dmbg-Net: Dilated multiresidual boundary guidance network for COVID-19 infection segmentation. Math Biosci Eng 2023; 20:20135-20154. [PMID: 38052640] [DOI: 10.3934/mbe.2023892]
Abstract
Accurate segmentation of infected regions in lung computed tomography (CT) images is essential for the detection and diagnosis of coronavirus disease 2019 (COVID-19). However, lung lesion segmentation poses several challenges, such as obscure boundaries, low contrast and scattered infection areas. In this paper, the dilated multiresidual boundary guidance network (Dmbg-Net) is proposed for COVID-19 infection segmentation in lung CT images. The method focuses on semantic relationship modelling and boundary detail guidance. First, to minimize the loss of significant features, a dilated residual block is substituted for the standard convolutional operation, with dilated convolutions employed to expand the receptive field of the convolution kernel. Second, an edge-attention guidance preservation block is designed to incorporate boundary guidance from low-level features into feature integration, which aids extraction of the boundaries of the region of interest. Third, features at various depths are used to generate the final prediction, and a progressive multi-scale supervision strategy facilitates enhanced representations and highly accurate saliency maps. The proposed method is evaluated on COVID-19 datasets, and the experimental results show a Dice similarity coefficient of 85.6% and a sensitivity of 84.2%. Extensive experiments and ablation studies demonstrate the effectiveness of Dmbg-Net. The proposed method therefore has potential application in the detection, labeling and segmentation of other lesion areas.
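As a rough illustration of why dilated convolutions of the kind this abstract describes enlarge the receptive field at no extra parameter cost, the receptive field of a stack of stride-1 convolutions can be computed directly. This is a generic sketch, not the authors' configuration; the kernel size and dilation rates below are illustrative:

```python
# Each stride-1 conv layer with kernel size k and dilation d adds
# (k - 1) * d pixels to the receptive field; dilation widens the
# window without adding parameters.

def receptive_field(dilations, kernel_size=3):
    """Receptive field of stacked stride-1 convs with the given dilations."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three plain 3x3 convs see a 7x7 window...
print(receptive_field([1, 1, 1]))  # 7
# ...while the same three layers with dilations 1, 2, 4 see 15x15.
print(receptive_field([1, 2, 4]))  # 15
```

The same three layers thus cover more than twice the context per side, which is the trade the dilated residual block exploits.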
Affiliation(s)
- Zhenwu Xiang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Qi Mao
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Jintao Wang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yi Tian
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yan Zhang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Wenfeng Wang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
2
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615] [DOI: 10.1016/j.compbiomed.2023.107182]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners has led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming, and it suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis while providing accurate diagnoses. However, few algorithms have clinical implementations. In this paper, we propose a new deep learning model, the Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation, which makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The model has been evaluated on gland segmentation and nuclei instance segmentation, both clinically relevant tasks for assessing the state and progress of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
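The paper's attention mechanisms are novel and not reproduced here; as a generic sketch of how an attention-guided skip connection can gate encoder features with a decoder signal, the additive-gate form below is an assumption, and the weights `w_x`, `w_g` and `psi` are hypothetical placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate: gate skip features x with decoder signal g.

    x, g : (C, H, W) feature maps; w_x, w_g : (C2, C) projections;
    psi  : (C2,) vector collapsing the joint features to a spatial map.
    """
    # Project both inputs to a joint space and combine additively.
    q = np.tensordot(w_x, x, axes=1) + np.tensordot(w_g, g, axes=1)  # (C2, H, W)
    # ReLU, then collapse channels to a per-pixel gate in (0, 1).
    alpha = sigmoid(np.tensordot(psi, np.maximum(q, 0.0), axes=1))   # (H, W)
    # Suppress skip features where the gate is low.
    return x * alpha
```

The gate attenuates every channel of `x` by the same spatial map, so gated features are never larger in magnitude than the originals.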
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany.
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
3
Chauhan J, Bedi J. EffViT-COVID: A dual-path network for COVID-19 percentage estimation. Expert Syst Appl 2023; 213:118939. [PMID: 36210962] [PMCID: PMC9527203] [DOI: 10.1016/j.eswa.2022.118939]
Abstract
The first case of the novel coronavirus (COVID-19) was reported in December 2019 in Wuhan City, China, and led to an international outbreak. The virus causes serious respiratory illness and affects several other organs of the body, differently for different patients. Worldwide, several waves of this infection have been reported, and researchers and doctors are working hard to develop novel solutions for COVID diagnosis. Imaging and vision-based techniques are widely explored for the prediction of COVID-19; however, COVID infection percentage estimation is underexplored. In this work, we propose a novel framework for the estimation of COVID-19 infection percentage based on deep learning techniques. The proposed network fuses features from a vision transformer and a CNN (convolutional neural network), specifically EfficientNet-B7, into an information-rich feature vector that contributes to a more precise estimation of infection percentage. We evaluate our model on the Per-COVID-19 dataset (Bougourzi et al., 2021b), which comprises labeled CT data of COVID-19 patients, using the most widely used slice-level metrics: Pearson correlation coefficient (PC), mean absolute error (MAE) and root mean square error (RMSE). Using 5-fold cross-validation, the network outperforms other state-of-the-art methods, achieving a PC of 0.9886 ± 0.009, an MAE of 1.23 ± 0.378 and an RMSE of 3.12 ± 1.56. In addition, the overall average difference between the actual and predicted infection percentage is observed to be less than 2%. In conclusion, the detailed experimental results reveal the robustness and efficiency of the proposed network.
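The three slice-level metrics are standard and can be implemented in a few lines; a plain-Python sketch (the authors' exact evaluation code is not shown in the abstract):

```python
import math

def pearson(y_true, y_pred):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
    return cov / (st * sp)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error; penalizes large slice errors more than MAE."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

actual = [0.0, 10.0, 20.0, 30.0]     # illustrative infection percentages
pred = [1.0, 9.0, 21.0, 29.0]
print(mae(actual, pred))             # 1.0
print(rmse(actual, pred))            # 1.0
```

Note that RMSE equals MAE only when every absolute error is identical, as in this toy example; on real predictions RMSE is strictly larger.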
Affiliation(s)
- Joohi Chauhan
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147004, Punjab, India
- Jatin Bedi
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147004, Punjab, India
4
Eldem H, Ülker E, Yaşar Işıklı O. Encoder–decoder semantic segmentation models for pressure wound images. Imaging Sci J 2023. [DOI: 10.1080/13682199.2022.2163531]
Affiliation(s)
- Hüseyin Eldem
- Vocational School of Technical Sciences, Computer Technologies Department, Karamanoğlu Mehmetbey University, Karaman, Turkey
- Erkan Ülker
- Faculty of Engineering and Natural Sciences, Department of Computer Engineering, Konya Technical University, Konya, Turkey
- Osman Yaşar Işıklı
- Karaman Education and Research Hospital, Vascular Surgery Department, Karaman, Turkey
5
Sun Q, Dai M, Lan Z, Cai F, Wei L, Yang C, Chen R. UCR-Net: U-shaped context residual network for medical image segmentation. Comput Biol Med 2022; 151:106203. [PMID: 36306581] [DOI: 10.1016/j.compbiomed.2022.106203]
Abstract
Medical image segmentation, a prerequisite for numerous clinical needs, is a critical step in biomedical image analysis. The U-Net framework is one of the most popular deep networks in this field. However, U-Net's successive pooling and downsampling operations result in some loss of spatial information. In this paper, we propose a U-shaped context residual network, called UCR-Net, to capture more context and high-level information for medical image segmentation. The proposed UCR-Net is an encoder-decoder framework comprising a feature encoder module and a feature decoder module. The feature decoder module contains four newly proposed context attention exploration (CAE) modules, a newly proposed global and spatial attention (GSA) module, and four decoder blocks. The CAE modules capture more multi-scale context features from the encoder, while the GSA module further explores global context features and semantically enhances deep-level features. UCR-Net can thus recover more high-level semantic features, fusing context attention information from the CAE modules with global and spatial attention information from the GSA module. Experiments on retinal vessel, femoropopliteal artery stent and polyp datasets demonstrate that the proposed UCR-Net performs favorably against the original U-Net and other advanced methods.
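The abstract does not specify the internals of the GSA module; as a loose, squeeze-and-excite-style stand-in for what "global and spatial attention" over a feature map might look like, the weight matrix `w` and the channel-mean spatial gate below are assumptions, not the paper's design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def global_attention(x, w):
    """Channel attention from global context.

    x : (C, H, W) feature map; w : (C, C) hypothetical learned weights.
    """
    s = x.mean(axis=(1, 2))        # squeeze: global average pooling -> (C,)
    a = sigmoid(w @ s)             # excite: per-channel gate in (0, 1)
    return x * a[:, None, None]    # reweight channels

def spatial_attention(x):
    """Spatial attention from the channel-mean activation map."""
    a = sigmoid(x.mean(axis=0))    # (H, W) per-pixel gate
    return x * a[None, :, :]       # reweight locations across all channels
```

Applying the two in sequence gates features first by "which channels matter globally" and then by "which locations matter", which is the general intent the abstract describes.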
Affiliation(s)
- Qi Sun
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
- Mengyun Dai
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
- Ziyang Lan
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
- Fanggang Cai
- Department of Vascular Surgery, the First Affiliated Hospital, Fujian Medical University, Fuzhou 350108, China.
- Lifang Wei
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
- Changcai Yang
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
- Riqing Chen
- Digital Fujian Research Institute of Big Data for Agriculture and Forestry, College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
6
Liu S, Wang H, Li Y, Li X, Cao G, Cao W. AHU-MultiNet: Adaptive loss balancing based on homoscedastic uncertainty in multi-task medical image segmentation network. Comput Biol Med 2022; 150:106157. [PMID: 37859277] [DOI: 10.1016/j.compbiomed.2022.106157]
Abstract
Medical image segmentation is an important field in medical image analysis and a vital part of computer-aided diagnosis. Because image annotations are difficult to acquire, semi-supervised learning has attracted considerable attention in medical image segmentation. Despite their impressive performance, most existing semi-supervised approaches pay little attention to ambiguous regions (e.g., some edges or corners around the organs). To achieve better performance, we propose a novel semi-supervised method, Adaptive Loss Balancing based on Homoscedastic Uncertainty in a Multi-task Medical Image Segmentation Network (AHU-MultiNet). The model combines a main segmentation task with two auxiliary tasks: signed distance prediction and contour detection. This multi-task approach effectively extracts the semantic information of medical images through the auxiliary tasks. Simultaneously, we introduce an inter-task consistency to explore the underlying information of the images and regularize the predictions in the right direction. More importantly, we observe that manually searching for an optimal weighting to balance the tasks is a difficult and time-consuming process, so we introduce an adaptive loss balancing strategy based on homoscedastic uncertainty. Experimental results show that, under this strategy, the two auxiliary tasks explicitly enforce shape priors on the segmentation output and generate more accurate masks. On two standard benchmarks, the 2018 Atrial Segmentation Challenge and the 2017 Liver Tumor Segmentation Challenge, the proposed method achieves improvements over and outperforms the state-of-the-art in semi-supervised learning.
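Homoscedastic-uncertainty loss weighting is commonly written in the Kendall et al. (2018) form, with one learned log-variance per task; a minimal sketch assuming that formulation (AHU-MultiNet's exact loss may differ):

```python
import math

def multitask_loss(task_losses, log_vars):
    """Weight per-task losses by learned homoscedastic uncertainty.

    For each task i with loss L_i and learned s_i = log(sigma_i^2):
        total = sum_i exp(-s_i) * L_i + s_i
    A noisier task (larger s_i) is automatically down-weighted, while
    the +s_i term stops the model from inflating every uncertainty.
    """
    return sum(math.exp(-s) * l + s for l, s in zip(task_losses, log_vars))

# With all uncertainties at s = 0 this reduces to a plain sum of losses.
print(multitask_loss([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # 6.0
```

In training, the `log_vars` would be trainable parameters updated by the same optimizer as the network weights, which is what replaces the manual grid search over task weights.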
Affiliation(s)
- Shasha Liu
- The MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China.
- Hailing Wang
- The MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China.
- Yan Li
- The MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China.
- Xiaohu Li
- The MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China.
- Guitao Cao
- The MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China.
- Wenming Cao
- The College of Information Engineering, Shenzhen University, Shenzhen, China.
7
Deep learning with multiresolution handcrafted features for brain MRI segmentation. Artif Intell Med 2022; 131:102365. [DOI: 10.1016/j.artmed.2022.102365]