1. Xie J, Zhou J, Yang M, Xu L, Li T, Jia H, Gong Y, Li X, Song B, Wei Y, Liu M. Lesion segmentation method for multiple types of liver cancer based on balanced dice loss. Med Phys 2025. [PMID: 39945728] [DOI: 10.1002/mp.17624]
Abstract
BACKGROUND Obtaining accurate segmentation regions for liver cancer is of paramount importance for the clinical diagnosis and treatment of the disease. In recent years, many variants of deep learning-based liver cancer segmentation methods have been proposed to assist radiologists. Owing to the differences in characteristics between different types of liver tumors and to data imbalance, it is difficult to train a deep model that achieves accurate segmentation for multiple types of liver cancer. PURPOSE In this paper, we propose a balanced Dice loss (BD Loss) function for balanced learning of segmentation features across multiple categories. We also introduce a comprehensive method based on BD Loss to achieve accurate segmentation of multiple categories of liver cancer. MATERIALS AND METHODS We retrospectively collected computed tomography (CT) screening images and tumor segmentations of 591 patients with malignant liver tumors from West China Hospital of Sichuan University. We use the proposed BD Loss to train a deep model that can segment multiple types of liver tumors and, through a greedy parameter averaging algorithm (GPA algorithm), obtain a more generalized segmentation model. Finally, we employ model integration and our proposed post-processing method, which leverages inter-slice information, to achieve more accurate segmentation of liver cancer lesions. RESULTS We evaluated the performance of our proposed automatic liver cancer segmentation method on the dataset we collected. The proposed BD Loss effectively mitigates the adverse effects of data imbalance on the segmentation model. Our method achieves a Dice per case (DPC) of 0.819 (95% CI 0.798-0.841), significantly higher than the baseline, which achieves a DPC of 0.768 (95% CI 0.740-0.796). CONCLUSIONS The differences in CT images between different types of liver cancer require deep learning models to learn distinct features. Our method addresses this challenge, enabling balanced and accurate segmentation performance across multiple types of liver cancer.
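The abstract does not reproduce the exact form of BD Loss, but the underlying idea, balancing a multi-class soft Dice loss with per-class weights so that rare tumor categories are not drowned out, can be illustrated with a short sketch. The snippet below is a generic class-weighted Dice loss in PyTorch; the inverse-frequency weighting and all names are illustrative assumptions, not the authors' formulation.

```python
# Generic class-weighted soft Dice loss (illustrative sketch, not the paper's BD Loss).
import torch
import torch.nn.functional as F

def weighted_dice_loss(logits, target, class_weights, eps=1e-6):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels;
    class_weights: (C,) tensor, e.g. inverse class frequency, to balance rare categories."""
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)                                   # (N, C, H, W)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                                                   # sum over batch and space
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)  # (C,)
    weights = class_weights / class_weights.sum()
    # Larger weights push the optimizer toward under-represented tumor categories.
    return 1.0 - torch.sum(weights * dice_per_class)

# Example: background plus three tumor categories, weighted by inverse frequency.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
freq = torch.tensor([0.90, 0.05, 0.03, 0.02])
loss = weighted_dice_loss(logits, target, class_weights=1.0 / freq)
loss.backward()
```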
Affiliation(s)
- Jun Xie: Information Technology Center, West China Hospital of Sichuan University, Chengdu, China; Information Technology Center, People's Hospital of Sanya, Sanya, Hainan, China
- Jiajun Zhou: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Meiyi Yang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Lifeng Xu: Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang, China
- Tongtong Li: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Haoyang Jia: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Yu Gong: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Xiansong Li: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Bin Song: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Yi Wei: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Ming Liu: Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang, China; Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
2. Zhang T, Liu Y, Zhao Q, Xue G, Shen H. Edge-guided multi-scale adaptive feature fusion network for liver tumor segmentation. Sci Rep 2024; 14:28370. [PMID: 39551810] [PMCID: PMC11570674] [DOI: 10.1038/s41598-024-79379-y]
Abstract
Automated segmentation of liver tumors on CT scans is essential for aiding diagnosis and assessing treatment. Computer-aided diagnosis can reduce the costs and errors associated with manual processes and ensure the provision of accurate and reliable clinical assessments. However, liver tumors in CT images vary significantly in size and have fuzzy boundaries, making it difficult for existing methods to achieve accurate segmentation. Therefore, this paper proposes MAEG-Net, a multi-scale adaptive feature fusion liver tumor segmentation network based on edge guidance. Specifically, we design a multi-scale adaptive feature fusion module that effectively incorporates multi-scale information to better guide the segmentation of tumors of different sizes. Additionally, to address the problem of blurred tumor boundaries in images, we introduce an edge-aware guidance module to improve the model's feature learning ability under these conditions. Evaluation results on the liver tumor dataset (LiTS2017) show that our method achieves a Dice coefficient of 71.84% and a VOE of 38.64%, demonstrating the best performance for liver tumor segmentation in CT images.
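For reference, the two metrics reported above have standard definitions. Writing A for the predicted tumor mask and B for the ground-truth mask, the Dice coefficient and the volumetric overlap error (VOE, the complement of the Jaccard index) are:

```latex
\mathrm{Dice}(A,B) = \frac{2\,|A \cap B|}{|A| + |B|},
\qquad
\mathrm{VOE}(A,B) = 1 - \frac{|A \cap B|}{|A \cup B|}
```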
Affiliation(s)
- Tiange Zhang: School of Digital and Intelligent Industry, Inner Mongolia University of Science & Technology, Baotou, 014010, China
- Yuefeng Liu: School of Digital and Intelligent Industry, Inner Mongolia University of Science & Technology, Baotou, 014010, China
- Qiyan Zhao: School of Digital and Intelligent Industry, Inner Mongolia University of Science & Technology, Baotou, 014010, China
- Guoyue Xue: School of Digital and Intelligent Industry, Inner Mongolia University of Science & Technology, Baotou, 014010, China
- Hongyu Shen: School of Digital and Intelligent Industry, Inner Mongolia University of Science & Technology, Baotou, 014010, China
3. Liu H, Zhou Y, Gou S, Luo Z. Tumor conspicuity enhancement-based segmentation model for liver tumor segmentation and RECIST diameter measurement in non-contrast CT images. Comput Biol Med 2024; 174:108420. [PMID: 38613896] [DOI: 10.1016/j.compbiomed.2024.108420]
Abstract
BACKGROUND AND OBJECTIVE Liver tumor segmentation (LiTS) accuracy on contrast-enhanced computed tomography (CECT) images is higher than that on non-contrast computed tomography (NCCT) images. However, CECT requires contrast medium and repeated scans to obtain multiphase enhanced CT images, which is time-consuming and costly. Despite the lower accuracy of LiTS on NCCT images, they still play an irreplaceable role in some clinical settings, such as guided brachytherapy, ablation, or the evaluation of patients with impaired renal function. In this study, we generate enhanced high-contrast pseudo-color CT (PCCT) images to improve the accuracy of LiTS and RECIST diameter measurement on NCCT images. METHODS To generate high-contrast images of liver tumor regions, an intensity-based tumor conspicuity enhancement (ITCE) model was first developed. In the ITCE model, a pseudo-color conversion function was established from the intensity distribution of the tumor and applied to NCCT to generate enhanced PCCT images. Additionally, we designed a tumor conspicuity enhancement-based liver tumor segmentation (TCELiTS) model to improve the segmentation of liver tumors on NCCT images. The TCELiTS model consists of three components: an image enhancement module based on the ITCE model, a segmentation module based on a deep convolutional neural network, and an attention loss module based on restricted activation. Segmentation performance was analyzed using the Dice similarity coefficient (DSC), sensitivity, specificity, and RECIST diameter error. RESULTS To develop the deep learning model, 100 patients with histopathologically confirmed liver tumors (hepatocellular carcinoma, 64 patients; hepatic hemangioma, 36 patients) were randomly divided into a training set (75 patients) and an independent test set (25 patients). Compared with existing automatic tumor segmentation networks trained on CECT images (U-Net, nnU-Net, DeepLab-V3, Modified U-Net), the DSCs achieved on the enhanced PCCT images are improved relative to those on NCCT images: from 0.696 to 0.713 for U-Net, from 0.715 to 0.776 for nnU-Net, from 0.748 to 0.788 for DeepLab-V3, and from 0.733 to 0.799 for Modified U-Net. In addition, an observer study including five doctors compared segmentation performance on enhanced PCCT images with that on NCCT images and showed that enhanced PCCT images are more advantageous for doctors segmenting tumor regions. The results showed an accuracy improvement of approximately 3%-6%, while the time required to segment a single CT image was reduced by approximately 50%. CONCLUSIONS Experimental results show that the ITCE model can generate high-contrast enhanced PCCT images, especially in liver regions, and that the TCELiTS model can improve LiTS accuracy on NCCT images.
Affiliation(s)
- Haofeng Liu: School of Artificial Intelligence, Xidian University, Xi'an, 710071, China
- Yanyan Zhou: Department of Interventional Radiology, Tangdu Hospital, Airforce Medical University, Xi'an, 710038, China
- Shuiping Gou: School of Artificial Intelligence, Xidian University, Xi'an, 710071, China
- Zhonghua Luo: Department of Interventional Radiology, Tangdu Hospital, Airforce Medical University, Xi'an, 710038, China
4. Wang KN, Li SX, Bu Z, Zhao FX, Zhou GQ, Zhou SJ, Chen Y. SBCNet: Scale and Boundary Context Attention Dual-Branch Network for Liver Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2854-2865. [PMID: 38427554] [DOI: 10.1109/jbhi.2024.3370864]
Abstract
Automated segmentation of liver tumors in CT scans is pivotal for diagnosing and treating liver cancer, offering a valuable alternative to labor-intensive manual processes and ensuring the provision of accurate and reliable clinical assessment. However, the inherent variability of liver tumors, coupled with the challenges posed by blurred boundaries in imaging, presents a substantial obstacle to their precise segmentation. In this paper, we propose a novel dual-branch liver tumor segmentation model, SBCNet, to address these challenges effectively. Specifically, our proposed method introduces a contextual encoding module, which enables better identification of tumor variability using an advanced multi-scale adaptive kernel. Moreover, a boundary enhancement module is designed for the counterpart branch to enhance the perception of boundaries by incorporating contour learning with the Sobel operator. Finally, we propose a hybrid multi-task loss function that concurrently accounts for tumors' scale and boundary features, fostering interaction between the tasks of the two branches and further improving tumor segmentation. Experimental validation on the publicly available LiTS dataset demonstrates the practical efficacy of each module, with SBCNet yielding competitive results compared to other state-of-the-art methods for liver tumor segmentation.
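As a rough illustration of how a boundary target can be derived with the Sobel operator for a boundary branch, the sketch below computes an edge-strength map from a binary tumor mask; it is a minimal, assumption-based example, not SBCNet's boundary enhancement module.

```python
# Sobel-based edge map from a binary segmentation mask (illustrative only).
import numpy as np
from scipy import ndimage

def sobel_boundary_map(mask):
    """mask: 2D binary array (tumor = 1). Returns a normalized edge-strength map."""
    gx = ndimage.sobel(mask.astype(float), axis=0)
    gy = ndimage.sobel(mask.astype(float), axis=1)
    grad = np.hypot(gx, gy)                      # gradient magnitude
    return grad / grad.max() if grad.max() > 0 else grad

# A square "tumor" produces non-zero responses only along its contour.
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1
boundary = sobel_boundary_map(mask)
```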
5. Zhan F, Wang W, Chen Q, Guo Y, He L, Wang L. Three-Direction Fusion for Accurate Volumetric Liver and Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2175-2186. [PMID: 38109246] [DOI: 10.1109/jbhi.2023.3344392]
Abstract
Biomedical image segmentation of organs, tissues and lesions has gained increasing attention in clinical treatment planning and navigation, and involves the exploration of two-dimensional (2D) and three-dimensional (3D) contexts in the biomedical image. Compared to 2D methods, 3D methods pay more attention to inter-slice correlations, which offer additional spatial information for image segmentation. An organ or tumor has a 3D structure that can be observed from three directions. Previous studies focus only on the vertical axis, limiting the understanding of the relationship between a tumor and its surrounding tissues; important information can also be obtained from the sagittal and coronal axes. Therefore, spatial information about organs and tumors can be obtained from three directions, i.e., the sagittal, coronal and vertical axes, to better understand the invasion depth of a tumor and its relationship with the surrounding tissues. Moreover, the edges of organs and tumors in biomedical images may be blurred. To address these problems, we propose a three-direction fusion volumetric segmentation (TFVS) model for segmenting 3D biomedical images from three perspectives in the sagittal, coronal and transverse planes, respectively. We use the dataset of the liver task provided by the Medical Segmentation Decathlon challenge to train our model. The TFVS method demonstrates competitive performance on the 3D-IRCADB dataset. In addition, the t-test and Wilcoxon signed-rank test are performed to show the statistical significance of the improvement achieved by the proposed method over the baseline methods. The proposed method is expected to be beneficial in guiding and facilitating clinical diagnosis and treatment.
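A minimal sketch of the three viewing directions discussed above: assuming a CT volume stored as a (depth, height, width) array (the axis order is an assumption), the transverse, coronal and sagittal slices through a voxel are obtained by indexing along different axes.

```python
# Slice a 3D volume along the three anatomical directions (illustrative sketch).
import numpy as np

def three_direction_slices(volume, d, h, w):
    """Return the transverse, coronal and sagittal slices through voxel (d, h, w)."""
    transverse = volume[d, :, :]   # vertical (axial) axis
    coronal = volume[:, h, :]
    sagittal = volume[:, :, w]
    return transverse, coronal, sagittal

volume = np.random.rand(128, 256, 256)   # synthetic stand-in for a CT scan
axial, coronal, sagittal = three_direction_slices(volume, 64, 128, 128)
```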
6. Chen Y, Zheng C, Zhang W, Lin H, Chen W, Zhang G, Xu G, Wu F. MS-FANet: Multi-scale feature attention network for liver tumor segmentation. Comput Biol Med 2023; 163:107208. [PMID: 37421737] [DOI: 10.1016/j.compbiomed.2023.107208]
Abstract
Accurate segmentation of liver tumors is a prerequisite for early diagnosis of liver cancer. Segmentation networks extract features continuously at the same scale, which cannot adapt to the variation of liver tumor volume in computed tomography (CT). Hence, a multi-scale feature attention network (MS-FANet) for liver tumor segmentation is proposed in this paper. The novel residual attention (RA) block and multi-scale atrous downsampling (MAD) are introduced in the encoder of MS-FANet to sufficiently learn variable tumor features and extract tumor features at different scales simultaneously. The dual-path feature (DF) filter and dense upsampling (DU) are introduced in the feature reduction process to reduce effective features for the accurate segmentation of liver tumors. On the public LiTS and 3DIRCADb datasets, MS-FANet achieved average Dice scores of 74.2% and 78.0%, respectively, outperforming most state-of-the-art networks, which strongly demonstrates its excellent liver tumor segmentation performance and its ability to learn features at different scales.
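The multi-scale atrous idea, running parallel convolutions with different dilation rates and fusing the results, can be sketched generically in PyTorch as below; the module and its dilation rates are illustrative assumptions, not MS-FANet's MAD component.

```python
# Parallel dilated (atrous) convolutions fused by a 1x1 convolution (generic sketch).
import torch
import torch.nn as nn

class MultiScaleAtrous(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation plus a 1x1
        # convolution fuses coarse and fine context.
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 32, 64, 64)
y = MultiScaleAtrous(32, 32)(x)   # spatial size preserved: (1, 32, 64, 64)
```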
Affiliation(s)
- Ying Chen: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Cheng Zheng: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Wei Zhang: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Hongping Lin: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Wang Chen: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Guimei Zhang: Institute of Computer Vision, Nanchang Hangkong University, Nanchang, 330063, PR China
- Guohui Xu: Department of Hepatobiliary Surgery, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Fang Wu: Department of Gastroenterology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325035, PR China
7. Li Y, Zou B, Dai P, Liao M, Bai HX, Jiao Z. AC-E Network: Attentive Context-Enhanced Network for Liver Segmentation. IEEE J Biomed Health Inform 2023; 27:4052-4061. [PMID: 37204947] [DOI: 10.1109/jbhi.2023.3278079]
Abstract
Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, and 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome these limitations, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into a 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters; and 2) a dual segmentation branch with a complemental loss that makes the network attend to both the liver region and its boundary, so that the segmented liver surface is obtained with high accuracy. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision and the number of model parameters.
8. Li Z, Wang Y, Zhu Y, Xu J, Wei J, Xie J, Zhang J. Modality-based attention and dual-stream multiple instance convolutional neural network for predicting microvascular invasion of hepatocellular carcinoma. Front Oncol 2023; 13:1195110. [PMID: 37434971] [PMCID: PMC10331018] [DOI: 10.3389/fonc.2023.1195110]
Abstract
BACKGROUND AND PURPOSE The presence of microvascular invasion (MVI) is a crucial indicator of postoperative recurrence in patients with hepatocellular carcinoma (HCC). Detecting MVI before surgery can improve personalized surgical planning and enhance patient survival. However, existing automatic diagnosis methods for MVI have certain limitations. Some methods analyze information from only a single slice and overlook the context of the entire lesion, while others require high computational resources to process the entire tumor with a three-dimensional (3D) convolutional neural network (CNN), which can be challenging to train. To address these limitations, this paper proposes a modality-based attention and dual-stream multiple instance learning (MIL) CNN. MATERIALS AND METHODS In this retrospective study, 283 patients with histologically confirmed HCC who underwent surgical resection between April 2017 and September 2019 were included. Five magnetic resonance (MR) modalities, including T2-weighted, arterial phase, venous phase, delay phase and apparent diffusion coefficient images, were used in the image acquisition of each patient. First, each two-dimensional (2D) slice of the HCC magnetic resonance image (MRI) was converted into an instance embedding. Second, a modality attention module was designed to emulate the decision-making process of doctors and help the model focus on the important MRI sequences. Third, the instance embeddings of the 3D scans were aggregated into a bag embedding by a dual-stream MIL aggregator, in which the critical slices were given greater consideration. The dataset was split into a training set and a testing set in a 4:1 ratio, and model performance was evaluated using five-fold cross-validation. RESULTS Using the proposed method, the prediction of MVI achieved an accuracy of 76.43% and an AUC of 74.22%, significantly surpassing the performance of the baseline methods. CONCLUSION Our modality-based attention and dual-stream MIL CNN can achieve outstanding results for MVI prediction.
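The aggregation of slice embeddings into a bag embedding described above can be illustrated with a generic attention-based MIL pooling layer: slices receive learned attention scores and the bag embedding is their weighted sum. This is an assumption-based stand-in, not the paper's dual-stream aggregator or modality attention module.

```python
# Attention-based MIL pooling over per-slice embeddings (generic sketch).
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instances):
        # instances: (num_slices, dim) embeddings of the 2D slices of one scan.
        scores = torch.softmax(self.attn(instances), dim=0)   # (num_slices, 1)
        bag = torch.sum(scores * instances, dim=0)            # (dim,) weighted bag embedding
        return bag, scores.squeeze(-1)

slices = torch.randn(30, 256)                 # 30 slice embeddings of width 256
bag, weights = AttentionMILPool(256)(slices)  # bag feeds a downstream MVI classifier
```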
Affiliation(s)
- Zhi Li: School of Medicine, Shanghai University, Shanghai, China; Shanghai Universal Medical Imaging Diagnostic Center, Shanghai University, Shanghai, China
- Yutao Wang: The First Affiliated Hospital of Ningbo University, Ningbo, China
- Yuzhao Zhu: Shanghai Universal Medical Imaging Diagnostic Center, Shanghai University, Shanghai, China
- Jiafeng Xu: Shanghai Universal Medical Imaging Diagnostic Center, Shanghai University, Shanghai, China
- Jinzhu Wei: School of Medicine, Shanghai University, Shanghai, China
- Jiang Xie: School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Jian Zhang: Shanghai Universal Medical Imaging Diagnostic Center, Shanghai University, Shanghai, China
9. Chen G, Li Z, Wang J, Wang J, Du S, Zhou J, Shi J, Zhou Y. An improved 3D KiU-Net for segmentation of liver tumor. Comput Biol Med 2023; 160:107006. [PMID: 37159962] [DOI: 10.1016/j.compbiomed.2023.107006]
Abstract
It is a challenging task to accurately segment liver tumors from computed tomography (CT) images. The widely used U-Net and its variants generally struggle to accurately segment the detailed edges of small tumors, because the progressive downsampling operations in the encoder module gradually enlarge the receptive fields. These enlarged receptive fields have limited ability to learn information about tiny structures. KiU-Net is a newly proposed dual-branch model that can effectively perform image segmentation for small targets. However, the 3D version of KiU-Net has high computational complexity, which limits its application. In this work, an improved 3D KiU-Net (named TKiU-NeXt) is proposed for liver tumor segmentation from CT images. In TKiU-NeXt, a Transformer-based Kite-Net (TK-Net) branch is proposed to build an over-complete architecture that learns more detailed features of small structures, and an extended 3D version of UNeXt is developed to replace the original U-Net branch, which effectively reduces computational complexity while retaining superior segmentation performance. Moreover, a Mutual Guided Fusion Block (MGFB) is designed to effectively learn more features from the two branches and then fuse the complementary features for image segmentation. Experimental results on two public CT datasets and a private dataset demonstrate that the proposed TKiU-NeXt outperforms all compared algorithms with lower computational complexity, suggesting the effectiveness and efficiency of TKiU-NeXt.
Affiliation(s)
- Guodong Chen: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Zheng Li: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Jian Wang: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Jun Wang: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Shisuo Du: Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jinghao Zhou: University of Maryland School of Medicine, Baltimore, MD, USA
- Jun Shi: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Yongkang Zhou: Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
10. Chen Y, Zheng C, Zhou T, Feng L, Liu L, Zeng Q, Wang G. A deep residual attention-based U-Net with a biplane joint method for liver segmentation from CT scans. Comput Biol Med 2023; 152:106421. [PMID: 36527780] [DOI: 10.1016/j.compbiomed.2022.106421]
Abstract
Liver tumours are diseases with high morbidity and high deterioration probabilities, and accurate liver area segmentation from computed tomography (CT) scans is a prerequisite for quick tumour diagnosis. While 2D network segmentation methods can perform segmentation with lower device performance requirements, they often discard the rich 3D spatial information contained in CT scans, limiting their segmentation accuracy. Hence, a deep residual attention-based U-shaped network (DRAUNet) with a biplane joint method for liver segmentation is proposed in this paper, where the biplane joint method introduces coronal CT slices to assist the transverse slices with segmentation, incorporating more 3D spatial information into the segmentation results to improve the segmentation performance of the network. Additionally, a novel deep residual block (DR block) and dual-effect attention module (DAM) are introduced in DRAUNet, where the DR block has deeper layers and two shortcut paths. The DAM efficiently combines the correlations of feature channels and the spatial locations of feature maps. The DRAUNet with the biplane joint method is tested on three datasets, Liver Tumour Segmentation (LiTS), 3D Image Reconstruction for Comparison of Algorithms Database (3DIRCADb), and Segmentation of the Liver Competition 2007 (Sliver07), and it achieves 97.3%, 97.4%, and 96.9% Dice similarity coefficients (DSCs) for liver segmentation, respectively, outperforming most state-of-the-art networks; this strongly demonstrates the segmentation performance of DRAUNet and the ability of the biplane joint method to obtain 3D spatial information from 3D images.
Affiliation(s)
- Ying Chen: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Cheng Zheng: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Taohui Zhou: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Longfeng Feng: School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China
- Lan Liu: Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Qiao Zeng: Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China
- Guoqing Wang: Zhejiang Suosi Technology Co. Ltd, Wenzhou, 325000, PR China
11. RTUNet: Residual transformer UNet specifically for pancreas segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104173]
12. Wang X, Wang S, Zhang Z, Yin X, Wang T, Li N. CPAD-Net: Contextual parallel attention and dilated network for liver tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104258]
13. Gong Z, Song J, Guo W, Ju R, Zhao D, Tan W, Zhou W, Zhang G. Abdomen tissues segmentation from computed tomography images using deep learning and level set methods. Math Biosci Eng 2022; 19:14074-14085. [PMID: 36654080] [DOI: 10.3934/mbe.2022655]
Abstract
Accurate abdominal tissue segmentation is one of the crucial tasks in radiation therapy planning for related diseases. However, abdominal tissue segmentation (liver, kidney) is difficult because of the low contrast between abdominal tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdominal tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-Net model with an attention mechanism is then constructed to obtain the initial abdominal tissue segmentation. Finally, level set evolution, which consists of three energy terms, is used to optimize the initial segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice scores are 96.2% and 95.1% for the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice scores are 96.6% and 95.7% for the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method obtains better segmentation results than other methods.
Affiliation(s)
- Zhaoxuan Gong: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Jing Song: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Wei Guo: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Ronghui Ju: Liaoning Provincial People's Hospital, Shenyang 110067, China
- Dazhe Zhao: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wenjun Tan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wei Zhou: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Guodong Zhang: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
14. He R, Xu S, Liu Y, Li Q, Liu Y, Zhao N, Yuan Y, Zhang H. Three-Dimensional Liver Image Segmentation Using Generative Adversarial Networks Based on Feature Restoration. Front Med (Lausanne) 2022; 8:794969. [PMID: 35071275] [PMCID: PMC8777029] [DOI: 10.3389/fmed.2021.794969]
Abstract
Medical imaging provides a powerful tool for medical diagnosis. In computer-aided diagnosis and treatment of liver cancer based on medical imaging, accurate segmentation of the liver region from abdominal CT images is an important step. However, owing to defects of liver tissue and limitations of the CT imaging process, the gray level of the liver region in CT images is heterogeneous, and the boundary between the liver and adjacent tissues and organs is blurred, which makes liver segmentation an extremely difficult task. In this study, aiming to solve the problem of low segmentation accuracy of the original 3D U-Net network, an improved network based on the three-dimensional (3D) U-Net is proposed. Moreover, to address the problem of insufficient training data caused by the difficulty of acquiring labeled 3D data, the improved 3D U-Net network is embedded into the framework of generative adversarial networks (GAN), which establishes a semi-supervised 3D liver segmentation optimization algorithm. Finally, considering the poor quality of 3D abdominal fake images generated from random noise inputs, a deep convolutional neural network (DCNN) based on a feature restoration method is designed to generate more realistic fake images. Testing the proposed algorithm on the LiTS-2017 and KiTS19 datasets shows that the proposed semi-supervised 3D liver segmentation method can greatly improve liver segmentation performance, with a Dice score of 0.9424, outperforming other methods.
Affiliation(s)
- Runnan He: Peng Cheng Laboratory, Shenzhen, China
- Shiqi Xu: School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin, China
- Yashu Liu: School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin, China
- Qince Li: Peng Cheng Laboratory, Shenzhen, China; School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin, China
- Yang Liu: School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin, China
- Na Zhao: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yongfeng Yuan: School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin, China
- Henggui Zhang: Peng Cheng Laboratory, Shenzhen, China; School of Physics and Astronomy, The University of Manchester, Manchester, United Kingdom; Key Laboratory of Medical Electrophysiology of Ministry of Education and Medical Electrophysiological Key Laboratory of Sichuan Province, Institute of Cardiovascular Research, Southwest Medical University, Luzhou, China