1
Wen M, Shcherbakov P, Xu Y, Li J, Hu Y, Zhou Q, Liang H, Yuan L, Zhang X. A temporal enhanced semi-supervised training framework for needle segmentation in 3D ultrasound images. Phys Med Biol 2024; 69:115023. [PMID: 38684166] [DOI: 10.1088/1361-6560/ad450b]
Abstract
Objective. Automated biopsy needle segmentation in 3D ultrasound images can be used for biopsy navigation, but it is quite challenging due to the low ultrasound image resolution and interference with an appearance similar to the needle. For 3D medical image segmentation, deep learning networks such as convolutional neural networks and transformers have been investigated. However, these segmentation methods require large amounts of labeled training data, have difficulty meeting real-time segmentation requirements, and involve high memory consumption. Approach. In this paper, we propose a temporal information-based semi-supervised training framework for fast and accurate needle segmentation. First, a novel circle transformer module based on static and dynamic features is placed after the encoders to extract and fuse temporal information. Then, consistency constraints between the outputs before and after incorporating temporal information are proposed to provide semi-supervision for unlabeled volumes. Finally, the model is trained using a loss function that combines cross-entropy and Dice similarity coefficient (DSC) based segmentation losses with a mean-square-error based consistency loss. At inference, the trained model takes a single ultrasound volume as input to segment the needle. Main results. Experimental results on three needle ultrasound datasets acquired during beagle biopsies show that our approach is superior to the most competitive mainstream temporal segmentation model and semi-supervised method, providing a higher DSC (77.1% versus 76.5%) and smaller needle tip position (1.28 mm versus 1.87 mm) and length (1.78 mm versus 2.19 mm) errors on the kidney dataset, as well as a higher DSC (78.5% versus 76.9%) and smaller needle tip position (0.86 mm versus 1.12 mm) and length (1.01 mm versus 1.26 mm) errors on the prostate dataset. Significance. The proposed method can significantly enhance needle segmentation accuracy by training with sequential images at no additional cost. This enhancement may further improve the effectiveness of biopsy navigation systems.
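As a concrete illustration of the loss described in this abstract, the minimal Python sketch below (an illustrative assumption, not the authors' code; the names `eps` and `w_cons` are invented here) combines cross-entropy and Dice-based segmentation terms on labeled voxels with a mean-square-error consistency term between predictions before and after temporal fusion:

```python
# Illustrative sketch of a CE + Dice + MSE-consistency training loss.
# Predictions and labels are flat per-voxel lists; eps avoids log(0)/division by 0.
import math

def cross_entropy(probs, labels, eps=1e-7):
    """Mean binary cross-entropy over voxels."""
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(probs)

def dice_loss(probs, labels, eps=1e-7):
    """1 - soft Dice similarity coefficient."""
    inter = sum(p * y for p, y in zip(probs, labels))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(labels) + eps)

def mse_consistency(probs_a, probs_b):
    """Mean squared difference between two predictions of the same volume."""
    return sum((a - b) ** 2 for a, b in zip(probs_a, probs_b)) / len(probs_a)

def total_loss(probs, labels, probs_pre, probs_post, w_cons=0.5):
    """Supervised CE + Dice loss plus a weighted consistency loss."""
    return (cross_entropy(probs, labels) + dice_loss(probs, labels)
            + w_cons * mse_consistency(probs_pre, probs_post))
```

The consistency term needs no labels, which is what lets the unlabeled volumes contribute to training.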
Affiliation(s)
- Mingwei Wen
  - Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
- Pavel Shcherbakov
  - Institute for Control Science, Russian Academy of Sciences, 65, Profsoyuznaya str., Moscow 117997, Russia
- Yang Xu
  - Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
  - Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Jing Li
  - Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Yi Hu
  - Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Quan Zhou
  - Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
- Huageng Liang
  - Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, No 13, Hangkong Road, Wuhan 430022, People's Republic of China
- Li Yuan
  - Department of Ultrasound Imaging, Wuhan Children's Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, People's Republic of China
- Xuming Zhang
  - Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
2
Amjad A, Xu J, Thill D, Zhang Y, Ding J, Paulson E, Hall W, Erickson BA, Li XA. Deep learning auto-segmentation on multi-sequence magnetic resonance images for upper abdominal organs. Front Oncol 2023; 13:1209558. [PMID: 37483486] [PMCID: PMC10358771] [DOI: 10.3389/fonc.2023.1209558]
Abstract
Introduction. Multi-sequence, multi-parameter MRIs are often used to define targets and/or organs at risk (OAR) in radiation therapy (RT) planning. Deep learning has so far focused on developing auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep learning based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods. Using a previously developed 3DResUnet network, a mS-DLAS model was trained and tested using four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. The performance of all models was quantitatively evaluated using six surface and volumetric accuracy metrics. Results. The developed DLAS models were able to generate reasonable contours of 12 upper abdominal organs within 21 seconds for each testing case. For the mS-DLAS model, the 3D average values of the Dice similarity coefficient (DSC), mean distance to agreement (MDA, mm), 95th-percentile Hausdorff distance (HD95%, mm), percent volume difference (PVD), surface DSC (sDSC), and relative added path length (rAPL, mm/cc) over all organs were 0.87, 1.79, 7.43, -8.95, 0.82, and 12.25, respectively. Collectively, 71% of the contours auto-segmented by the three models had relatively high quality. Additionally, the obtained mS-DLAS successfully segmented 9 out of 16 MRI sequences that were not used in model training. Conclusion. We have developed an MRI-based mS-DLAS model for auto-segmenting upper abdominal organs on MRI. Multi-sequence segmentation is desirable in routine clinical practice of RT for accurate organ and target delineation, particularly for abdominal tumors. Our work will act as a stepping stone toward fast and accurate segmentation on multi-contrast MRI and pave the way for MR-only guided radiation therapy.
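Two of the six accuracy metrics reported above can be sketched in a few lines of Python. This is an illustrative re-implementation of the standard definitions, not the study's evaluation code, operating on binary masks stored as flat 0/1 lists:

```python
# Illustrative definitions of two volumetric accuracy metrics for binary masks.
def dsc(auto, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and r for a, r in zip(auto, ref))
    return 2.0 * inter / (sum(auto) + sum(ref))

def pvd(auto, ref):
    """Percent volume difference relative to the reference volume."""
    return 100.0 * (sum(auto) - sum(ref)) / sum(ref)
```

Under these definitions a perfect match gives DSC = 1.0, and a negative PVD (as in the reported -8.95) means the auto-segmented volume is smaller than the reference.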
Affiliation(s)
- Asma Amjad
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Dan Thill
  - Elekta Inc., St. Charles, MO, United States
- Ying Zhang
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Jie Ding
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Eric Paulson
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- William Hall
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Beth A. Erickson
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- X. Allen Li
  - Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
3
Li C, Mao Y, Guo Y, Li J, Wang Y. Multi-dimensional cascaded net with uncertain probability reduction for abdominal multi-organ segmentation in CT sequences. Comput Methods Programs Biomed 2022; 221:106887. [PMID: 35597204] [DOI: 10.1016/j.cmpb.2022.106887]
Abstract
BACKGROUND AND OBJECTIVE. Deep learning abdominal multi-organ segmentation provides preoperative guidance for abdominal surgery. However, due to the large volume of 3D CT sequences, existing methods cannot balance complete semantic features with high-resolution detail information, which leads to uncertain, rough, and inaccurate segmentation, especially of small and irregular organs. In this paper, we propose a two-stage algorithm named multi-dimensional cascaded net (MDCNet) to solve these problems and segment multiple organs in CT images, including the spleen, kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. METHODS. MDCNet combines the powerful semantic encoding ability of a 3D net with the rich high-resolution information of a 2.5D net. In stage 1, a prior-guided, shallow-layer-enhanced 3D location net extracts entire semantic features from a downsampled CT volume to perform rough segmentation. Additionally, we use circular inference and a parameter Dice loss to alleviate boundary uncertainty. The inputs of stage 2 are high-resolution slices, obtained from the original image and the coarse segmentation of stage 1. Stage 2 offsets the details lost during downsampling, resulting in smooth and accurate refined contours. The 2.5D net, drawing on the axial, coronal, and sagittal views, also compensates for the spatial information missing from any single view. RESULTS. The experiments on two datasets both obtained the best performance, with particularly higher Dice scores on small gallbladders and irregular duodenums, which reached 0.85±0.12 and 0.77±0.07 respectively, increases of 0.02 and 0.03 over the state-of-the-art method. CONCLUSION. Our method can extract both semantic and high-resolution detail information from a large-volume CT image. It reduces boundary uncertainty while yielding smoother segmentation edges, indicating good prospects for clinical application.
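The 2.5D idea of reading the same volume from the three orthogonal views can be sketched as plain re-slicing. This is a schematic illustration (not the authors' implementation), with the CT volume stored as a nested Python list indexed [z][y][x]:

```python
# Illustrative tri-view slicing of a 3D volume indexed vol[z][y][x].
def axial_slice(vol, z):
    """Slice perpendicular to z; shape (Y, X)."""
    return [row[:] for row in vol[z]]

def coronal_slice(vol, y):
    """Slice perpendicular to y; shape (Z, X)."""
    return [vol[z][y][:] for z in range(len(vol))]

def sagittal_slice(vol, x):
    """Slice perpendicular to x; shape (Z, Y)."""
    return [[vol[z][y][x] for y in range(len(vol[0]))]
            for z in range(len(vol))]
```

Feeding all three slicings to a 2D-style network gives it complementary spatial context that any one view alone would miss.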
Affiliation(s)
- Chengkang Li
  - School of Information Science and Technology of Fudan University, Shanghai 200433, China
  - Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yishen Mao
  - Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yi Guo
  - School of Information Science and Technology of Fudan University, Shanghai 200433, China
  - Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Ji Li
  - Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yuanyuan Wang
  - School of Information Science and Technology of Fudan University, Shanghai 200433, China
  - Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
4
Liang X, Li N, Zhang Z, Xiong J, Zhou S, Xie Y. Incorporating the hybrid deformable model for improving the performance of abdominal CT segmentation via multi-scale feature fusion network. Med Image Anal 2021; 73:102156. [PMID: 34274689] [DOI: 10.1016/j.media.2021.102156]
Abstract
Automated multi-organ abdominal computed tomography (CT) image segmentation can assist treatment planning and diagnosis and improve the efficiency of many clinical workflows. 3-D convolutional neural networks (CNNs) have recently attained state-of-the-art accuracy, typically relying on supervised training with large amounts of manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate over-fitting and improve network robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformations in the abdomen, which is filled with soft organs. To tackle this issue, we developed a novel Hybrid Deformable Model (HDM), which consists of inter- and intra-patient deformations for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence in network training, we fused pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice similarity coefficient (DSC) of 0.852, outperforming other state-of-the-art methods on multi-organ abdominal CT segmentation.
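The key property of voxel-based deformable augmentation, as opposed to one global rigid or affine transform, is that every voxel can move independently under a dense displacement field. The sketch below is an assumption for illustration only: a nearest-neighbour warp with integer displacements stands in for the paper's registration-derived (inter-patient) and TPS-derived (intra-patient) fields.

```python
# Illustrative voxel-wise warp: each output voxel samples the input at a
# per-voxel displaced location. disp[z][y][x] = (dz, dy, dx), integers.
def warp_nearest(vol, disp):
    """Warp vol[z][y][x] by the dense displacement field disp (zero-padded)."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                dz, dy, dx = disp[z][y][x]
                sz, sy, sx = z + dz, y + dy, x + dx  # sampling location
                if 0 <= sz < Z and 0 <= sy < Y and 0 <= sx < X:
                    out[z][y][x] = vol[sz][sy][sx]
    return out
```

With a zero field the warp is the identity; varying the field per voxel is what lets such augmentation mimic the subtle, spatially local deformations of soft abdominal organs.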
Affiliation(s)
- Xiaokun Liang
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
  - Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Na Li
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
  - Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Zhicheng Zhang
  - Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jing Xiong
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shoujun Zhou
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China