1. Liu M, Shao X, Jiang L, Wu K. 3D EAGAN: 3D edge-aware attention generative adversarial network for prostate segmentation in transrectal ultrasound images. Quant Imaging Med Surg 2024;14:4067-4085. PMID: 38846298; PMCID: PMC11151225; DOI: 10.21037/qims-23-1698.
Abstract
Background: Segmentation of the prostate from transrectal ultrasound (TRUS) images is a critical step in the diagnosis and treatment of prostate cancer, but manual segmentation by physicians is time-consuming and laborious, so there is a pressing need for computerized algorithms that can segment the prostate in TRUS images autonomously. Automatic prostate segmentation in TRUS images has remained challenging because the prostate has ambiguous boundaries and an inhomogeneous intensity distribution, and although many segmentation methods have been proposed, they still lack sensitivity to edge information. The objective of this study is therefore to devise a highly effective prostate segmentation method that overcomes these limitations. Methods: A three-dimensional (3D) edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed, consisting of an edge-aware segmentation network (EASNet) that performs the segmentation and a discriminator network that distinguishes predicted prostates from real ones. EASNet is composed of an encoder-decoder U-Net backbone, a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The DCM compensates for the loss of detailed information caused by down-sampling in the encoder, and its features are selectively enhanced by the 3D SCAM. The EEM guides the shallow layers of EASNet to focus on contour and edge information in the prostate. Finally, features from the shallow layers and hierarchical features from the decoder are fused through the GFE to predict the prostate segmentation. Results: The proposed method is evaluated on our TRUS image dataset and the open-source µRegPro dataset, where it significantly improved the average Dice score from 85.33% to 90.06%, the Jaccard score from 76.09% to 84.11%, the Hausdorff distance (HD) from 8.59 to 4.58 mm, precision from 86.48% to 90.58%, and recall from 84.79% to 89.24%. Conclusions: A novel 3D EAGAN-based prostate segmentation method consisting of an EASNet and a discriminator network achieves satisfactory results for prostate segmentation in 3D TRUS images.
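The Dice, Jaccard, precision, and recall figures reported above are standard overlap metrics. For reference, a minimal sketch of how they are computed from binary masks (illustrative only; the function and the toy masks are hypothetical, not the authors' code):

```python
# Overlap metrics on binary masks, represented here as sets of voxel
# coordinates (a simplification of the usual dense-array formulation).

def overlap_metrics(pred, truth):
    """Return Dice, Jaccard, precision, and recall for two voxel sets."""
    tp = len(pred & truth)                       # true positives
    dice = 2 * tp / (len(pred) + len(truth))
    jaccard = tp / len(pred | truth)
    precision = tp / len(pred)
    recall = tp / len(truth)
    return dice, jaccard, precision, recall

pred = {(0, 0), (0, 1), (1, 0)}                  # hypothetical prediction
truth = {(0, 0), (0, 1), (1, 1)}                 # hypothetical ground truth
dice, jaccard, precision, recall = overlap_metrics(pred, truth)
```

The Hausdorff distance, by contrast, is a boundary metric and needs the surface point coordinates rather than overlap counts.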
Affiliation(s)
- Mengqing Liu: School of Computer and Information Engineering, Nantong Institute of Technology, Nantong, China; School of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Xiao Shao: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Liping Jiang: Department of Ultrasound Medicine, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Kaizhi Wu: School of Information Engineering, Nanchang Hangkong University, Nanchang, China
2. Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Shadow-consistent semi-supervised learning for prostate ultrasound segmentation. IEEE Trans Med Imaging 2022;41:1331-1345. PMID: 34971530; PMCID: PMC9709821; DOI: 10.1109/tmi.2021.3139999.
Abstract
Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, yet it remains a long-standing problem due to low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Shadow-AUG enriches the training samples by adding simulated shadow artifacts to the images, making the network robust to shadow patterns. Shadow-DROP forces the segmentation network to infer the prostate boundary from the neighboring shadow-free pixels. Extensive experiments are conducted on two large clinical datasets (a public dataset containing 1,761 TRUS volumes and an in-house dataset containing 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with Shadow-AUG and Shadow-DROP outperforms the state-of-the-art methods with statistical significance. In the semi-supervised setting, even with only 20% of the training data labeled, SCO-SSL still achieves highly competitive performance, suggesting great clinical value in relieving the labor of data annotation. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
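The Shadow-AUG idea of darkening a band of the image to mimic an acoustic shadow can be sketched as follows (a toy illustration under assumed parameters, not the released SCO-SSL code):

```python
import random

def shadow_aug(image, width=3, strength=0.2, seed=0):
    """Attenuate a random contiguous band of columns to simulate an
    acoustic shadow artifact. `image` is a list of rows of floats in
    [0, 1]; `width` and `strength` are illustrative choices."""
    rng = random.Random(seed)
    cols = len(image[0])
    start = rng.randrange(cols - width + 1)      # random band position
    out = [row[:] for row in image]              # copy; leave input intact
    for row in out:
        for c in range(start, start + width):
            row[c] *= strength                   # darken pixels in the band
    return out

img = [[1.0] * 8 for _ in range(4)]              # hypothetical bright image
aug = shadow_aug(img)
```

Training on such augmented samples is what makes the network see shadow-like patterns it would otherwise only meet at test time.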
3. Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Polar transform network for prostate ultrasound segmentation with uncertainty estimation. Med Image Anal 2022;78:102418. PMID: 35349838; PMCID: PMC9082929; DOI: 10.1016/j.media.2022.102418.
Abstract
Automatic and accurate prostate ultrasound segmentation is a long-standing and challenging problem due to severe noise and ambiguous or missing prostate boundaries. In this work, we propose a novel polar transform network (PTN) that handles this problem from a fundamentally new perspective: the prostate is represented and segmented in polar coordinate space rather than the original image grid space. This representation samples the prostate volume, especially the most challenging apex and base sub-areas, much more densely than the background, and thus facilitates the learning of discriminative features for accurate prostate segmentation. Moreover, in the polar representation the prostate surface can be efficiently parameterized as a 2D map of surface radii with respect to a centroid coordinate, which allows the proposed PTN to obtain superior accuracy compared with its convolutional-neural-network counterparts while having significantly fewer (18% to 41%) trainable parameters. We also equip our PTN with a novel strategy of centroid-perturbed test-time augmentation (CPTTA), designed to further improve segmentation accuracy while quantitatively assessing model uncertainty. The uncertainty estimate provides valuable feedback to clinicians when manual modification or approval of a segmentation is required, substantially improving the clinical significance of our work. We conduct three-fold cross-validation on a clinical dataset of 315 transrectal ultrasound (TRUS) images to comprehensively evaluate the performance of the proposed method. The experimental results show that our PTN with CPTTA outperforms the state-of-the-art methods with statistical significance on most metrics while having a much smaller model size. Source code of the proposed PTN is released at https://github.com/DIAL-RPI/PTN.
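The core idea of resampling the image onto a centroid-centered polar grid can be sketched in a few lines. This is a toy 2-D nearest-neighbour version under assumed grid sizes, not the PTN implementation:

```python
import math

def to_polar(image, cx, cy, n_angles=8, n_radii=5):
    """Resample a 2-D image (list of rows) onto a polar grid centred on
    (cx, cy) using nearest-neighbour lookup. Each output row is one ray:
    samples near the centroid (the prostate) are denser than far ones."""
    h, w = len(image), len(image[0])
    polar = []
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        ray = []
        for r in range(n_radii):
            x = min(w - 1, max(0, round(cx + r * math.cos(theta))))
            y = min(h - 1, max(0, round(cy + r * math.sin(theta))))
            ray.append(image[y][x])              # nearest-neighbour sample
        polar.append(ray)
    return polar

img = [[x + 10 * y for x in range(10)] for y in range(10)]
polar = to_polar(img, cx=5, cy=5)
```

In this representation the prostate surface reduces to one radius value per angle, which is the 2D surface radius map the abstract describes.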
Affiliation(s)
- Xuanang Xu: Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Thomas Sanford: Department of Urology, The State University of New York Upstate Medical University, Syracuse, NY 13210, USA
- Baris Turkbey: Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sheng Xu: Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood: Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Pingkun Yan: Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
4. Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10179-4.
5. Zhu Q, Li L, Hao J, Zha Y, Zhang Y, Cheng Y, Liao F, Li P. Selective information passing for MR/CT image segmentation. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-05407-3.
6. Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep attentive features for prostate segmentation in 3D transrectal ultrasound. IEEE Trans Med Imaging 2019;38:2768-2778. PMID: 31021793; DOI: 10.1109/tmi.2019.2913184.
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is essential for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing or ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module selectively leverages the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise in the shallow layers of the CNN and enriching the prostate detail in features at the deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy for aggregating multi-level deep features and has the potential to be used in other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
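The attentive fusion idea, that multi-level features are weighted by data-dependent softmax attention rather than simply averaged, can be reduced to a scalar toy example (an illustration of the weighting mechanism only, not the DAF3D implementation, which operates on full feature maps):

```python
import math

def attention_fuse(features):
    """Fuse per-level feature values using softmax attention weights
    derived from the features themselves; stronger responses get
    proportionally more weight in the fused output."""
    exps = [math.exp(f) for f in features]
    total = sum(exps)
    weights = [e / total for e in exps]          # attention weights, sum to 1
    fused = sum(w * f for w, f in zip(weights, features))
    return fused, weights

# Three hypothetical feature levels; the deepest (2.0) dominates.
fused, weights = attention_fuse([0.5, 1.0, 2.0])
```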
7. Karimi D, Zeng Q, Mathur P, Avinash A, Mahdavi S, Spadinger I, Abolmaesumi P, Salcudean SE. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med Image Anal 2019;57:186-196. PMID: 31325722; DOI: 10.1016/j.media.2019.07.005.
Abstract
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Second, we train a CNN ensemble and use the disagreement among this ensemble to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing prior shape information in the form of a statistical shape model. Our method achieves a Hausdorff distance of 2.7 ± 2.3 mm and a Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of committing large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimating prediction uncertainty in deep learning models. Our study demonstrates that estimation of model uncertainty and use of prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
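Uncertainty-by-disagreement reduces to computing the spread of the ensemble's per-pixel predictions. A minimal sketch with hypothetical per-model probability lists (not the paper's code):

```python
from statistics import mean, pstdev

def ensemble_uncertainty(predictions):
    """Given a list of per-model probability lists (one value per pixel),
    return the per-pixel mean prediction and the per-pixel disagreement
    (population standard deviation) as an uncertainty score."""
    n_pix = len(predictions[0])
    means = [mean(p[i] for p in predictions) for i in range(n_pix)]
    uncert = [pstdev([p[i] for p in predictions]) for i in range(n_pix)]
    return means, uncert

# Three hypothetical models: pixel 0 is agreed upon, pixel 1 is contested.
preds = [[0.9, 0.2], [0.9, 0.8], [0.9, 0.5]]
means, uncert = ensemble_uncertainty(preds)
```

Pixels whose uncertainty is high are exactly the ones the paper routes to the statistical shape model for correction.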
Affiliation(s)
- Davood Karimi: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Qi Zeng: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Prateek Mathur: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Apeksha Avinash: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Purang Abolmaesumi: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Septimiu E Salcudean: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
8. Jensen C, Sørensen KS, Jørgensen CK, Nielsen CW, Høy PC, Langkilde NC, Østergaard LR. Prostate zonal segmentation in 1.5T and 3T T2W MRI using a convolutional neural network. J Med Imaging (Bellingham) 2019;6:014501. PMID: 30820440; DOI: 10.1117/1.jmi.6.1.014501.
Abstract
Zonal segmentation of the prostate gland using magnetic resonance imaging (MRI) is clinically important for prostate cancer (PCa) diagnosis and image-guided treatments. A two-dimensional convolutional neural network (CNN) based on the U-net architecture was evaluated for segmentation of the central gland (CG) and peripheral zone (PZ) using a dataset of 40 patients (34 PCa positive and 6 PCa negative) scanned on two different MRI scanners (1.5T GE and 3T Siemens). Images were cropped around the prostate gland to exclude surrounding tissues, resampled to 0.5 × 0.5 × 0.5 mm voxels, and z-score normalized before being propagated through the CNN. Performance was evaluated using the Dice similarity coefficient (DSC) and mean absolute distance (MAD) in a fivefold cross-validation setup. Overall performance showed DSCs of 0.794 and 0.692, and MADs of 3.349 and 2.993, for CG and PZ, respectively. Dividing the gland into apex, mid, and base showed higher DSC for the midgland than for the apex and base for both CG and PZ. We found no significant difference in DSC between the two scanners. A larger dataset, preferably from multivendor scanners, is necessary for validation of the proposed algorithm; however, our results are promising and have clinical potential.
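The z-score normalization step mentioned above is a one-line transform: subtract the mean intensity and divide by the standard deviation, so every input volume enters the CNN with zero mean and unit variance. A minimal sketch on a flat intensity list (cropping and resampling omitted; the sample values are hypothetical):

```python
from statistics import mean, pstdev

def zscore(voxels):
    """Z-score normalise an intensity list: zero mean, unit variance."""
    m, s = mean(voxels), pstdev(voxels)
    return [(v - m) / s for v in voxels]

norm = zscore([10.0, 20.0, 30.0, 40.0])
```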
Affiliation(s)
- Carina Jensen: Aalborg University Hospital, Department of Medical Physics, Department of Oncology, Aalborg, Denmark
- Pia Christine Høy: Aalborg University, Department of Health Science and Technology, Aalborg, Denmark
9. Ghavami N, Hu Y, Bonmati E, Rodell R, Gibson E, Moore C, Barratt D. Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images. J Med Imaging (Bellingham) 2018;6:011003. PMID: 30840715; PMCID: PMC6102407; DOI: 10.1117/1.jmi.6.1.011003.
Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking as input, in addition to each slice to be segmented, one or more of its neighboring TRUS slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve segmentation performance in five out of six experiments, in which the number of neighboring slices on either side was varied from 1 to 3. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
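The neighboring-slice input scheme amounts to stacking each slice with its k neighbors on either side as extra channels, clamping at the ends of the volume. A sketch under those assumptions (the clamping policy is an illustrative choice, not necessarily the paper's):

```python
def stack_neighbours(volume, idx, k=1):
    """Build a (2k+1)-channel input for slice `idx` from its k neighbours
    on each side, repeating the boundary slice at the volume ends."""
    n = len(volume)
    return [volume[min(n - 1, max(0, idx + d))] for d in range(-k, k + 1)]

vol = ["s0", "s1", "s2", "s3"]        # placeholder stand-ins for 2-D slices
channels = stack_neighbours(vol, idx=0, k=1)
```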
Affiliation(s)
- Nooshin Ghavami: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- Yipeng Hu: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- Ester Bonmati: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- Rachael Rodell: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- Eli Gibson: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
- Caroline Moore: University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom; University College London, Division of Surgery and Interventional Science, London, United Kingdom; University College London Hospitals NHS Foundation Trust, Department of Urology, London, United Kingdom
- Dean Barratt: University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
10. Tang S, Cong W, Yang J, Fu T, Song H, Ai D, Wang Y. Local statistical deformation models for deformable image registration. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.03.039.
11. Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: a review. Comput Biol Med 2017;92:210-235. PMID: 29247890; DOI: 10.1016/j.compbiomed.2017.11.018.
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. It then provides insight into the localization and segmentation of tissues, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode-based segmentation are discussed, such as the integration of RF information, the use of higher-frequency probes where possible, the focus on fully automatic algorithms, and the increase in available data.
Affiliation(s)
- Kristen M Meiburger: Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya: Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari: Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
12. Li X, Li C, Fedorov A, Kapur T, Yang X. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges. Med Phys 2017;43:3090-3103. PMID: 27277056; DOI: 10.1118/1.4950721.
Abstract
PURPOSE: In this paper, the authors propose a novel, efficient method to segment ultrasound images of prostates with weak boundaries, a situation that widely exists in clinical applications; one of the most typical examples is the diagnosis and treatment of prostate cancer. Accurate segmentation of the prostate boundaries from ultrasound images plays an important role in many prostate-related applications, such as the accurate placement of biopsy needles, the assignment of the appropriate therapy in cancer treatment, and the measurement of prostate volume. METHODS: Ultrasound images of the prostate are usually corrupted by intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation of the prostate an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term in a modified level set energy functional. The active band term deals with intensity inhomogeneities, and the edge descriptor term captures weak boundaries or rules out unwanted boundaries. The level set function of the proposed model is updated in a band region around the zero level set, which the authors call an active band. The active band restricts the method to local image information in a banded region around the prostate contour. Compared with traditional level set methods, the average intensities inside/outside the zero level set are computed only in this banded region; thus, only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches within the band region around the prostate boundary. The authors incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in ultrasound images. RESULTS: The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation results on real 3D TRUS prostate images show that the model can obtain a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was performed to evaluate the robustness of the proposed model. CONCLUSIONS: Prostate segmentation from ultrasound images with weak boundaries and unwanted edges is a difficult task. A novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is efficient and accurate.
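The notion of "total intensity variation across an edge" can be illustrated on a 1-D intensity profile taken perpendicular to a candidate boundary: summing absolute intensity differences along the profile scores sharp steps higher than gradual drifts. A toy stand-in for the paper's edge descriptor, with hypothetical intensity values:

```python
def edge_strength(profile):
    """Total intensity variation along a 1-D profile taken across a
    candidate boundary: sum of absolute neighbouring differences."""
    return sum(abs(b - a) for a, b in zip(profile, profile[1:]))

weak = edge_strength([50, 52, 55, 58, 60])    # gradual change: weak edge
strong = edge_strength([50, 50, 90, 90, 90])  # sharp step: strong edge
```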
Affiliation(s)
- Xu Li: School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
- Chunming Li: School of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Andriy Fedorov: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02446
- Tina Kapur: Department of Mathematics, Nanjing University, Nanjing 210093, China
- Xiaoping Yang: School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
13. Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation. Comput Biol Med 2016;74:74-90. PMID: 27208705; DOI: 10.1016/j.compbiomed.2016.05.002.
Abstract
Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) approach and a false edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. The optimized contour on one image slice was propagated to the adjacent slice and subsequently deformed using the level-set model, and the propagation continued until all image slices were segmented. To determine the initial slice where propagation begins, the initial prostate contour was deformed individually on each transverse image, and a method was developed to self-assess the accuracy of each deformed contour based on the average image intensity inside and outside the contour; the transverse image with the highest self-assessed accuracy was chosen as the initial slice for the propagation process. Evaluation was performed on 336 transverse images from 15 prostates, including images acquired at the mid-gland, base, and apex regions. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79 ± 0.26 mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland but also at the base and apex regions.
14. Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D transrectal ultrasound (TRUS) prostate segmentation based on optimal feature learning framework. Proc SPIE 2016;9784. PMID: 31467459; DOI: 10.1117/12.2216396.
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel, and the most robust and informative features are identified by a feature selection process to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate of a new patient. The segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (the gold standard); the mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute
- Peter J Rossi
- Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute
15. Sakalauskas A, Laučkaitė K, Lukoševičius A, Rastenytė D. Computer-Aided Segmentation of the Mid-Brain in Trans-Cranial Ultrasound Images. Ultrasound Med Biol 2016; 42:322-332. [PMID: 26603659] [DOI: 10.1016/j.ultrasmedbio.2015.09.009]
Abstract
This paper presents a novel and rapid method for semi-automated segmentation of the mid-brain region in B-mode trans-cranial ultrasound (TCS) images. TCS is a relatively new neuroimaging tool with promising applications in the early diagnosis of Parkinson's disease. The quality of TCS images is much lower than that of ultrasound images obtained during scanning of soft tissues, and the structures of interest in TCS are difficult to extract and evaluate. A combination of an experience-based statistical shape model and an intensity-amplitude-invariant edge detector was proposed for the extraction of fuzzy boundaries of the mid-brain in TCS images. The statistical shape model was constructed using 90 manual delineations of the mid-brain region made by a professional neurosonographer. A local phase-based edge detection strategy was applied to determine plausible mid-brain boundary points used for statistical shape fitting. The proposed method was tested on another 40 clinical TCS images evaluated by two experts. The averaged segmentation results revealed that the differences between manual and automated measurements are statistically insignificant (p > 0.05).
Affiliation(s)
- Andrius Sakalauskas
- Biomedical Engineering Institute, Kaunas University of Technology, Kaunas, Lithuania.
- Kristina Laučkaitė
- Department of Neurology, Lithuanian University of Health Sciences, Academy of Medicine, Kaunas, Lithuania
- Arūnas Lukoševičius
- Biomedical Engineering Institute, Kaunas University of Technology, Kaunas, Lithuania
- Daiva Rastenytė
- Department of Neurology, Lithuanian University of Health Sciences, Academy of Medicine, Kaunas, Lithuania
16. Segmentation of uterine fibroid ultrasound images using a dynamic statistical shape model in HIFU therapy. Comput Med Imaging Graph 2015; 46 Pt 3:302-14. [PMID: 26459767] [DOI: 10.1016/j.compmedimag.2015.07.004]
Abstract
Segmenting the lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity and blurry boundaries in HIFU US images, as well as the deformation of uterine fibroids caused by the patient's breathing or external force. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. To accurately learn the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of the stochastic differential equation and the Fokker-Planck equation are incorporated into the SSM (referred to as SF-SSM). Then, a new observation model of lesion areas (referred to as RPFM) in HIFU US images is developed to describe the features of the lesion areas and provide a likelihood for the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into an active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results on clinical HIFU US images validate the high accuracy and robustness of our approach, even when the quality of the images is unsatisfactory, indicating its potential for practical application in HIFU therapy.
17. Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE Trans Med Imaging 2015; 34:1321-1335. [PMID: 25576565] [DOI: 10.1109/tmi.2015.2388699]
Abstract
Accurate segmentation is usually crucial in transrectal ultrasound (TRUS) image-based prostate diagnosis; however, it is always hampered by heavy speckles. Contrary to the traditional view that speckles are adverse to segmentation, we exploit intrinsic properties induced by speckles to facilitate the task, based on the observation that the sizes and orientations of speckles provide salient cues for determining the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, a rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address the problem of feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every individual feature vector is sparsely represented, and representation residuals are obtained. The residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. After that, the segmentation is fine-tuned by a novel level-sets model, which integrates (1) the prostate shape prior, (2) the dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method is validated on two 2-D image datasets obtained from two different sonographic imaging systems, with mean absolute distances on the mid-gland images of only 1.06 ± 0.53 mm and 1.25 ± 0.77 mm, respectively. The method is also extended to segment apex and base images, producing results competitive with the state of the art.
18. Qin X, Tian Y, Yan P. Feature competition and partial sparse shape modeling for cardiac image sequences segmentation. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.07.044]
19. Qiu W, Yuan J, Ukwatta E, Fenster A. Rotationally resliced 3D prostate TRUS segmentation using convex optimization with shape priors. Med Phys 2015; 42:877-91. [DOI: 10.1118/1.4906129]
20. Sun F, Li H, Hao S. Shape analysis based on feature-preserving Elastic Quadratic Patch Modeling. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.05.042]
21. Li ZC, Li K, Zhan HL, Chen K, Chen MM, Xie YQ, Wang L. Augmenting interventional ultrasound using statistical shape model for guiding percutaneous nephrolithotomy: Initial evaluation in pigs. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.01.059]
22. Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014; 41:111915. [PMID: 25370648] [PMCID: PMC4241831] [DOI: 10.1118/1.4897615]
Abstract
PURPOSE The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under the transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS The authors' approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of their approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. RESULTS For the phantom study, the target registration error (TRE) of gold-markers was 0.41 ± 0.11 mm. 
For the ten patients, the TRE of gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors' approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS The authors have developed a novel approach to improve prostate contour utilizing intraoperative TRUS-based prostate volume in the CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineations, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
23. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning. Med Image Anal 2014; 19:176-86. [PMID: 25461336] [DOI: 10.1016/j.media.2014.10.003]
Abstract
Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve accuracy but limits computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly even as the repository's capacity and vertex number grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method.
The Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance were 94.31 ± 3.04%, 1.12 ± 0.69 mm, and 3.65 ± 1.40 mm, respectively.
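The core SSC problem described in this abstract, representing an input shape as a sparse combination of training shapes under an L1 penalty, can be sketched as follows. This is a toy illustration, not the paper's method: plain ISTA (proximal gradient with soft-thresholding) stands in for the homotopy solver, and `D`, `y`, and `lam` are illustrative names.

```python
import numpy as np

def sparse_shape_composition(D, y, lam=0.01, iters=500):
    # Represent an input shape y as a sparse combination D @ a of the
    # training shapes stored as columns of D, by minimizing
    #   0.5 * ||D @ a - y||^2 + lam * ||a||_1.
    # Plain ISTA is used here instead of the paper's homotopy solver.
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # inverse Lipschitz constant
    for _ in range(iters):
        grad = D.T @ (D @ a - y)             # gradient of the smooth term
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return a, D @ a
```

The homotopy solver the paper proposes traces the full regularization path and updates the solution incrementally as new shapes arrive, which is what makes it much faster than re-solving from scratch; the ISTA stand-in above only shows what the optimization computes.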
24. Qin X, Li X, Liu Y, Lu H, Yan P. Adaptive shape prior constrained level sets for bladder MR image segmentation. IEEE J Biomed Health Inform 2014; 18:1707-16. [PMID: 24235318] [DOI: 10.1109/jbhi.2013.2288935]
Abstract
Three-dimensional bladder wall segmentation for thickness measurement can be very useful for bladder magnetic resonance (MR) image analysis, since thickening of the bladder wall can indicate abnormality. However, it is a challenging task due to the artifacts inside the bladder lumen, weak boundaries in the apex and base areas, and complicated outside intensity distributions. To deal with these difficulties, in this paper, an adaptive shape prior constrained directional level set model is proposed to segment the inner and outer boundaries of the bladder wall. In addition, a coupled directional level set model is presented to refine the segmentation by exploiting prior knowledge of region information and minimum thickness. With our proposed method, the influence of the artifacts in the bladder lumen and the complicated tissues surrounding the bladder can be appreciably reduced. Furthermore, leakage on the weak boundaries can be avoided. Compared with other related methods, better results were obtained on 11 patients' 3-D bladder MR images by using the proposed method.
25. Ciurte A, Bresson X, Cuisenaire O, Houhou N, Nedevschi S, Thiran JP, Cuadra MB. Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut. PLoS One 2014; 9:e100972. [PMID: 25010530] [PMCID: PMC4091944] [DOI: 10.1371/journal.pone.0100972]
Abstract
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation; that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum-cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
Affiliation(s)
- Anca Ciurte
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Xavier Bresson
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
- Olivier Cuisenaire
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
- Nawal Houhou
- Swiss Institute of Bioinformatics (SIB), University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Sergiu Nedevschi
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
- Jean-Philippe Thiran
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Meritxell Bach Cuadra
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
26. Yang X, Rossi P, Ogunleye T, Jani AB, Curran WJ, Liu T. A New CT Prostate Segmentation for CT-Based HDR Brachytherapy. Proc SPIE Int Soc Opt Eng 2014; 9036:90362K. [PMID: 25821388] [DOI: 10.1117/12.2043695]
Abstract
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland, and then delivering radiation dose to the cancerous regions through these catheters. These catheters are often inserted with transrectal ultrasound (TRUS) guidance and the HDR treatment plan is based on the CT images. The main challenge for CT-based HDR planning is to accurately segment prostate volume in CT images due to the poor soft tissue contrast and additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach to segment the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intra-operative TRUS and planning CT images, and then used as landmarks for the TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure was used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated a submillimeter accuracy of our method. A pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate that were used to evaluate the registration accuracy, as well as previous diagnostic MR images that were used as the gold standard to assess the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm; the prostate volume difference between our approach and the MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcome.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B Jani
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
27. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE Trans Med Imaging 2014; 33:518-34. [PMID: 24495983] [PMCID: PMC4379484] [DOI: 10.1109/tmi.2013.2291495]
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning, which discards obsolete population-based knowledge, and forward learning, which incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼ 0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT.
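The two personalization steps named in this abstract, backward pruning and forward learning, can be sketched in miniature. This is an illustrative toy, not the authors' system: a nearest-centroid classifier stands in for the arbitrary discriminative classifier the framework permits, and all names (`ilsm_update`, `pop_X`, `pat_X`) are hypothetical.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Fit a two-class nearest-centroid classifier, a stand-in for the
    # framework's arbitrary discriminative classifier.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    X = np.asarray(X, dtype=float)
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

def ilsm_update(pop_X, pop_y, pat_X, pat_y):
    # One ILSM-style personalization step:
    #   backward pruning -- drop population samples that a model trained
    #                       on patient data misclassifies (obsolete
    #                       population knowledge);
    #   forward learning -- retrain on the kept population samples plus
    #                       the patient-specific samples.
    pat_model = nearest_centroid_fit(pat_X, pat_y)
    keep = predict(pat_model, pop_X) == pop_y
    X = np.vstack([pop_X[keep], pat_X])
    y = np.concatenate([pop_y[keep], pat_y])
    return nearest_centroid_fit(X, y)
```

The point of the design is that pruning removes only the population samples that conflict with this patient's appearance, so the personalized model still benefits from the compatible part of the population statistics.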
Affiliation(s)
- Yaozong Gao
- Department of Computer Science and the Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Yiqiang Zhan
- SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355 USA
- Dinggang Shen
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 136-701, Korea
28. Qiu W, Yuan J, Ukwatta E, Tessier D, Fenster A. Three-dimensional prostate segmentation using level set with shape constraint based on rotational slices for 3D end-firing TRUS guided biopsy. Med Phys 2014; 40:072903. [PMID: 23822454] [DOI: 10.1118/1.4810968]
Abstract
PURPOSE Prostate segmentation is an important step in the planning and treatment of 3D end-firing transrectal ultrasound (TRUS) guided prostate biopsy. In order to improve the accuracy and efficiency of prostate segmentation in 3D TRUS images, an improved level set method is incorporated into a rotational-slice-based 3D prostate segmentation to decrease the accumulated segmentation errors produced by the slice-by-slice segmentation method. METHODS A 3D image is first resliced into 2D slices in a rotational manner in both the clockwise and counterclockwise directions. All slices intersect approximately along the rotational scanning axis and have an equal angular spacing. Six to eight boundary points are selected to initialize a level set function to extract the prostate contour within the first slice. The segmented contour is then propagated to the adjacent slice and is used as the initial contour for segmentation. This process is repeated until all slices are segmented. A modified distance-regularized level set method is used to segment the prostate in all resliced 2D slices. In addition, shape-constraint and local-region-based energies are imposed to discourage the evolving level set function from leaking in regions with weak edges or without edges. An anchor-point-based energy is used to encourage the level set function to pass through the initially selected boundary points. The algorithm's performance was evaluated using distance- and volume-based metrics (sensitivity (Se), Dice similarity coefficient (DSC), mean absolute surface distance (MAD), maximum absolute surface distance (MAXD), and volume difference) by comparison with expert delineations. RESULTS The validation results using thirty 3D patient images showed that the authors' method can obtain a DSC of 93.1% ± 1.6%, a sensitivity of 93.0% ± 2.0%, a MAD of 1.18 ± 0.36 mm, a MAXD of 3.44 ± 0.8 mm, and a volume difference of 2.6 ± 1.9 cm(3) for the entire prostate.
A reproducibility experiment demonstrated that the proposed method yielded low intraobserver and interobserver variability in terms of DSC. The mean segmentation time of the authors' method for all patient 3D TRUS images was 55 ± 3.5 s, in addition to 30 ± 5 s for initialization. CONCLUSIONS To address the challenges involved with slice-based 3D prostate segmentation, a level set based method is proposed in this paper. This method is especially developed for a 3D end-firing TRUS guided prostate biopsy system. The extensive experimental results demonstrate that the proposed method is accurate, robust, and computationally efficient.
Affiliation(s)
- Wu Qiu
- Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8, Canada.
29. Kim SG, Seo YG. A TRUS Prostate Segmentation using Gabor Texture Features and Snake-like Contour. J Inf Process Syst 2013. [DOI: 10.3745/jips.2013.9.1.103]
30. Mahdavi SS, Moradi M, Morris WJ, Goldenberg SL, Salcudean SE. Fusion of ultrasound B-mode and vibro-elastography images for automatic 3D segmentation of the prostate. IEEE Trans Med Imaging 2012; 31:2073-2082. [PMID: 22829391] [DOI: 10.1109/tmi.2012.2209204]
Abstract
Prostate segmentation in B-mode images is a challenging task even when done manually by experts. In this paper we propose a 3D automatic prostate segmentation algorithm which makes use of information from both ultrasound B-mode and vibro-elastography data. We exploit the high contrast-to-noise ratio of vibro-elastography images of the prostate, in addition to the commonly used B-mode images, to implement a 2D Active Shape Model (ASM)-based segmentation algorithm on the mid-gland image. The prostate model is deformed by a combination of two measures: the gray-level similarity and the continuity of the prostate edge in both image types. The automatically obtained mid-gland contour is then used to initialize a 3D segmentation algorithm which models the prostate as a tapered and warped ellipsoid. Vibro-elastography images are used in addition to ultrasound images to improve boundary detection. We report a Dice similarity coefficient of 0.87 ± 0.07 and 0.87 ± 0.08 comparing the 2D automatic contours with manual contours of two observers on 61 images. For 11 cases, a whole-gland volume error of 10.2 ± 2.2% and 13.5 ± 4.1% and a whole-gland volume difference of -7.2 ± 9.1% and -13.3 ± 12.6% between 3D automatic and manual surfaces of two observers were obtained. This is the first validated work showing the fusion of B-mode and vibro-elastography data for automatic 3D segmentation of the prostate.
31. Pereyra M, Dobigeon N, Batatia H, Tourneret JY. Segmentation of skin lesions in 2-D and 3-D ultrasound images using a spatially coherent generalized Rayleigh mixture model. IEEE Trans Med Imaging 2012; 31:1509-1520. [PMID: 22434797] [DOI: 10.1109/tmi.2012.2190617]
Abstract
This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. The spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations on synthetic data illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2-D and 3-D ultrasound images.
Affiliation(s)
- Marcelo Pereyra
- University of Toulouse, IRIT/INP-ENSEEIHT, 31071 Toulouse Cedex 7, France.
|
32
|
Akbari H, Fei B. 3D ultrasound image segmentation using wavelet support vector machines. Med Phys 2012; 39:2972-84. [PMID: 22755682 PMCID: PMC3360689 DOI: 10.1118/1.4709607] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2011] [Revised: 04/09/2012] [Accepted: 04/11/2012] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. METHODS This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. RESULTS The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and that the sensitivity is 87.7% ± 4.9%. CONCLUSIONS The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate.
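The idea of wavelet texture descriptors feeding a classifier can be sketched as follows: a one-level Haar transform supplies the features, and a nearest-class-mean classifier stands in for the paper's trained support vector machines. The synthetic textures and all names below are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

def haar_features(patch):
    """One-level 2D Haar transform of a patch; returns mean absolute
    LL, LH, HL, HH coefficients as a 4-D texture descriptor."""
    a = patch[0::2, 0::2]; b = patch[0::2, 1::2]
    c = patch[1::2, 0::2]; d = patch[1::2, 1::2]
    ll, lh = (a + b + c + d) / 4, (a - b + c - d) / 4
    hl, hh = (a + b - c - d) / 4, (a - b - c + d) / 4
    return np.array([np.abs(ll).mean(), np.abs(lh).mean(),
                     np.abs(hl).mean(), np.abs(hh).mean()])

rng = np.random.default_rng(5)

def smooth_patch():   # low-frequency texture (one tissue class stand-in)
    return np.repeat(np.repeat(rng.random((4, 4)), 4, axis=0), 4, axis=1)

def speckly_patch():  # high-frequency texture (the other class)
    return rng.random((16, 16))

train = [(smooth_patch(), 0) for _ in range(30)] + \
        [(speckly_patch(), 1) for _ in range(30)]
X = np.array([haar_features(p) for p, _ in train])
y = np.array([lbl for _, lbl in train])

# Nearest-class-mean classifier in feature space (SVM stand-in).
means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
def classify(patch):
    f = haar_features(patch)
    return int(np.argmin(np.linalg.norm(means - f, axis=1)))

tests = [(smooth_patch(), 0) for _ in range(20)] + \
        [(speckly_patch(), 1) for _ in range(20)]
acc = np.mean([classify(p) == lbl for p, lbl in tests])
```

The paper's W-SVMs additionally train region-specific classifiers on three orthogonal planes and iterate against a shape model; this sketch only shows the feature-then-classify core.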
Affiliation(s)
- Hamed Akbari
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
|
33
|
Chen HC, Tsai PY, Huang HH, Shih HH, Wang YY, Chang CH, Sun YN. Registration-based segmentation of three-dimensional ultrasound images for quantitative measurement of fetal craniofacial structure. ULTRASOUND IN MEDICINE & BIOLOGY 2012; 38:811-823. [PMID: 22425377 DOI: 10.1016/j.ultrasmedbio.2012.01.025] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2011] [Revised: 01/05/2012] [Accepted: 01/26/2012] [Indexed: 05/31/2023]
Abstract
Segmentation of a fetal head from three-dimensional (3-D) ultrasound images is a critical step in the quantitative measurement of fetal craniofacial structure. However, two main issues complicate segmentation: fuzzy boundaries and large variations in pose and shape across different ultrasound images. In this article, we propose a new registration-based method for automatically segmenting the fetal head from 3-D ultrasound images. The proposed method first detects the eyes based on Gabor features to identify the pose of the fetus image. Then, a reference model, which is constructed from a fetal phantom and contains prior knowledge of head shape, is aligned to the image via feature-based registration. Finally, 3-D snake deformation is utilized to improve the boundary fitness between the model and image. Four clinically useful parameters, including inter-orbital diameter (IOD), bilateral orbital diameter (BOD), occipital frontal diameter (OFD), and bilateral parietal diameter (BPD), are measured based on the results of the eye detection and head segmentation. Ultrasound volumes from 11 subjects were used to validate the method's accuracy. Experimental results showed that the proposed method was able to overcome the aforementioned difficulties and achieve good agreement between automatic and manual measurements.
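The feature-based registration step, aligning a reference model to detected landmarks such as the eyes, can be illustrated with the standard Kabsch/Procrustes solution for a least-squares rigid transform. The landmark coordinates and pose below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # The diagonal correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Landmarks on a reference head model (arbitrary illustrative points).
model_pts = rng.standard_normal((6, 3))

# Ground-truth pose: rotation about z plus a translation.
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([2.0, -1.0, 0.5])
image_pts = model_pts @ R_true.T + t_true

R, t = rigid_register(model_pts, image_pts)
residual = np.linalg.norm(model_pts @ R.T + t - image_pts)
```

With exact correspondences the residual is numerically zero; in practice the recovered pose initializes the snake deformation that refines the boundary fit.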
Affiliation(s)
- Hsin-Chen Chen
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan ROC
|
34
|
Yang X, Fei B. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8316:83162O. [PMID: 24027622 DOI: 10.1117/12.912188] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our automatic and the manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
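A Gabor filter bank of the kind described can be sketched directly in NumPy: each filter is a Gaussian envelope times an oriented sinusoid, and per-pixel response magnitudes form the texture feature vector. The kernel parameters, orientations, and test image below are illustrative, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """Real 2D Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, thetas=(0, np.pi / 3, 2 * np.pi / 3),
                   sigma=2.0, freq=0.2, size=9):
    """Per-pixel feature vector: magnitude of each oriented filter response."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(size, sigma, freq, theta)
        # Convolve via FFT (circular boundary; fine for an illustration).
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(k, s=img.shape)))
        feats.append(np.abs(resp))
    return np.stack(feats, axis=-1)        # (H, W, n_orientations)

rng = np.random.default_rng(3)
img = rng.random((32, 32))
F = gabor_features(img)

# Orientation selectivity check: vertical stripes matching freq=0.2
# excite the theta=0 filter far more than the oblique one.
stripes = np.tile(np.cos(2 * np.pi * 0.2 * np.arange(32)), (32, 1))
Fs = gabor_features(stripes)
```

In the paper these features, computed on three orthogonal planes, become the input to patient-specific KSVMs rather than being used directly.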
Affiliation(s)
- Xiaofeng Yang
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
|
35
|
Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 2012. [PMID: 22708023 DOI: 10.1117/12.912182] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
Affiliation(s)
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329
|
36
|
Rotational-Slice-Based Prostate Segmentation Using Level Set with Shape Constraint for 3D End-Firing TRUS Guided Biopsy. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2012 2012; 15:537-44. [DOI: 10.1007/978-3-642-33415-3_66] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
37
|
Unsupervised 3D Prostate Segmentation Based on Diffusion-Weighted Imaging MRI Using Active Contour Models with a Shape Prior. JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING 2011. [DOI: 10.1155/2011/410912] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Accurate estimation of the prostate location and volume from in vivo images plays a crucial role in various clinical applications. Recently, magnetic resonance imaging (MRI) has been proposed as a promising modality to detect and monitor prostate-related diseases. In this paper, we propose an unsupervised algorithm to segment the prostate in 3D apparent diffusion coefficient (ADC) images derived from diffusion-weighted imaging (DWI) MRI without the need for a training dataset, whereas previous methods for this purpose require training datasets. We first apply a coarse segmentation to extract the shape information. Then, the shape prior is incorporated into the active contour model. Finally, morphological operations are applied to refine the segmentation results. We apply our method to an MR dataset obtained from three patients and provide segmentation results obtained by our method and an expert. Our experimental results show that the proposed method performs well.
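The three-step pipeline (coarse segmentation as a shape prior, active-contour evolution, morphological refinement) can be sketched with a simplified Chan-Vese-style region update. The synthetic image, the prior-weighting term, and the 3x3 morphology below are illustrative stand-ins for the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ADC-like image: dark lesion (low values) on a brighter background.
H = W = 64
yy, xx = np.mgrid[:H, :W]
truth = (xx - 32) ** 2 + (yy - 30) ** 2 < 14 ** 2
img = np.where(truth, 0.3, 0.7) + 0.05 * rng.standard_normal((H, W))

# Step 1: coarse segmentation by thresholding supplies the shape prior.
coarse = img < img.mean()
phi_prior = np.where(coarse, 1.0, -1.0)

# Step 2: active-contour-style evolution with a pull toward the prior.
phi = phi_prior.copy()
lam = 0.05                      # weight of the shape-prior term
for _ in range(100):
    inside = phi > 0
    c_in, c_out = img[inside].mean(), img[~inside].mean()
    # Region data term (Chan-Vese style) plus the shape-prior pull.
    force = (img - c_out) ** 2 - (img - c_in) ** 2 + lam * (phi_prior - phi)
    phi = np.clip(phi + 0.5 * force, -1.0, 1.0)
seg = phi > 0

# Step 3: morphological refinement (binary opening with a 3x3 cross).
def erode(m):
    p = np.pad(m, 1, constant_values=True)
    return p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
def dilate(m):
    p = np.pad(m, 1, constant_values=False)
    return p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
seg = dilate(erode(seg))

dice = 2 * (seg & truth).sum() / (seg.sum() + truth.sum())
```

This omits the curvature regularization a real active contour would carry; the prior term plays a similar smoothing role in this toy setting.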
|