1
Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound in Medicine & Biology 2025;51:189-209. [PMID: 39551652] [DOI: 10.1016/j.ultrasmedbio.2024.10.005]
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, and early diagnosis is crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To provide physicians with more accurate and efficient computer-assisted diagnosis and intervention, many image processing algorithms for TRUS have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades necessitates a comprehensive summary. This survey therefore provides a narrative review of the field, outlining the evolution of image processing methods for TRUS image analysis and highlighting their relevant contributions. Furthermore, it discusses current challenges and suggests future research directions to advance the field further.
Affiliation(s)
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China.
2
Bi H, Jiang Y, Tang H, Yang G, Shu H, Dillenseger JL. Fast and accurate segmentation method of active shape model with Rayleigh mixture model clustering for prostate ultrasound images. Computer Methods and Programs in Biomedicine 2020;184:105097. [PMID: 31634807] [DOI: 10.1016/j.cmpb.2019.105097]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer interventions, which require accurate prostate segmentation, are performed under ultrasound imaging guidance. However, prostate ultrasound segmentation faces two challenges: the low signal-to-noise ratio and inhomogeneity of the ultrasound image, and the non-standardized shape and size of the prostate. METHODS For prostate ultrasound image segmentation, this paper proposes an accurate and efficient active shape model with Rayleigh mixture model clustering (ASM-RMMC). First, a Rayleigh mixture model (RMM) is adopted to cluster image regions that present similar speckle distributions. These content-based clustered images are then used to initialize and guide the deformation of an ASM. RESULTS The performance of the proposed method is assessed on 30 prostate ultrasound images using four metrics: mean average distance (MAD), Dice similarity coefficient (DSC), false positive error (FPE) and false negative error (FNE). The proposed ASM-RMMC reaches high segmentation accuracy, with 95% ± 0.81% for DSC, 1.86 ± 0.02 pixels for MAD, 2.10% ± 0.36% for FPE and 2.78% ± 0.71% for FNE. Moreover, the average segmentation time is less than 8 s per prostate ultrasound image. CONCLUSIONS This paper presents a prostate ultrasound image segmentation method that achieves high accuracy with low computational complexity and meets clinical requirements.
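For illustration, a minimal sketch of the Rayleigh mixture clustering idea described above, fitted by expectation-maximization; this is not the authors' ASM-RMMC implementation, and the component count, initialization and toy data are assumptions.

```python
import numpy as np

def rayleigh_mixture_em(x, k=3, n_iter=100, seed=0):
    """Cluster positive intensities x with a k-component Rayleigh mixture via EM.

    Rayleigh pdf: f(x; s) = (x / s^2) * exp(-x^2 / (2 s^2)).
    Returns mixture weights, scale parameters, and per-sample responsibilities.
    """
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x, dtype=float).ravel(), 1e-6, None)
    w = np.full(k, 1.0 / k)                                   # mixture weights
    s2 = rng.uniform(0.5, 1.5, k) * np.mean(x ** 2) / 2.0     # squared scale parameters

    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] proportional to w_j * Rayleigh(x_i; s_j)
        log_pdf = np.log(x)[:, None] - np.log(s2)[None, :] - x[:, None] ** 2 / (2.0 * s2[None, :])
        log_r = np.log(w)[None, :] + log_pdf
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: closed-form updates for weights and Rayleigh scales
        nk = r.sum(axis=0)
        w = nk / x.size
        s2 = (r * x[:, None] ** 2).sum(axis=0) / (2.0 * nk)

    return w, np.sqrt(s2), r

# Toy usage: recover three speckle-like intensity populations
rng = np.random.default_rng(1)
samples = np.concatenate([rng.rayleigh(s, 2000) for s in (0.5, 1.5, 3.0)])
weights, scales, resp = rayleigh_mixture_em(samples, k=3)
print(weights.round(3), scales.round(3))
```

Pixels would then be assigned to the component with the highest responsibility, giving the content-based clusters that initialize and guide the ASM.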
Affiliation(s)
- Hui Bi
- Changzhou University, Changzhou, China
- Yibo Jiang
- Changzhou Institute of Technology, Changzhou, China
- Hui Tang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Guanyu Yang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Huazhong Shu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China.
- Jean-Louis Dillenseger
- Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China; Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
3
Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study. Eur J Radiol 2019;121:108716. [DOI: 10.1016/j.ejrad.2019.108716]
4
Jaouen V, Bert J, Mountris KA, Boussion N, Schick U, Pradier O, Valeri A, Visvikis D. Prostate Volume Segmentation in TRUS Using Hybrid Edge-Bhattacharyya Active Surfaces. IEEE Trans Biomed Eng 2019;66:920-933. [DOI: 10.1109/tbme.2018.2865428]
5
Mason SA, O’Shea TP, White IM, Lalondrelle S, Downey K, Baker M, Behrens CF, Bamber JC, Harris EJ. Towards ultrasound-guided adaptive radiotherapy for cervical cancer: Evaluation of Elekta's semiautomated uterine segmentation method on 3D ultrasound images. Med Phys 2017;44:3630-3638. [PMID: 28493295] [PMCID: PMC5575494] [DOI: 10.1002/mp.12325]
Abstract
PURPOSE 3D ultrasound (US) images of the uterus may be used to adapt radiotherapy (RT) for cervical cancer patients based on changes in daily anatomy, which requires accurate on-line segmentation of the uterus. The aim of this work was to assess the accuracy of Elekta's "Assisted Gyne Segmentation" (AGS) algorithm in semi-automatically segmenting the uterus on 3D transabdominal ultrasound images by comparison with manual contours. MATERIALS & METHODS Nine patients receiving RT for cervical cancer were imaged with the 3D Clarity® transabdominal probe at RT planning and 1 to 7 times during treatment. Image quality was rated from unusable (0) to excellent (3). Four experts segmented the uterus (defined as the uterine body and cervix) manually and using AGS on images with a rating > 0. Pairwise analysis of the manual contours was used to determine interobserver variability, and the accuracy of the AGS method was assessed by measuring its agreement with the manual contours in the same way. RESULTS 35 of the 44 acquired images (79.5%) received a rating > 0. For the manual contours, the median [interquartile range (IQR)] distance between centroids (DC) was 5.41 [5.0] mm, the Dice similarity coefficient (DSC) was 0.78 [0.11], the mean surface-to-surface distance (MSSD) was 3.20 [1.8] mm, and the uniform margin of 95% (UM95) was 4.04 [5.8] mm. There was no correlation between image quality and manual contour agreement. AGS failed to give a result in 19.3% of cases. For the remaining cases, the level of agreement between AGS and manual contours depended on image quality: there were no significant differences between the AGS and manual segmentations on images with a quality rating of 3, but the AGS algorithm had significantly worse agreement with manual contours on images with quality ratings of 1 and 2 compared with the corresponding interobserver manual variation. The overall median [IQR] DC, DSC, MSSD, and UM95 between AGS and manual contours were 5.48 [5.45] mm, 0.77 [0.14], 3.62 [2.7] mm, and 5.19 [8.1] mm, respectively. CONCLUSIONS The AGS tool represented the uterine shape of cervical cancer patients in agreement with manual contouring when image quality was excellent, but not when image quality was degraded by common artifacts such as shadowing and signal attenuation. The AGS tool should therefore be used with caution for adaptive RT purposes, as it is not reliable on 'good' or 'poor' quality images. The interobserver agreement between manual contours of the uterus drawn on 3D US was consistent with results of similar studies performed on CT and MRI.
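For illustration, two of the reported agreement metrics (DSC and the mean surface-to-surface distance) computed for a pair of binary masks; a minimal sketch using SciPy distance transforms, not the evaluation code used in the study, and the voxel spacing is an assumption.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface-to-surface distance (in mm) between two masks."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)            # boundary voxels of the mask
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask
    dt_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dt_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return 0.5 * (dt_to_b[sa].mean() + dt_to_a[sb].mean())

# Toy usage on two overlapping spheres standing in for uterus contours
zz, yy, xx = np.mgrid[:64, :64, :64]
m1 = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
m2 = (zz - 32) ** 2 + (yy - 34) ** 2 + (xx - 30) ** 2 < 20 ** 2
print(dice(m1, m2), mean_surface_distance(m1, m2, spacing=(1.0, 1.0, 1.0)))
```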
Affiliation(s)
- Sarah A. Mason
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Tuathan P. O’Shea
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Ingrid M. White
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Susan Lalondrelle
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Kate Downey
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Mariwan Baker
- Department of Oncology, Herlev Hospital, University of Copenhagen, Herlev, Denmark
- Claus F. Behrens
- Department of Oncology, Herlev Hospital, University of Copenhagen, Herlev, Denmark
- Jeffrey C. Bamber
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Emma J. Harris
- Joint Department of Physics at the Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
6
Tian Z, Liu L, Zhang Z, Xue J, Fei B. A supervoxel-based segmentation method for prostate MR images. Med Phys 2017;44:558-569. [PMID: 27991675] [DOI: 10.1002/mp.12048]
Abstract
PURPOSE Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. METHODS A supervoxel is a set of voxels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is cast as assigning a binary label, prostate or background, to each supervoxel. A supervoxel-based energy function with data and smoothness terms is used to model the labeling. The data term estimates the likelihood that a supervoxel belongs to the prostate by using a supervoxel-based shape feature, and the geometric relationship between two neighboring supervoxels is used to build the smoothness term. A 3D graph cut is used to minimize the energy function and obtain the supervoxel labels, which yield the prostate segmentation. A 3D active contour model, initialized with the graph cut output, is then used to obtain a smooth surface. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset. RESULTS The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for the 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields satisfactory results for prostate MR images. CONCLUSION The proposed supervoxel-based method can accurately segment prostate MR images and has a variety of applications in prostate cancer diagnosis and therapy.
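In generic form, a supervoxel labeling energy of the kind described above (a data term plus a pairwise smoothness term, minimized by a 3D graph cut) can be written as

E(\ell) \;=\; \sum_{i \in \mathcal{S}} D_i(\ell_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} V_{ij}\,[\ell_i \neq \ell_j], \qquad \ell_i \in \{\text{prostate},\ \text{background}\},

where \mathcal{S} is the set of supervoxels, D_i is the data term (here derived from the supervoxel shape feature), \mathcal{N} is the set of neighboring supervoxel pairs, V_{ij} encodes their geometric relationship, and \lambda balances the two terms; the exact terms in the paper differ in detail, but the global minimizer of such a binary labeling is found by an s–t min-cut.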
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA 30329, USA; Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou 510060, P.R. China
- Zhenfeng Zhang
- Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou 510060, P.R. China
- Jianru Xue
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, No. 28 Xianning West Road, Xi'an 710049, P.R. China
- Baowei Fei
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA 30329, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA 30329, USA
7
Tian Z, Liu L, Zhang Z, Fei B. Superpixel-Based Segmentation for 3D Prostate MR Images. IEEE Transactions on Medical Imaging 2016;35:791-801. [PMID: 26540678] [PMCID: PMC4831070] [DOI: 10.1109/tmi.2015.2496296]
Abstract
This paper proposes a method for segmenting the prostate on magnetic resonance (MR) images. A superpixel-based 3D graph cut algorithm is proposed to obtain the prostate surface. Instead of pixels, superpixels are used as the basic processing units to construct a 3D superpixel-based graph. The superpixels are labeled as prostate or background by minimizing an energy function with graph cut on this 3D superpixel-based graph. To construct the energy function, we propose a superpixel-based shape data term, an appearance data term, and two superpixel-based smoothness terms; these superpixel-based terms improve the effectiveness and robustness of the prostate segmentation. The graph cut result is used to initialize a 3D active contour model, which overcomes the drawbacks of the graph cut, and the active contour result is in turn used to update the shape and appearance models of the graph cut. Iterating the 3D graph cut and the 3D active contour model allows the method to escape local minima and obtain a smooth prostate surface. On our 43 MR volumes, the proposed method yields a mean Dice ratio of 89.3 ± 1.9%. On the PROMISE12 test dataset, our method ranked second, with a mean Dice ratio of 87.0 ± 3.2%. The experimental results show that the proposed method outperforms several state-of-the-art prostate MRI segmentation methods.
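As a small, hedged sketch of how the superpixel/supervoxel decomposition that such methods build on can be generated, the SLIC implementation in scikit-image (assumed version 0.19 or later for the channel_axis argument) is one common choice; the parameter values and placeholder volume below are illustrative only, not the paper's settings.

```python
import numpy as np
from skimage.segmentation import slic

# Placeholder 3D grayscale MR volume scaled to [0, 1]; in practice load real data (e.g. with nibabel).
volume = np.random.rand(32, 128, 128).astype(np.float32)

# Roughly 1000 supervoxels; compactness trades intensity similarity against shape regularity.
labels = slic(volume, n_segments=1000, compactness=0.1,
              channel_axis=None, start_label=0)

print(labels.shape, int(labels.max()) + 1)   # label volume and number of supervoxels produced
```

Each label region would then become one node of the superpixel/supervoxel graph on which the energy is minimized.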
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329 USA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA. Website: www.feilab.org
8
Segmentation of uterine fibroid ultrasound images using a dynamic statistical shape model in HIFU therapy. Comput Med Imaging Graph 2015;46 Pt 3:302-14. [PMID: 26459767] [DOI: 10.1016/j.compmedimag.2015.07.004]
Abstract
Segmenting lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity and blurry boundaries in HIFU US images, as well as the deformation of uterine fibroids caused by the patient's breathing or external force. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. To accurately learn the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of the stochastic differential equation and the Fokker-Planck equation are incorporated into the SSM (referred to as SF-SSM). A new observation model of lesion areas (referred to as RPFM) in HIFU US images is then developed to describe the features of the lesion areas and provide a likelihood probability for the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into an active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results on clinical HIFU US images validate the high accuracy and robustness of our approach, even when image quality is unsatisfactory, indicating its potential for practical application in HIFU therapy.
9
Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE Transactions on Medical Imaging 2015;34:1321-1335. [PMID: 25576565] [DOI: 10.1109/tmi.2015.2388699]
Abstract
Accurate segmentation is usually crucial in transrectal ultrasound (TRUS) image based prostate diagnosis; however, it is always hampered by heavy speckle. Contrary to the traditional view that speckle is adverse to segmentation, we exploit intrinsic properties induced by speckle to facilitate the task, based on the observation that the sizes and orientations of speckles provide salient cues for determining the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, a rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every individual feature vector is sparsely represented and representation residuals are obtained. The residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. The segmentation is then fine-tuned by a novel level set model that integrates (1) the prostate shape prior, (2) the dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method is validated on two 2-D image datasets obtained from two different sonographic imaging systems, with mean absolute distances on the mid-gland images of only 1.06 ± 0.53 mm and 1.25 ± 0.77 mm, respectively. The method is also extended to segment apex and base images, producing results competitive with the state of the art.
10
Xu M, Zhang D, Yang Y, Liu Y, Yuan Z, Qin Q. A Split-and-Merge-Based Uterine Fibroid Ultrasound Image Segmentation Method in HIFU Therapy. PLoS One 2015;10:e0125738. [PMID: 25973906] [PMCID: PMC4431844] [DOI: 10.1371/journal.pone.0125738]
Abstract
High-intensity focused ultrasound (HIFU) therapy has been used widely and successfully to treat uterine fibroids. Uterine fibroid segmentation plays an important role in positioning the target region for HIFU therapy. At present it is performed manually by physicians, which reduces the efficiency of therapy; computer-aided segmentation of uterine fibroids can therefore improve therapy efficiency. Recently, most computer-aided ultrasound segmentation methods have been based on the framework of contour evolution, such as snakes and level sets. These methods can achieve good performance, but they need an initial contour that influences the segmentation results and is difficult to obtain automatically, so in many segmentation methods the initial contour is provided manually. A split-and-merge-based uterine fibroid segmentation method, which needs no initial contour and thus requires less manual intervention, is proposed in this paper. The method first splits the image into many small homogeneous regions called superpixels, and a new feature representation based on texture histograms is employed to characterize each superpixel. The superpixels are then merged according to their similarities, which are measured by combining their Quadratic-Chi texture histogram distances with their spatial adjacency. Multi-way normalized cut (Ncut) is used as the merging criterion, and an adaptive scheme is incorporated to further decrease manual intervention. The method was implemented in Matlab on a personal computer (PC) with an Intel Pentium Dual-Core E5700 CPU and validated on forty-two ultrasound images acquired during HIFU therapy. The average running time is 9.54 s. Statistical results show that the similarity index (SI) reaches a value as high as 87.58%, and the normalized Hausdorff distance (normHD) is 5.18% on average. These results demonstrate that the proposed method is appropriate for segmentation of uterine fibroids in HIFU pre-treatment imaging and planning.
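For illustration, superpixel texture histograms of the kind described above can be compared with the classic chi-squared histogram distance; the paper's Quadratic-Chi distance generalizes this measure with a bin-similarity matrix, so the sketch below is a simplified stand-in rather than the authors' exact measure.

```python
import numpy as np

def chi2_distance(p, q, eps=1e-12):
    """Chi-squared distance between two histograms (bins compared independently)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Toy usage: distance between two 16-bin texture histograms
rng = np.random.default_rng(0)
h1, _ = np.histogram(rng.random(500), bins=16, range=(0, 1))
h2, _ = np.histogram(rng.random(500) ** 2, bins=16, range=(0, 1))
print(chi2_distance(h1, h2))
```

In the split-and-merge framework, such a distance (combined with spatial adjacency) would feed the multi-way Ncut merging criterion.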
Affiliation(s)
- Menglong Xu
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
- Dong Zhang
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
- Yan Yang
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
- Yu Liu
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
- Zhiyong Yuan
- School of Computer, Wuhan University, Wuhan, Hubei, China
- Qianqing Qin
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, Hubei, China
11
Tian Z, Liu L, Fei B. A fully automatic multi-atlas based segmentation method for prostate MR images. Proceedings of SPIE--The International Society for Optical Engineering 2015;9413:941340. [PMID: 26798187] [PMCID: PMC4717836] [DOI: 10.1117/12.2082229]
Abstract
Most multi-atlas segmentation methods focus on the registration between the full-size volumes of the dataset. Although the transformations obtained from these registrations may be accurate for the global field of view of the images, they may not be accurate for the local prostate region, because different magnetic resonance (MR) images have different fields of view and may have large anatomical variability around the prostate. To overcome this limitation, we propose a two-stage prostate segmentation method based on a fully automatic multi-atlas framework, which includes a detection stage (locating the prostate) and a segmentation stage (extracting the prostate). The purpose of the first stage is to find a cuboid that contains the whole prostate with as small a volume as possible. In this paper, the cuboid containing the prostate is detected by registering atlas edge volumes to the target volume, with an edge detection algorithm applied to every slice in the volumes. In the second stage, the method focuses on registration in the vicinity of the prostate, which improves the accuracy of the prostate segmentation. We evaluated the proposed method on 12 patient MR volumes by performing a leave-one-out study. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) are used to quantify the difference between our method and the manual ground truth. The proposed method yielded a DSC of 83.4% ± 4.3% and an HD of 9.3 ± 2.6 mm. The fully automated segmentation method can provide a useful tool in many prostate imaging applications.
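As a rough illustration of the label-fusion step that multi-atlas pipelines such as this rely on, the sketch below performs a simple majority vote over atlas masks assumed to be already registered to the target; the paper's own registration and fusion details are not reproduced here.

```python
import numpy as np

def majority_vote_fusion(atlas_masks):
    """Fuse binary prostate masks from N registered atlases by majority vote.

    atlas_masks: array of shape (N, D, H, W) with values in {0, 1}, each mask
    already warped into the target image space by the atlas-to-target registration.
    """
    atlas_masks = np.asarray(atlas_masks)
    votes = atlas_masks.mean(axis=0)           # fraction of atlases voting "prostate"
    return (votes >= 0.5).astype(np.uint8)     # ties counted as prostate here

# Toy usage with five synthetic "registered" atlas masks
rng = np.random.default_rng(0)
masks = (rng.random((5, 16, 64, 64)) > 0.6).astype(np.uint8)
fused = majority_vote_fusion(masks)
print(fused.shape, float(fused.mean()))
```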
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- LiZhi Liu
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology
12
Tian Z, Liu L, Fei B. A supervoxel-based segmentation method for prostate MR images. Proceedings of SPIE--The International Society for Optical Engineering 2015;9413:941318. [PMID: 26848206] [PMCID: PMC4736748] [DOI: 10.1117/12.2082255]
Abstract
Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is considered as assigning a label to each supervoxel, and an energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- LiZhi Liu
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology
13
Zhang D, Xu M, Quan L, Yang Y, Qin Q, Zhu W. Segmentation of tumor ultrasound image in HIFU therapy based on texture and boundary encoding. Phys Med Biol 2015;60:1807-30. [PMID: 25658334] [DOI: 10.1088/0031-9155/60/5/1807]
Abstract
It is crucial in high-intensity focused ultrasound (HIFU) therapy to detect the tumor precisely with minimal manual intervention so as to enhance therapy efficiency. Ultrasound image segmentation is a difficult task due to signal attenuation, speckle and shadows. This paper presents an unsupervised approach based on texture and boundary encoding customized for ultrasound image segmentation in HIFU therapy. The approach over-segments the ultrasound image into small regions, which are then merged using the principle of minimum description length (MDL): small regions belonging to the same tumor are clustered because they share similar texture features, and the merging is accomplished by finding the shortest coding length when encoding the textures and boundaries of these regions during clustering. The tumor region is finally selected from the merged regions by a proposed algorithm without manual interaction. The performance of the method is tested on 50 uterine fibroid ultrasound images from HIFU guiding transducers, and the segmentations are compared with manual delineations to verify the method's feasibility. The quantitative evaluation with HIFU images shows that the mean true positive rate of the approach is 93.53%, the mean false positive rate is 4.06%, the mean similarity is 89.92%, the mean normalized Hausdorff distance is 3.62% and the mean normalized maximum average distance is 0.57%. The experiments validate that the proposed method can achieve favorable segmentation without manual initialization and can effectively handle the poor quality of ultrasound guidance images in HIFU therapy, indicating that the approach is applicable in HIFU therapy.
Affiliation(s)
- Dong Zhang
- School of Physics and Technology, Wuhan University, Wuhan, 430072, People's Republic of China
14
Qiu W, Yuan J, Ukwatta E, Fenster A. Rotationally resliced 3D prostate TRUS segmentation using convex optimization with shape priors. Med Phys 2015;42:877-91. [DOI: 10.1118/1.4906129]
15
Abstract
An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing and missing edges, which make it challenging to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with a new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% ± 2.4%, a MAD of 1.4 ± 0.6 mm, a MAXD of 5.2 ± 3.2 mm, and a VD of 7.5% ± 6.2% in about 1 minute, demonstrating advantages in both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows the good reliability of the proposed approach.
16
Furuhata T, Song I, Zhang H, Rabin Y, Shimada K. Interactive Prostate Shape Reconstruction from 3D TRUS Images. Journal of Computational Design and Engineering 2014;1:272-288. [PMID: 29276702] [PMCID: PMC5739344] [DOI: 10.7315/jcde.2014.027]
Abstract
This paper presents a two-step, semi-automated method for reconstructing a three-dimensional (3D) shape of the prostate from a 3D transrectal ultrasound (TRUS) image. While the method has been developed for prostate ultrasound imaging, it is potentially applicable to other organs of the body and other imaging modalities. The proposed method takes a 3D TRUS image as input and generates a watertight 3D surface model of the prostate. In the first step, the system lets the user visualize and navigate through the input volumetric image by displaying cross-sectional views oriented in arbitrary directions, and the user draws partial or full contours on selected cross-sectional views. In the second step, the method automatically generates a watertight 3D surface of the prostate by fitting a deformable spherical template to the set of user-specified contours. Since the method allows the user to select the best cross-sectional directions and to draw only clearly recognizable partial or full contours, the user can avoid time-consuming and inaccurate guesswork about where prostate contours are located. By avoiding the noisy, incomprehensible portions of the TRUS image, the proposed method yields more accurate prostate shapes than conventional methods that demand complete cross-sectional contours selected manually or automatically with an image processing tool. Our experiments confirmed that a 3D watertight surface of the prostate can be generated within five minutes, even from a volumetric image with a high level of speckle and shadow noise.
Affiliation(s)
- Kenji Shimada
- Department of Mechanical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (corresponding author; Tel: 412.268.3614, Fax: 412.268.2908)
17
Qiu W, Yuan J, Ukwatta E, Sun Y, Rajchl M, Fenster A. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images. IEEE Transactions on Medical Imaging 2014;33:947-960. [PMID: 24710163] [DOI: 10.1109/tmi.2014.2300694]
Abstract
We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.
18
Qiu W, Yuan J, Ukwatta E, Sun Y, Rajchl M, Fenster A. Dual optimization based prostate zonal segmentation in 3D MR images. Med Image Anal 2014;18:660-73. [PMID: 24721776] [DOI: 10.1016/j.media.2014.02.009]
Abstract
Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions: the central gland (CG) and peripheral zone (PZ), from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from only a single 3D T2-weighted (T2w) MR image, which makes use of the prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The formulated challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual optimization formulation to the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model derives an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images with a body-coil and 25 images with an endo-rectal coil. Experimental results demonstrate that the proposed method is capable of efficiently and accurately extracting both the prostate zones: CG and PZ, and the whole prostate gland from the input 3D prostate MR images, with a mean Dice similarity coefficient (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images; 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endo-rectal coil MR images. In addition, the experiments of intra- and inter-observer variability introduced by user initialization indicate a good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient-of-variation (CV) of DSC.
Affiliation(s)
- Wu Qiu
- Robarts Research Institute, University of Western Ontario, London, ON, Canada.
- Jing Yuan
- Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Eranga Ukwatta
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Yue Sun
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Martin Rajchl
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Aaron Fenster
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada; Medical Biophysics, University of Western Ontario, London, ON, Canada
19
Qiu W, Yuan J, Ukwatta E, Tessier D, Fenster A. Three-dimensional prostate segmentation using level set with shape constraint based on rotational slices for 3D end-firing TRUS guided biopsy. Med Phys 2014;40:072903. [PMID: 23822454] [DOI: 10.1118/1.4810968]
Abstract
PURPOSE Prostate segmentation is an important step in the planning and treatment of 3D end-firing transrectal ultrasound (TRUS) guided prostate biopsy. To improve the accuracy and efficiency of prostate segmentation in 3D TRUS images, an improved level set method is incorporated into a rotational-slice-based 3D prostate segmentation to decrease the accumulated segmentation errors produced by slice-by-slice segmentation. METHODS A 3D image is first resliced into 2D slices in a rotational manner in both the clockwise and counterclockwise directions; all slices intersect approximately along the rotational scanning axis and have equal angular spacing. Six to eight boundary points are selected to initialize a level set function that extracts the prostate contour in the first slice. The segmented contour is then propagated to the adjacent slice, where it serves as the initial contour for segmentation, and this process is repeated until all slices are segmented. A modified distance-regularized level set method is used to segment the prostate in all resliced 2D slices. In addition, shape-constraint and local-region-based energies are imposed to discourage the evolving level set function from leaking in regions with weak edges or without edges, and an anchor-point-based energy encourages the level set function to pass through the initially selected boundary points. The algorithm's performance was evaluated using distance- and volume-based metrics (sensitivity (Se), Dice similarity coefficient (DSC), mean absolute surface distance (MAD), maximum absolute surface distance (MAXD), and volume difference) by comparison with expert delineations. RESULTS The validation results using thirty 3D patient images showed that the authors' method can obtain a DSC of 93.1% ± 1.6%, a sensitivity of 93.0% ± 2.0%, a MAD of 1.18 ± 0.36 mm, a MAXD of 3.44 ± 0.8 mm, and a volume difference of 2.6 ± 1.9 cm³ for the entire prostate. A reproducibility experiment demonstrated that the proposed method yields low intraobserver and interobserver variability in terms of DSC. The mean segmentation time for all patient 3D TRUS images was 55 ± 3.5 s, in addition to 30 ± 5 s for initialization. CONCLUSIONS To address the challenges of slice-based 3D prostate segmentation, a level set based method is proposed, developed especially for a 3D end-firing TRUS guided prostate biopsy system. The extensive experimental results demonstrate that the proposed method is accurate, robust, and computationally efficient.
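For illustration, the rotational reslicing geometry that such methods operate on can be sketched as below: 2D slices are sampled through a rotation axis assumed parallel to the z-axis of the volume and passing through the in-plane centre; the axis location, angular step and interpolation order are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotational_slice(volume, theta_deg, axis_yx=None):
    """Extract one 2D slice through a vertical rotation axis of a 3D (z, y, x) volume.

    Returns a (Z, R) image whose columns sample the in-plane direction
    (cos(theta), sin(theta)) through the axis point axis_yx.
    """
    nz, ny, nx = volume.shape
    cy, cx = axis_yx if axis_yx is not None else ((ny - 1) / 2.0, (nx - 1) / 2.0)
    theta = np.deg2rad(theta_deg)
    r_max = min(cy, cx, ny - 1 - cy, nx - 1 - cx)
    r = np.linspace(-r_max, r_max, int(2 * r_max) + 1)

    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")
    yy = cy + rr * np.sin(theta)
    xx = cx + rr * np.cos(theta)
    return map_coordinates(volume, [zz, yy, xx], order=1)   # trilinear sampling

# Reslice a toy volume every 6 degrees, mimicking rotational slice-by-slice segmentation
vol = np.random.rand(32, 96, 96).astype(np.float32)
slices = [rotational_slice(vol, ang) for ang in range(0, 180, 6)]
print(len(slices), slices[0].shape)
```

Segmenting each such slice and propagating the contour to the next angle is the slice-by-slice scheme into which the level set refinement is built.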
Affiliation(s)
- Wu Qiu
- Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8, Canada.
20
21
Habes M, Schiller T, Rosenberg C, Burchardt M, Hoffmann W. Automated prostate segmentation in whole-body MRI scans for epidemiological studies. Phys Med Biol 2013;58:5899-915. [PMID: 23920310] [DOI: 10.1088/0031-9155/58/17/5899]
Abstract
The whole prostatic volume (PV) is an important indicator of benign prostatic hyperplasia. Correlating the PV with other clinical parameters in a population-based prospective cohort study (SHIP-2) requires valid prostate segmentation in a large number of whole-body MRI scans. The axial proton-density fast spin echo fat-saturated sequence is used for prostate screening in SHIP-2. Our automated segmentation method is based on support vector machines (SVM). We used three-dimensional neighborhood information to build classification vectors from automatically generated features and randomly selected 16 MR examinations for validation. The Hausdorff distance reached mean values of 5.048 ± 2.413 and 5.613 ± 2.897 compared with manual segmentation by observers A and B, respectively. Volume measurements from the SVM-based segmentation show a strong correlation with manual segmentation by observers A and B, with Spearman's rank correlation coefficients (ρ) of 0.936 and 0.859, respectively. Our automated SVM-based methodology can segment the prostate in whole-body MRI scans with good segmentation quality and has considerable potential for integration in epidemiological studies.
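For illustration, the voxel-classification idea described above (an SVM applied to features built from each voxel's 3D neighborhood) is sketched below with scikit-learn on a synthetic volume; the feature construction, kernel and training-sample selection here are assumptions and are far simpler than the study's automatically generated features.

```python
import numpy as np
from sklearn.svm import SVC

def neighborhood_features(volume, idx, radius=1):
    """Flatten the (2r+1)^3 intensity neighborhood around each (z, y, x) index."""
    feats = []
    for z, y, x in idx:
        patch = volume[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
        feats.append(patch.ravel())
    return np.asarray(feats)

# Synthetic volume with a brighter "prostate-like" blob in the centre
rng = np.random.default_rng(0)
vol = rng.normal(0.3, 0.1, (40, 40, 40))
vol[15:25, 15:25, 15:25] += 0.5

# Labelled training voxels: 1 inside the blob, 0 outside
inside = [(z, y, x) for z in range(16, 24, 2) for y in range(16, 24, 2) for x in range(16, 24, 2)]
outside = [(z, y, x) for z in range(2, 12, 2) for y in range(2, 12, 2) for x in range(2, 12, 2)]
X = neighborhood_features(vol, inside + outside)
y = np.array([1] * len(inside) + [0] * len(outside))

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Classify two unseen voxels; the first lies inside the blob, the second outside
print(clf.predict(neighborhood_features(vol, [(20, 20, 20), (5, 30, 30)])))
```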
Affiliation(s)
- Mohamad Habes
- Institute for Community Medicine, Ernst Moritz Arndt University of Greifswald, Greifswald, Germany.
22
Mari JM, Bouchoux G, Dillenseger JL, Gimonet S, Birer A, Garnier C, Brasset L, Ke W, Guey JL, Fleury G, Chapelon JY, Blanc E. Study of a dual-mode array integrated in a multi-element transducer for imaging and therapy of prostate cancer. Ing Rech Biomed 2013. [DOI: 10.1016/j.irbm.2013.01.007]
23
Jointly Segmenting Prostate Zones in 3D MRIs by Globally Optimized Coupled Level-Sets. 2013. [DOI: 10.1007/978-3-642-40395-8_2]
24
Qiu W, Yuan J, Kishimoto J, Ukwatta E, Fenster A. Lateral Ventricle Segmentation of 3D Pre-term Neonates US Using Convex Optimization. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013. 2013;16:559-66. [DOI: 10.1007/978-3-642-40760-4_70]
25
Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Computer Methods and Programs in Biomedicine 2012;108:262-287. [PMID: 22739209] [DOI: 10.1016/j.cmpb.2012.04.006]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts such as shadow pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images, whereas in magnetic resonance (MR) images the superior soft-tissue contrast highlights the large variability in shape, size and texture inside the prostate. In contrast, the poor soft-tissue contrast between the prostate and surrounding tissues in computed tomography (CT) images poses a challenge to accurate prostate segmentation. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that allows the algorithms to be grouped first and their main advantages and drawbacks then pointed out. We provide a comprehensive description of the existing methods in all three modalities (TRUS, MR and CT), highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided, together with a quantitative comparison of the results as reported in the literature.
Affiliation(s)
- Soumya Ghose
- Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain.
26
Rotational-Slice-Based Prostate Segmentation Using Level Set with Shape Constraint for 3D End-Firing TRUS Guided Biopsy. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012. 2012;15:537-44. [DOI: 10.1007/978-3-642-33415-3_66]