1. Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10179-4]
2. Albayrak NB, Akgul YS. Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Applied Sciences 2022; 12:1390. [DOI: 10.3390/app12031390]
Abstract
Estimation of the prostate volume with ultrasound offers many advantages such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal Ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. As experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, we proposed an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, our system detects four diameter endpoints from the transverse and two diameter endpoints from the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method to address characteristic problems of AUS images. We formed a novel prostate AUS data set from 305 patients with both transverse and sagittal planes. The data set includes MRI images for 75 of these patients. At least one expert manually marked all the data. Extensive experiments performed on this data set showed that the proposed system's estimates fell within the range of the experts' volume estimations, indicating that the system can be used in clinical practice.
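The standard ellipsoid formula whose diameter parameters the system estimates is simple to state directly; a minimal sketch in Python (function and variable names are illustrative, not taken from the paper):

```python
import math

def ellipsoid_volume(length, width, height):
    """Prostate volume by the standard (prolate) ellipsoid formula:
    V = (pi / 6) * L * W * H, where L, W, H are the three measured
    diameters (e.g. in mm; the result is then in mm^3)."""
    return math.pi / 6.0 * length * width * height

# Example: a 50 x 40 x 35 mm gland; divide by 1000 to convert mm^3 to mL.
volume_ml = ellipsoid_volume(50, 40, 35) / 1000.0
```

With equal diameters the formula reduces to the volume of a sphere, which is a quick sanity check on the constant pi/6.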
3. Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Trans Med Imaging 2019; 38:2768-2778. [PMID: 31021793] [DOI: 10.1109/tmi.2019.2913184]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module uses the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing the non-prostate noise at shallow layers of the CNN and enriching the prostate details in the features at deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
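At its simplest, the attention-weighted fusion of multi-level features described above can be sketched as a softmax-weighted sum of per-layer feature vectors (a toy 1D illustration, not the authors' DAF3D architecture; in the real network the relevance scores are learned, whereas here they are given):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_fusion(layer_features, layer_scores):
    """Combine multi-level features into one refined feature vector:
    each layer's feature vector is weighted by an attention weight
    derived from its relevance score, then the layers are summed."""
    weights = softmax(layer_scores)
    dim = len(layer_features[0])
    return [sum(w * feat[i] for w, feat in zip(weights, layer_features))
            for i in range(dim)]
```

Equal scores reduce this to plain averaging; raising one layer's score shifts the fused feature toward that layer, which is the "selective leverage" idea in miniature.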
4. Jaouen V, Bert J, Mountris KA, Boussion N, Schick U, Pradier O, Valeri A, Visvikis D. Prostate Volume Segmentation in TRUS Using Hybrid Edge-Bhattacharyya Active Surfaces. IEEE Trans Biomed Eng 2019; 66:920-933. [DOI: 10.1109/tbme.2018.2865428]
5. Mason SA, O’Shea TP, White IM, Lalondrelle S, Downey K, Baker M, Behrens CF, Bamber JC, Harris EJ. Towards ultrasound-guided adaptive radiotherapy for cervical cancer: Evaluation of Elekta's semiautomated uterine segmentation method on 3D ultrasound images. Med Phys 2017; 44:3630-3638. [PMID: 28493295] [PMCID: PMC5575494] [DOI: 10.1002/mp.12325]
Abstract
PURPOSE 3D ultrasound (US) images of the uterus may be used to adapt radiotherapy (RT) for cervical cancer patients based on changes in daily anatomy. This requires accurate on-line segmentation of the uterus. The aim of this work was to assess the accuracy of Elekta's "Assisted Gyne Segmentation" (AGS) algorithm in semi-automatically segmenting the uterus on 3D transabdominal ultrasound images by comparison with manual contours. MATERIALS & METHODS Nine patients receiving RT for cervical cancer were imaged with the 3D Clarity® transabdominal probe at RT planning, and 1 to 7 times during treatment. Image quality was rated from unusable (0) to excellent (3). Four experts segmented the uterus (defined as the uterine body and cervix) manually and using AGS on images with a ranking > 0. Pairwise analysis between manual contours was used to determine interobserver variability. The accuracy of the AGS method was assessed by measuring its agreement with manual contours via pairwise analysis. RESULTS 35/44 images acquired (79.5%) received a ranking > 0. For the manual contour variation, the median [interquartile range (IQR)] distance between centroids (DC) was 5.41 [5.0] mm, the Dice similarity coefficient (DSC) was 0.78 [0.11], the mean surface-to-surface distance (MSSD) was 3.20 [1.8] mm, and the uniform margin of 95% (UM95) was 4.04 [5.8] mm. There was no correlation between image quality and manual contour agreement. AGS failed to give a result in 19.3% of cases. For the remaining cases, the level of agreement between AGS contours and manual contours depended on image quality. There were no significant differences between the AGS segmentations and the manual segmentations on the images that received a quality rating of 3. However, the AGS algorithm had significantly worse agreement with manual contours on images with quality ratings of 1 and 2 compared with the corresponding interobserver manual variation. The overall median [IQR] DC, DSC, MSSD, and UM95 between AGS and manual contours were 5.48 [5.45] mm, 0.77 [0.14], 3.62 [2.7] mm, and 5.19 [8.1] mm, respectively. CONCLUSIONS The AGS tool was able to represent the uterine shape of cervical cancer patients in agreement with manual contouring in cases where the image quality was excellent, but not in cases where image quality was degraded by common artifacts such as shadowing and signal attenuation. The AGS tool should be used with caution for adaptive RT purposes, as it is not reliable in accurately segmenting the uterus on 'good' or 'poor' quality images. The interobserver agreement between manual contours of the uterus drawn on 3D US was consistent with results of similar studies performed on CT and MRI images.
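The agreement metrics quoted above (DSC and DC) are standard and easy to reproduce on binary masks represented as sets of voxel coordinates; a small sketch (generic code, not Elekta's implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    sets of voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def centroid(mask):
    """Mean position of the voxels in a mask (any dimensionality)."""
    n = len(mask)
    dim = len(next(iter(mask)))
    return tuple(sum(v[i] for v in mask) / n for i in range(dim))

def distance_between_centroids(mask_a, mask_b):
    """Euclidean distance between the centroids of two masks (the DC metric)."""
    ca, cb = centroid(mask_a), centroid(mask_b)
    return sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why values around 0.78 indicate substantial but imperfect interobserver agreement.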
Affiliation(s)
- Sarah A. Mason, Tuathan P. O’Shea, Ingrid M. White, Susan Lalondrelle, Kate Downey, Jeffrey C. Bamber, Emma J. Harris: Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton and London, UK
- Mariwan Baker, Claus F. Behrens: Department of Oncology, Herlev Hospital, University of Copenhagen, Herlev, Denmark
6. Ma L, Guo R, Tian Z, Fei B. A random walk-based segmentation framework for 3D ultrasound images of the prostate. Med Phys 2017; 44:5128-5142. [PMID: 28582803] [DOI: 10.1002/mp.12396]
Abstract
PURPOSE Accurate segmentation of the prostate on ultrasound images has many applications in prostate cancer diagnosis and therapy. Transrectal ultrasound (TRUS) has been routinely used to guide prostate biopsy. This manuscript proposes a semiautomatic segmentation method for the prostate on three-dimensional (3D) TRUS images. METHODS The proposed segmentation method uses a context-classification-based random walk algorithm. Because context information reflects patient-specific characteristics and prostate changes in the adjacent slices, while classification information reflects population-based prior knowledge, we combine the two in order to apply both population and patient-specific knowledge and thereby determine the seed points for the random walk algorithm more accurately. The method is initialized with the user drawing the prostate and non-prostate circles on the mid-gland slice and then automatically segments the prostate on the other slices. To achieve reliable classification, we use a new adaptive k-means algorithm to cluster the training data and train multiple decision-tree classifiers. According to the patient-specific characteristics, the most suitable classifier is selected and combined with the context information in order to locate the seed points. By providing accurate locations of the seed points, the random walk algorithm improves segmentation performance. RESULTS We evaluated the proposed segmentation approach on a set of 3D TRUS volumes of prostate patients. The experimental results show that our method achieved a Dice similarity coefficient of 91.0% ± 1.6% as compared to manual segmentation by a clinically experienced radiologist. CONCLUSIONS The random walk-based segmentation framework, which combines patient-specific characteristics and population information, is effective for segmenting the prostate on ultrasound images. The segmentation method can have various applications in ultrasound-guided prostate procedures.
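The clustering step of the training data can be illustrated with a plain Lloyd's k-means loop (a generic sketch; the paper's adaptive variant, which selects the number of clusters itself, is not reproduced here):

```python
def kmeans(points, centers, iterations=10):
    """Basic Lloyd's k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                dim = len(cl[0])
                new_centers.append(tuple(sum(p[d] for p in cl) / len(cl)
                                         for d in range(dim)))
            else:
                # Keep an empty cluster's center where it was.
                new_centers.append(centers[i])
        centers = new_centers
    return centers
```

Each resulting cluster would then, in the paper's framework, get its own decision-tree classifier trained on the points assigned to it.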
Affiliation(s)
- Ling Ma, Rongrong Guo, Zhiqiang Tian: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University School of Medicine; The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology; Winship Cancer Institute of Emory University; Department of Mathematics and Computer Science, Emory College of Emory University, Atlanta, GA 30329, USA
7. Nouranian S, Ramezani M, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. Learning-Based Multi-Label Segmentation of Transrectal Ultrasound Images for Prostate Brachytherapy. IEEE Trans Med Imaging 2016; 35:921-932. [PMID: 26599701] [DOI: 10.1109/tmi.2015.2502540]
Abstract
Low-dose-rate prostate brachytherapy is delivered by implanting small radioactive seeds in, and sometimes adjacent to, the prostate gland. A patient-specific target anatomy for seed placement is usually determined by contouring a set of collected transrectal ultrasound images prior to implantation. The standard of care in prostate brachytherapy is to delineate the clinical target anatomy, which closely follows the real prostate boundary. Subsequently, the boundary is dilated with respect to the clinical guidelines to determine a planning target volume. Manual contouring of these two anatomical targets is a tedious task with relatively high observer variability. In this work, we aim to reduce the segmentation variability and planning time by proposing an efficient learning-based multi-label segmentation algorithm. We incorporate a sparse representation approach in our methodology to learn a dictionary of sparse joint elements consisting of images and clinical and planning target volume segmentations. The generated dictionary inherently captures the relationships among elements, which also reflect the institutional clinical guidelines. The proposed multi-label segmentation method is evaluated on a dataset of 590 brachytherapy treatment records by 5-fold cross-validation. We show clinically acceptable instantaneous segmentation results for both target volumes.
8.
Abstract
This paper presents a two-step, semi-automated method for reconstructing a three-dimensional (3D) shape of the prostate from a 3D transrectal ultrasound (TRUS) image. While the method has been developed for prostate ultrasound imaging, it is potentially applicable to other organs of the body and other imaging modalities. The proposed method takes as input a 3D TRUS image and generates a watertight 3D surface model of the prostate. In the first step, the system lets the user visualize and navigate through the input volumetric image by displaying cross-sectional views oriented in arbitrary directions. The user then draws partial or full contours on selected cross-sectional views. In the second step, the method automatically generates a watertight 3D surface of the prostate by fitting a deformable spherical template to the set of user-specified contours. Since the method allows the user to select the best cross-sectional directions and draw only clearly recognizable partial or full contours, the user can avoid time-consuming and inaccurate guesswork about where the prostate contours are located. By avoiding the noisy, incomprehensible portions of the TRUS image, the proposed method yields more accurate prostate shapes than conventional methods that demand complete cross-sectional contours selected manually, or automatically using an image processing tool. Our experiments confirmed that a 3D watertight surface of the prostate can be generated within five minutes, even from a volumetric image with a high level of speckle and shadow noise.
Affiliation(s)
- Kenji Shimada (corresponding author): Department of Mechanical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
9. Chilali O, Ouzzane A, Diaf M, Betrouni N. A survey of prostate modeling for image analysis. Comput Biol Med 2014; 53:190-202. [PMID: 25156801] [DOI: 10.1016/j.compbiomed.2014.07.019]
Affiliation(s)
- O Chilali: Inserm U703, 152 rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
- A Ouzzane: Inserm U703, 152 rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Urology Department, Claude Huriez Hospital, Lille University Hospital, France
- M Diaf: Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
- N Betrouni: Inserm U703, 152 rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France
10. Wu Y, Liu G, Huang M, Guo J, Jiang J, Yang W, Chen W, Feng Q. Prostate segmentation based on variant scale patch and local independent projection. IEEE Trans Med Imaging 2014; 33:1290-1303. [PMID: 24893258] [DOI: 10.1109/tmi.2014.2308901]
Abstract
Accurate segmentation of the prostate in computed tomography (CT) images is important in image-guided radiotherapy; however, the task remains difficult. In this study, an automatic framework is designed for prostate segmentation in CT images. We propose a novel image feature extraction method, namely, the variant scale patch, which can provide rich image information in a low-dimensional feature space. We assume that the samples from different classes lie on different nonlinear submanifolds and design a new segmentation criterion called local independent projection (LIP). In our method, a dictionary containing training samples is constructed. To utilize the latest image information, we use an online update strategy to construct this dictionary. In the proposed LIP, locality is emphasized rather than sparsity; local anchor embedding is performed to determine the dictionary coefficients. Several morphological operations are performed to improve the achieved results. The proposed method has been evaluated on 330 3-D images of 24 patients. Results show that the proposed method is robust and effective in segmenting the prostate in CT images.
11. Kwak K, Yoon U, Lee DK, Kim GH, Seo SW, Na DL, Shim HJ, Lee JM. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening. Magn Reson Imaging 2013; 31:1190-1196. [DOI: 10.1016/j.mri.2013.04.008]
12. Ghose S, Oliver A, Mitra J, Martí R, Lladó X, Freixenet J, Sidibé D, Vilanova JC, Comet J, Meriaudeau F. A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001]
13. Kim SG, Seo YG. A TRUS Prostate Segmentation using Gabor Texture Features and Snake-like Contour. Journal of Information Processing Systems 2013. [DOI: 10.3745/jips.2013.9.1.103]
14.
Abstract
In this paper, we propose a new prostate computed tomography (CT) segmentation method for image-guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, patch-based representation in the discriminative feature space with logistic sparse LASSO is used as the anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under the sparse representation framework is designed to segment the prostate in the new treatment images, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal mean principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful context information in the same image for aiding the labeling of the remaining voxels. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database consisting of 24 patients, where each patient has more than 10 treatment images, and further compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than any other method under comparison.
Affiliation(s)
- Shu Liao: Department of Radiology and Biomedical Research Imaging Center (BRIC), Chapel Hill, NC 27599, USA
15. Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput Methods Programs Biomed 2012; 108:262-287. [PMID: 22739209] [DOI: 10.1016/j.cmpb.2012.04.006]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts like shadow pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images. In magnetic resonance (MR) images, however, the superior soft-tissue contrast highlights the large variability in shape, size and texture inside the prostate. In contrast, the poor soft-tissue contrast between the prostate and surrounding tissues in computed tomography (CT) images poses its own challenge to accurate segmentation. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that first groups the algorithms and then points out the main advantages and drawbacks of each strategy. We provide a comprehensive description of the existing methods in all three modalities, highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided. A quantitative comparison of the results as reported in the literature is also presented.
Affiliation(s)
- Soumya Ghose: Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain
16.
Abstract
PURPOSE Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. METHODS This segmentation method utilizes a statistical shape model, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate prostate and nonprostate tissue. The method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. Weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard, and a variety of metrics are used to evaluate the performance of the segmentation method. RESULTS The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and the sensitivity is 87.7% ± 4.9%. CONCLUSIONS The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate.
Affiliation(s)
- Hamed Akbari: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
17.
Abstract
Automatic segmentation of the prostate in CT images plays an important role in medical image analysis and image-guided radiation therapy. It remains a challenging problem mainly due to three issues: First, the image contrast between the prostate and its surrounding tissues is low in prostate CT images, and no obvious boundaries can be observed. Second, the unpredictable prostate motion causes large position variations of the prostate in the treatment images scanned on different treatment days. Third, the uncertain presence of bowel gas in treatment images significantly changes the image appearance, even for images taken from the same patient. To address these issues, we propose a feature-based learning framework for accurate prostate localization in CT images. The main contributions of the proposed method lie in the following aspects: (1) Anatomical features are extracted from input images and adopted as signatures for each voxel. The most robust and informative features are identified by the feature selection process to help localize the prostate. (2) Regions with salient features but irrelevant to the localization of the prostate, such as regions filled with bowel gas, are automatically filtered out by the proposed method. (3) An online update mechanism is adopted to adaptively combine both population information and patient-specific information to localize the prostate. The proposed method is evaluated on a CT prostate dataset of 24 patients, where each patient has more than 10 longitudinal images scanned at different treatment times. It is also compared with several state-of-the-art prostate localization algorithms for CT images, and the experimental results demonstrate that the proposed method achieves the highest localization accuracy among all the methods under comparison.
18. Yang X, Fei B. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning. Proc SPIE Int Soc Opt Eng 2012; 8316:83162O. [PMID: 24027622] [DOI: 10.1117/12.912188]
Abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs), which then segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
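The Gabor texture features mentioned above come from convolving the image with oriented Gabor kernels, i.e. a Gaussian envelope modulating a sinusoidal carrier; a minimal kernel generator in plain Python (parameter values are generic, not the authors' settings):

```python
import math

def gabor_kernel(size, sigma, theta, wavelength, psi=0.0, gamma=1.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope of width
    sigma (with aspect ratio gamma) modulating a cosine carrier of the
    given wavelength, oriented at angle theta (radians), phase psi."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + (gamma * yr) ** 2)
                                / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xr / wavelength + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

A filter bank is then just this kernel evaluated over several orientations (theta) and scales (sigma, wavelength), with each convolution response serving as one texture feature channel.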
Affiliation(s)
- Xiaofeng Yang: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
19. Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. Proc SPIE Int Soc Opt Eng 2012. [PMID: 22708023] [DOI: 10.1117/12.912182]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
Affiliation(s)
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329

20
Abstract
Prostate surface detection from ultrasound images plays a key role in our recently developed ultrasound guided robotic biopsy system. However, due to the low contrast, speckle noise and shadowing in ultrasound images, this still remains a difficult task. In the current system, a 3D prostate surface is reconstructed from a sequence of 2D outlines, which are performed manually. This is arduous and the results depend heavily on the user's expertise. This paper presents a new practical method, called Evolving Bubbles, based on the level set method to semi-automatically detect the prostate surface from transrectal ultrasound (TRUS) images. To produce good results, a few initial bubbles are simply specified by the user from five particular slices based on the prostate shape. When the initial bubbles evolve along their normal directions, they expand, shrink, merge and split, and finally are attracted to the desired prostate surface. Meanwhile, to remedy the boundary leaking problem caused by gaps or weak boundaries, domain specific knowledge of the prostate and statistical information are incorporated into the Evolving Bubbles. We apply the bubbles model to eight 3D and four stacks of 2D TRUS images and the results show its effectiveness.
Affiliation(s)
- Fan Shao, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, 308433, Singapore
- Keck Voon Ling, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Louis Phee, School of Mechanical and Production Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Wan Sing Ng, School of Mechanical and Production Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Di Xiao, Singapore General Hospital, Outram Road, 169608, Singapore

21
Wong A, Scharcanski J. Fisher-Tippett region-merging approach to transrectal ultrasound prostate lesion segmentation. IEEE Trans Inf Technol Biomed 2011; 15:900-7. [PMID: 21824854] [DOI: 10.1109/titb.2011.2163724] [Citation(s) in RCA: 5]
Abstract
In this paper, a computerized approach to segmenting prostate lesions in transrectal ultrasound (TRUS) images is presented. The segmentation of prostate lesions from TRUS images is very challenging due to issues, such as poor contrast, low SNRs, and irregular shape variations. To address these issues, a novel approach is employed to segment the lesions from the surrounding prostate, where region merging is performed via a region-merging likelihood function based on regional statistics, as well as Fisher-Tippett statistics. Experimental results using TRUS prostate images demonstrate that the proposed Fisher-Tippett region-merging approach achieves more accurate segmentation of prostate lesions when compared to other segmentation methods.
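A hedged sketch of the idea behind likelihood-based region merging: fit a Fisher-Tippett (Gumbel-type) model to each region's intensities, then compare the likelihood of two separate models against one pooled model. The moment-based fit and the score below are illustrative, not the paper's exact likelihood function:

```python
import math
import statistics

def gumbel_fit(samples):
    """Moment-based fit of a Gumbel (Fisher-Tippett type I) distribution:
    scale = std * sqrt(6)/pi, location = mean - gamma * scale."""
    mean = statistics.fmean(samples)
    std = statistics.pstdev(samples)
    scale = std * math.sqrt(6.0) / math.pi
    loc = mean - 0.5772156649 * scale  # Euler-Mascheroni constant
    return loc, scale

def gumbel_loglik(samples, loc, scale):
    """Log-likelihood of samples under Gumbel(loc, scale)."""
    ll = 0.0
    for x in samples:
        z = (x - loc) / scale
        ll += -math.log(scale) - z - math.exp(-z)
    return ll

def merge_score(region_a, region_b):
    """Likelihood-ratio-style merging score: likelihood lost by describing
    both regions with one pooled model instead of two separate ones.
    Small scores favour merging the regions."""
    pooled = region_a + region_b
    sep = (gumbel_loglik(region_a, *gumbel_fit(region_a))
           + gumbel_loglik(region_b, *gumbel_fit(region_b)))
    joint = gumbel_loglik(pooled, *gumbel_fit(pooled))
    return sep - joint

similar = [10.0, 11.0, 12.0, 13.0] * 3   # made-up intensities, similar regions
distant = [30.0, 31.0, 32.0, 33.0] * 3   # made-up intensities, a distinct region
print(merge_score(similar, similar[::-1]) < merge_score(similar, distant))  # True
```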
Affiliation(s)
- Alexander Wong, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada

22
Akbari H, Yang X, Halig LV, Fei B. 3D Segmentation of Prostate Ultrasound Images Using Wavelet Transform. Proc SPIE Int Soc Opt Eng 2011; 7962:79622K. [PMID: 22468205] [DOI: 10.1117/12.878072] [Citation(s) in RCA: 17]
Abstract
The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) are located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classifying prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, the W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels from the three planes are overlaid on a prostate probability model, which was created from 10 segmented prostate data sets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function over these labelings in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
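The final per-voxel decision described above can be pictured as a weighted vote over the three plane labels and the probability label; the weights below are made up for illustration and are not the ones tuned in the paper:

```python
def fuse_voxel_label(plane_labels, prior_prob, plane_weight=0.25, prior_weight=0.25):
    """Combine three per-plane tentative labels (1 = prostate, 0 = background)
    with a shape-prior probability into one binary decision.
    Weights are illustrative, not the paper's learned weight function."""
    score = sum(plane_weight * label for label in plane_labels) + prior_weight * prior_prob
    return 1 if score >= 0.5 else 0

# sagittal and transverse say prostate, coronal disagrees, prior says 0.8
print(fuse_voxel_label([1, 0, 1], 0.8))  # 1 (score 0.7)
```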
Affiliation(s)
- Hamed Akbari, Department of Radiology, Emory University, 1841 Clifton Rd NE, Atlanta, GA 30329, USA

23
Abstract
Prostate segmentation in 3-D transrectal ultrasound images is an important step in defining the intra-operative plan for high intensity focused ultrasound (HIFU) therapy. This paper presents two semi-automatic approaches, based on discrete dynamic contours and optimal surface detection. Both operate in 3-D and require minimal user interaction. They are considered alone or sequentially combined, with and without post-regularization, and applied to anisotropic and isotropic volumes. Their performance was evaluated, using different metrics, on a set of 28 3-D images by comparison with two expert delineations. For the most efficient algorithm, the symmetric average surface distance was found to be 0.77 mm.
Affiliation(s)
- Carole Garnier, LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Jean-Jacques Bellanger, LTSI, INSERM U642, Université de Rennes I, Rennes, France
- Ke Wu, CRIBS, Centre de Recherche en Information Biomédicale sino-français, INSERM / Université de Rennes I / Southeast University, Rennes, France; LIST, Laboratory of Image Science and Technology, Southeast University, Si Pai Lou 2, Nanjing 210096, China
- Huazhong Shu, CRIBS, INSERM / Université de Rennes I / Southeast University, Rennes, France; LIST, Southeast University, Nanjing 210096, China
- Nathalie Costet, LTSI, INSERM U642, Université de Rennes I, Rennes, France
- Romain Mathieu, Service d'urologie, CHU Rennes, Hôpital Pontchaillou, Université de Rennes I, 2 rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
- Renaud De Crevoisier, LTSI, INSERM U642, Université de Rennes I, Rennes, France; Département de radiothérapie, CRLCC Eugène Marquis, 35000 Rennes, France
- Jean-Louis Coatrieux (correspondence), LTSI, INSERM U642, Université de Rennes I, Rennes, France; CRIBS, INSERM / Université de Rennes I / Southeast University, Rennes, France

24
Pasquier D, Peyrodie L, Denis F, Pointreau Y, Béra G, Lartigau E. [Automatic image segmentation for treatment planning in radiotherapy]. Cancer Radiother 2010; 14 Suppl 1:S6-13. [PMID: 21129671] [DOI: 10.1016/S1278-3218(10)70003-2] [Citation(s) in RCA: 7]
Abstract
One drawback of the growth in conformal radiotherapy and image-guided radiotherapy is the increased time needed to define the volumes of interest. This also results in inter- and intra-observer variability. However, developments in computing and image processing have enabled these tasks to be partially or totally automated. This article will provide a detailed description of the main principles of image segmentation in radiotherapy, its applications and the most recent results in a clinical context.
25
Feng Q, Foskey M, Chen W, Shen D. Segmenting CT prostate images using population and patient-specific statistics for radiotherapy. Med Phys 2010; 37:4121-32. [PMID: 20879572] [DOI: 10.1118/1.3464799] [Citation(s) in RCA: 63]
Abstract
PURPOSE In the segmentation of sequential treatment-time CT prostate images acquired in image-guided radiotherapy, accurately capturing the intrapatient variation of the patient under therapy is more important than capturing interpatient variation. However, with traditional deformable-model-based segmentation methods, it is difficult to capture intrapatient variation when the number of samples from the same patient is limited. This article presents a new deformable model, designed specifically for segmenting sequential CT images of the prostate, which leverages both population and patient-specific statistics to accurately capture the intrapatient variation of the patient under therapy. METHODS The novelty of the proposed method is twofold. First, a weighted combination of gradient and probability distribution function (PDF) features is used to build the appearance model that guides model deformation. The strengths of each feature type are emphasized by dynamically adjusting the weight between the profile-based gradient features and the local-region-based PDF features during the optimization process. An additional novel aspect of the gradient-based features is that, to alleviate feature inconsistency in the regions of gas and bone adjacent to the prostate, the optimal profile length at each landmark is calculated by statistically analyzing the intensity profiles in the training set. The resulting combined gradient-PDF feature produces more accurate and robust segmentations than general gradient features. Second, an online learning mechanism is used to build shape and appearance statistics that accurately capture intrapatient variation. RESULTS The performance of the proposed method was evaluated on 306 images of 24 patients. Compared to traditional gradient features, the proposed gradient-PDF combination features increased the segmentation success ratio by 5.2% (from 94.1% to 99.3%). To evaluate the effectiveness of the online learning mechanism, the authors compared a partial online update strategy with a full online update strategy: full online updating improved the mean DSC from 86.6% to 89.3%, a 2.8% gain. Adding manual modification before each online update gave the best performance, with the mean DSC and mean ASD reaching 92.4% and 1.47 mm, respectively. CONCLUSIONS The proposed prostate segmentation method provided accurate and robust segmentation of CT images even when the number of samples from the patient under radiotherapy was limited, and is thus suitable for clinical application.
Affiliation(s)
- Qianjin Feng, Biomedical Engineering College, South Medical University, Guangzhou, China

26
Gao Y, Sandhu R, Fichtinger G, Tannenbaum AR. A coupled global registration and segmentation framework with application to magnetic resonance prostate imagery. IEEE Trans Med Imaging 2010; 29:1781-94. [PMID: 20529727] [PMCID: PMC2988404] [DOI: 10.1109/tmi.2010.2052065] [Citation(s) in RCA: 16]
Abstract
Extracting the prostate from magnetic resonance (MR) imagery is a challenging and important task for medical image analysis and surgical planning. We present in this work a unified shape-based framework to extract the prostate from MR prostate imagery. In many cases, shape-based segmentation is a two-part problem. First, one must properly align a set of training shapes such that any variation in shape is not due to pose. Then segmentation can be performed under the constraint of the learnt shape. However, the general registration task of prostate shapes becomes increasingly difficult due to the large variations in pose and shape in the training sets, and is not readily handled through existing techniques. Thus, the contributions of this paper are twofold. We first explicitly address the registration problem by representing the shapes of a training set as point clouds. In doing so, we are able to exploit the more global aspects of registration via a certain particle filtering based scheme. In addition, once the shapes have been registered, a cost functional is designed to incorporate both the local image statistics as well as the learnt shape prior. We provide experimental results, which include several challenging clinical data sets, to highlight the algorithm's capability of robustly handling supine/prone prostate registration and the overall segmentation task.
Affiliation(s)
- Yi Gao, Schools of Electrical and Computer Engineering and Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Romeil Sandhu, Schools of Electrical and Computer Engineering and Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Gabor Fichtinger, School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Allen Robert Tannenbaum, Schools of Electrical and Computer Engineering and Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, and Department of Electrical Engineering, Technion-IIT, Haifa 32000, Israel

27
Su Y, Davis BJ, Furutani KM, Herman MG, Robb RA. Seed localization and TRUS-fluoroscopy fusion for intraoperative prostate brachytherapy dosimetry. Comput Aided Surg 2007; 12:25-34. [PMID: 17364656] [DOI: 10.3109/10929080601168239] [Citation(s) in RCA: 20]
Abstract
OBJECTIVE To develop and evaluate an integrated approach to intra-operative dosimetry for permanent prostate brachytherapy (PPB) by combining a fluoroscopy-based seed localization routine with a transrectal ultrasound (TRUS)-to-fluoroscopy fusion technique. MATERIALS AND METHODS Three-dimensional seed coordinates are reconstructed based on the two-dimensional seed locations identified from three fluoroscopic images acquired at different angles. A seed-based registration approach was examined in both simulation and phantom studies to register the seed locations identified from the fluoroscopic images to the TRUS images. Dose parameters were then evaluated and compared to CT-based dosimetry from a patient dataset. RESULTS Less than 0.2% error in the D90 value was observed using the TRUS-fluoroscopy image-fusion-based method relative to the CT-based post-implantation dosimetry. In the phantom study, an average distance of 3 mm was observed between the seeds identified from TRUS and the reconstructed seeds at registration. Isodose contours were displayed superimposed on the TRUS images. CONCLUSIONS Promising results were observed in this preliminary study of a TRUS-fluoroscopy fusion-based brachytherapy dosimetry analysis method, suggesting that the method is highly sensitive and calculates clinically relevant dosimetry, including the prostate D90. Further validation of the method is required for eventual clinical application.
Affiliation(s)
- Yi Su, Biomedical Imaging Resource, Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine, Rochester, Minnesota 55905, USA

28
Ghose S, Oliver A, Martí R, Lladó X, Freixenet J, Vilanova JC, Meriaudeau F. Texture Guided Active Appearance Model Propagation for Prostate Segmentation. In: Madabhushi A, Dowling J, Yan P, Fenster A, Abolmaesumi P, Hata N, editors. Prostate Cancer Imaging. Computer-Aided Diagnosis, Prognosis, and Intervention. Berlin: Springer Berlin Heidelberg; 2010. pp. 111-20. [DOI: 10.1007/978-3-642-15989-3_13] [Citation(s) in RCA: 3]
29
Brock KK, Nichol AM, Ménard C, Moseley JL, Warde PR, Catton CN, Jaffray DA. Accuracy and sensitivity of finite element model-based deformable registration of the prostate. Med Phys 2008; 35:4019-25. [PMID: 18841853] [DOI: 10.1118/1.2965263] [Citation(s) in RCA: 44]
Affiliation(s)
- Kristy K Brock, Radiation Medicine Program, Princess Margaret Hospital, University Health Network, and the University of Toronto, Toronto, Ontario M5G 2M9, Canada

30
Abstract
Prostate cancer diagnosis and treatment rely on segmentation of transrectal ultrasound (TRUS) prostate images. This is a challenging and difficult task due to weak prostate boundaries, speckle noise and the short range of gray levels. Advances in digital imaging techniques have made possible the acquisition of large volumes of TRUS prostate images, so there is considerable demand for automated segmentation systems. The local binary pattern (LBP) has been used for texture segmentation and analysis. Despite its promising performance for texture classification, it has not yet been considered for TRUS prostate segmentation. In this paper we introduce a medical texture local binary pattern operator designed for medical imaging applications where different tissues or microorganisms may exhibit extremely weak underlying textures that are impossible or very difficult for ordinary texture analysis approaches to classify. In the proposed method, the deformations of a level set contour are controlled by the medical texture local binary pattern operator.
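For reference, the standard 8-neighbour LBP operator that the proposed medical-texture variant builds on can be sketched as follows (the modified operator itself is not reproduced here):

```python
def lbp8(image, r, c):
    """Basic 8-neighbour local binary pattern code for pixel (r, c):
    each neighbour contributes a bit that is set when its intensity is
    greater than or equal to the centre value."""
    centre = image[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[5, 5, 5],
       [5, 4, 3],
       [5, 3, 3]]          # toy intensity patch
print(lbp8(img, 1, 1))  # 199: bits 0,1,2 (top row) and 6,7 (left column) are set
```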
Affiliation(s)
- Nezamoddin N Kachouie, Department of Systems Design Engineering, University of Waterloo, 200 University Ave. West, Waterloo, Ontario, Canada

31
Yang F, Suri J, Fenster A. Segmentation of prostate from 3-D ultrasound volumes using shape and intensity priors in level set framework. Conf Proc IEEE Eng Med Biol Soc 2006:2341-4. [PMID: 17945709] [DOI: 10.1109/iembs.2006.260000] [Citation(s) in RCA: 7]
Abstract
This paper presents a fully automatic prostate segmentation system for transrectal ultrasound images based on 3-D shape and intensity priors. 2-D manual segmentations from training image data are stacked to create a coarse 3-D shape, and min/max flow is used to transform each coarse shape into a smooth 3-D surface. Principal component analysis is used to extract the 3-D shape modes from the training data sets. In a Bayesian inference framework, the nonlinear shape model is integrated with a nonparametric intensity prior to define a region-based energy function. The energy is minimized in a level set framework, and the convergence control parameters lead to the final segmentation. The developed method was tested on 3-D transrectal ultrasound images and its performance compared with manually defined ground truth. The correct segmentation rate is 0.82.
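The PCA shape prior enters such segmentations as "mean shape plus weighted modes", x = x_mean + sum_k b_k * phi_k; a toy sketch with a single hypothetical mode:

```python
def reconstruct_shape(mean_shape, modes, coeffs):
    """Statistical shape model reconstruction: a new shape is the mean shape
    plus a weighted sum of principal-component modes. mean_shape and each
    mode are flattened coordinate vectors of equal length."""
    shape = list(mean_shape)
    for b, phi in zip(coeffs, modes):
        for i, v in enumerate(phi):
            shape[i] += b * v
    return shape

# toy two-point contour stored as (x0, y0, x1, y1)
mean = [0.0, 0.0, 1.0, 0.0]
modes = [[-0.5, 0.0, 0.5, 0.0]]  # hypothetical first mode: elongation along x
print(reconstruct_shape(mean, modes, [1.0]))  # [-0.5, 0.0, 1.5, 0.0]
```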
Affiliation(s)
- Fuxing Yang, Diagnostic Ultrasound, Bothell, WA 98021, USA

32
Ding M, Chiu B, Gyacskov I, Yuan X, Drangova M, Downey DB, Fenster A. Fast prostate segmentation in 3D TRUS images based on continuity constraint using an autoregressive model. Med Phys 2007; 34:4109-25. [PMID: 18072477] [DOI: 10.1118/1.2777005] [Citation(s) in RCA: 35]
Abstract
In this article a new slice-based 3D prostate segmentation method based on a continuity constraint, implemented as an autoregressive (AR) model is described. In order to decrease the propagated segmentation error produced by the slice-based 3D segmentation method, a continuity constraint was imposed in the prostate segmentation algorithm. A 3D ultrasound image was segmented using the slice-based segmentation method. Then, a cross-sectional profile of the resulting contours was obtained by intersecting the 2D segmented contours with a coronal plane passing through the midpoint of the manually identified rotational axis, which is considered to be the approximate center of the prostate. On the coronal cross-sectional plane, these intersections form a set of radial lines directed from the center of the prostate. The lengths of these radial lines were smoothed using an AR model. Slice-based 3D segmentations were performed in the clockwise and in the anticlockwise directions, where clockwise and anticlockwise are defined with respect to the propagation directions on the coronal view. This resulted in two different segmentations for each 2D slice. For each pair of unmatched segments, in which the distance between the contour generated clockwise and that generated anticlockwise was greater than 4 mm, a method was used to select the optimal contour. Experiments performed using 3D prostate ultrasound images of nine patients demonstrated that the proposed method produced accurate 3D prostate boundaries without manual editing. The average distance between the proposed method and manual segmentation was 1.29 mm. The average intraobserver coefficient of variation (i.e., the standard deviation divided by the average volume) of the boundaries segmented by the proposed method was 1.6%. The average segmentation time of a 352 x 379 x 704 image on a Pentium IV 2.8 GHz PC was 10 s.
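The continuity constraint can be illustrated with a first-order autoregressive-style recursion over neighbouring radial line lengths (the paper's actual AR model order and coefficients are not reproduced here; the coefficient and radii below are made up):

```python
def ar1_smooth(radii, a=0.6):
    """AR(1)-style recursive smoothing of radial contour lengths: each
    smoothed radius depends on its predecessor, enforcing continuity
    between neighbouring radial lines. The coefficient a is illustrative."""
    smoothed = [radii[0]]
    for r in radii[1:]:
        smoothed.append(a * smoothed[-1] + (1.0 - a) * r)
    return smoothed

radii = [20.0, 20.5, 35.0, 20.4, 20.2]  # one outlier radius from a leaked contour
print(ar1_smooth(radii))  # the 35 mm outlier is pulled back toward its neighbours
```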
Affiliation(s)
- Mingyue Ding, Imaging Research Laboratories, Robarts Research Institute, 100 Perth Drive, London, Ontario, Canada

33
Zaim A, Yi T, Keck R. Feature-Based Classification of Prostate Ultrasound Images using Multiwavelet and Kernel Support Vector Machines. Proc Int Joint Conf Neural Netw (IJCNN) 2007. [DOI: 10.1109/ijcnn.2007.4370968] [Citation(s) in RCA: 10]
34
Moradi M, Mousavi P, Abolmaesumi P. Computer-aided diagnosis of prostate cancer with emphasis on ultrasound-based approaches: a review. Ultrasound Med Biol 2007; 33:1010-28. [PMID: 17482752] [DOI: 10.1016/j.ultrasmedbio.2007.01.008] [Citation(s) in RCA: 22]
Abstract
This paper reviews the state of the art in computer-aided diagnosis of prostate cancer and focuses, in particular, on ultrasound-based techniques for detection of cancer in prostate tissue. The current standard procedure for diagnosis of prostate cancer, i.e., ultrasound-guided biopsy followed by histopathological analysis of tissue samples, is invasive and produces a high rate of false negatives resulting in the need for repeated trials. It is against these backdrops that the search for new methods to diagnose prostate cancer continues. Image-based approaches (such as MRI, ultrasound and elastography) represent a major research trend for diagnosis of prostate cancer. Due to the integration of ultrasound imaging in the current clinical procedure for detection of prostate cancer, we specifically provide a more detailed review of methodologies that use ultrasound RF-spectrum parameters, B-scan texture features and Doppler measures for prostate tissue characterization. We present current and future directions of research aimed at computer-aided detection of prostate cancer and conclude that ultrasound is likely to play an important role in the field.
Affiliation(s)
- Mehdi Moradi, School of Computing, Queen's University, Kingston, Ontario, Canada

35
Siadat MR, Soltanian-Zadeh H, Elisevich KV. Knowledge-based localization of hippocampus in human brain MRI. Comput Biol Med 2007; 37:1342-60. [PMID: 17339035] [PMCID: PMC4502929] [DOI: 10.1016/j.compbiomed.2006.12.010] [Citation(s) in RCA: 8]
Abstract
We present a novel and efficient method for localization of human brain structures such as hippocampus. Landmark localization is important for segmentation and registration. This method follows a statistical roadmap, consisting of anatomical landmarks, to reach the desired structures. Using a set of desired and undesired landmarks, identified on a training set, we estimate Gaussian models and determine optimal search areas for desired landmarks. The statistical models form a set of rules to evaluate the extracted landmarks during the search procedure. When applied on 900 MR images of 10 epileptic patients, this method demonstrated an overall success rate of 83%.
Affiliation(s)
- Mohammad-Reza Siadat, Radiology Image Analysis Laboratory, Department of Diagnostic Radiology, Henry Ford Health System, One Ford Place, Detroit, MI 48202, USA

36
Penna MA, Dines KA, Seip R, Carlson RF, Sanghvi NT. Modeling prostate anatomy from multiple view TRUS images for image-guided HIFU therapy. IEEE Trans Ultrason Ferroelectr Freq Control 2007; 54:52-69. [PMID: 17225800] [DOI: 10.1109/tuffc.2007.211] [Citation(s) in RCA: 7]
Abstract
Current planning methods for transrectal high-intensity focused ultrasound treatment of prostate cancer rely on manually defining treatment regions in 15-20 sector transrectal ultrasound (TRUS) images of the prostate. Although effective, it is desirable to reduce user interaction time by identifying functionally related anatomic structures (segmenting), then automatically laying out treatment sites using these structures as a guide. Accordingly, a method has been developed to effectively generate solid three-dimensional (3-D) models of the prostate, urethra, and rectal wall from boundary trace data. Modeling the urethra and rectal wall are straightforward, but modeling the prostate is more difficult and has received much attention in the literature. New results presented here are aimed at overcoming many of the limitations of previous approaches to modeling the prostate while using boundary traces obtained via manual tracing in as few as 5 sector and 3 linear images. The results presented here are based on a new type of surface, the Fourier ellipsoid, and the use of sector and linear TRUS images. Tissue-specific 3-D models will ultimately permit finer control of energy deposition and more selective destruction of cancerous regions while sparing critical neighboring structures.
Affiliation(s)
- Michael A Penna, Department of Mathematics, Indiana University-Purdue University, Indianapolis, IN, USA

37
Tutar IB, Pathak SD, Gong L, Cho PS, Wallner K, Kim Y. Semiautomatic 3-D prostate segmentation from TRUS images using spherical harmonics. IEEE Trans Med Imaging 2006; 25:1645-54. [PMID: 17167999] [DOI: 10.1109/tmi.2006.884630] [Citation(s) in RCA: 21]
Abstract
Prostate brachytherapy quality assessment should be performed while the patient is still on the operating table, since this would enable physicians to implant additional seeds immediately if necessary, reducing costs and improving patient outcomes. The seed placement procedure is readily performed under fluoroscopy and ultrasound guidance. It has therefore been proposed that seed locations be reconstructed from fluoroscopic images and prostate boundaries be identified in ultrasound images to perform dosimetry in the operating room. However, a key hurdle must be overcome to perform ultrasound- and fluoroscopy-based dosimetry: manually outlining prostate boundaries in ultrasound images is highly time-consuming, and no existing method enables physicians to identify three-dimensional (3-D) prostate boundaries in postimplant ultrasound images in a fast and robust fashion. In this paper, we propose a new method in which segmentation is defined in an optimization framework as fitting the best surface to the underlying images under shape constraints. To derive these constraints, we modeled the shape of the prostate using spherical harmonics of degree eight and performed statistical analysis on the shape parameters. After user initialization, our algorithm identifies the prostate boundaries in 2 min on average. For validation, we collected 30 postimplant prostate volume sets, each consisting of axial transrectal ultrasound images acquired at 1-mm increments. For each volume set, three experts outlined the prostate boundaries first manually and then using our algorithm. Treating the average of the manual boundaries as the ground truth, we computed the segmentation error. The overall mean absolute distance error was 1.26 +/- 0.41 mm, and the percent volume overlap was 83.5 +/- 4.2%. We found the segmentation error to be slightly less than the clinically observed interobserver variability.
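The shape model described above lends itself to a compact sketch: a star-shaped surface is encoded by its radial function r(theta, phi), expanded in a real spherical-harmonic basis whose coefficients are fitted by least squares. The degree (4), the synthetic surface, and the fitting scheme below are illustrative assumptions, not the authors' implementation (which uses degree eight with statistically derived shape constraints).

```python
import numpy as np
from scipy.special import lpmv  # associated Legendre functions

L = 4  # maximum harmonic degree (the paper uses 8)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)    # azimuth
phi = np.arccos(rng.uniform(-1, 1, 500))  # polar angle

# Synthetic "prostate-like" radius: a smooth low-degree bump on a sphere.
r = 1.0 + 0.15 * np.cos(phi) + 0.05 * np.sin(phi) ** 2 * np.cos(2 * theta)

# Real spherical-harmonic basis: P_l^{|m|}(cos phi) times cos/sin(m theta).
cols = []
for l in range(L + 1):
    for m in range(-l, l + 1):
        P = lpmv(abs(m), l, np.cos(phi))
        cols.append(P * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta)))
A = np.stack(cols, axis=1)  # 500 samples x (L+1)^2 basis functions

# Least-squares shape coefficients, then reconstruction error.
coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
rms = float(np.sqrt(np.mean((A @ coeffs - r) ** 2)))
```

Because the synthetic surface lies in the span of low-degree harmonics, the fit is essentially exact; in the paper, statistics over such coefficient vectors supply the shape constraints for the surface optimization.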
Affiliation(s)
- Ismail B Tutar
- Image Computing Systems Laboratory, Departments of Electrical Engineering and Bioengineering, University of Washington, Seattle, WA 98195, USA
|
38
|
Hodge AC, Fenster A, Downey DB, Ladak HM. Prostate boundary segmentation from ultrasound images using 2D active shape models: optimisation and extension to 3D. Comput Methods Programs Biomed 2006; 84:99-113. [PMID: 16930764 DOI: 10.1016/j.cmpb.2006.07.001] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2006] [Revised: 06/28/2006] [Accepted: 07/07/2006] [Indexed: 05/11/2023]
Abstract
Boundary outlining, or segmentation, of the prostate is an important task in diagnosis and treatment planning for prostate cancer. This paper describes an algorithm based on two-dimensional (2D) active shape models (ASM) for semi-automatic segmentation of the prostate boundary from ultrasound images. Optimisation of the 2D ASM for prostatic ultrasound was done first by examining ASM construction and image search parameters. Extension of the algorithm to three-dimensional (3D) segmentation was then done using rotational-based slicing. Evaluation of the 3D segmentation algorithm used distance- and volume-based error metrics to compare algorithm generated boundary outlines to gold standard (manually generated) boundary outlines. Minimum description length landmark placement for ASM construction, and specific values for constraints and image search were found to be optimal. Evaluation of the algorithm versus gold standard boundaries found an average mean absolute distance of 1.09+/-0.49 mm, an average percent absolute volume difference of 3.28+/-3.16%, and a 5x speed increase versus manual segmentation.
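The ASM shape constraint at the core of the method can be sketched compactly: PCA on training landmark vectors, then clamping each mode of a candidate shape to within three standard deviations of the training distribution. The training shapes, variance-retention threshold, and clamp width below are illustrative assumptions, not the paper's optimised parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_landmarks = 40, 16
t = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)

# Training set: ellipses with randomly varying axis lengths (stacked x, y).
shapes = np.array([
    np.concatenate([(1.0 + 0.2 * rng.standard_normal()) * np.cos(t),
                    (0.8 + 0.2 * rng.standard_normal()) * np.sin(t)])
    for _ in range(n_shapes)
])

mean = shapes.mean(axis=0)
X = shapes - mean
# PCA via SVD; keep the modes covering ~98% of training variance.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s ** 2 / (n_shapes - 1)
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.98) + 1)
P, lam = Vt[:k].T, var[:k]  # modes (columns) and their variances

def constrain(shape):
    """Project a shape onto the model and clamp each mode to +/-3 sd."""
    b = P.T @ (shape - mean)
    b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))
    return mean + P @ b

wild = mean + 10 * np.sqrt(lam[0]) * P[:, 0]  # implausible candidate
tamed = constrain(wild)
b_tamed = P.T @ (tamed - mean)
```

During image search, each update of the landmark positions is passed through such a constraint so the contour stays within the space of plausible prostate shapes.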
Affiliation(s)
- Adam C Hodge
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
|
39
|
Abstract
This paper reviews ultrasound segmentation methods in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we review articles by clinical application to highlight the approaches that have been investigated and the degree of validation performed in different clinical domains. Then, we present a classification of methodology in terms of the use of prior information. We conclude by selecting ten papers that present original ideas with demonstrated clinical usefulness or potential specific to the ultrasound segmentation problem.
Affiliation(s)
- J Alison Noble
- Department of Engineering Science, University of Oxford, UK.
|
40
|
Zhu Y, Williams S, Zwiggelaar R. Computer technology in detection and staging of prostate carcinoma: A review. Med Image Anal 2006; 10:178-99. [PMID: 16150630 DOI: 10.1016/j.media.2005.06.003] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2004] [Revised: 02/02/2005] [Accepted: 06/22/2005] [Indexed: 11/20/2022]
Abstract
After two decades of increasing interest and research activity, computer-assisted diagnostic approaches are reaching the stage where more routine deployment in clinical practice is becoming a possibility [Krupinski, E.A., 2004. Computer-aided detection in clinical environment: Benefits and challenges for radiologists. Radiology 231, 7-9]. This is particularly the case in the analysis of mammographic images [Helvie, M.A., Hadjiiski, L., Makariou, E., Chan, H.P., Petrick, N., Sahiner, B., Lo, S.C., Freedman, M., Adler, D., Bailey, J., Blane, C., Hoff, D., Hunt, K., Joynt, L., Klein, K., Paramagul, C., Patterson, S.K., Roubidoux, M.A., 2004. Sensitivity of noncommercial computer-aided detection system for mammographic breast cancer detection: pilot clinical trial. Radiology 231, 208-214] and in the detection of pulmonary nodules [Reeves, A.P., Kostis, W.J., 2000. Computer-aided diagnosis for lung cancer. Radiol. Clin. North Am. 38, 497-509]. However, similar approaches can be applied more widely with the promise of increasing clinical utility in other areas. We review how computer-aided approaches may be applied in the diagnosis and staging of prostatic cancer. The current status of computer technology is reviewed, covering artificial neural networks for detection and staging, computerised biopsy simulation, and computer-assisted analysis of ultrasound and magnetic resonance images.
Affiliation(s)
- Yanong Zhu
- School of Computing Sciences, University of East Anglia, Norwich, Norfolk NR4 7TJ, UK
|
41
|
Abstract
Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure with the domain knowledge of the image class. The segmentation begins on a low-resolution image with a closed DDC model defined by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need for user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a contour manually outlined by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 +/- 0.51 pixels (0.54 +/- 0.10 mm). We also showed that the algorithm is insensitive to variations in the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.
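The basic DDC update, an internal smoothing force plus an external force pulling vertices toward image edges, can be sketched on an idealized image. The "edge" here is a disk boundary whose attraction is computed analytically, and the weights are illustrative; the paper's FIS-driven edge operators and vertex relocation are not reproduced.

```python
import numpy as np

R = 30.0            # radius of the synthetic "prostate" edge
n, steps = 40, 300  # contour vertices and iterations
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Deliberately poor initialization: an off-radius, noisy contour.
rng = np.random.default_rng(2)
rad = 20.0 + 3.0 * rng.standard_normal(n)
pts = np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)

w_int, w_ext = 0.3, 0.4  # illustrative force weights
for _ in range(steps):
    # Internal force: pull each vertex toward its neighbours' midpoint.
    smooth = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
    # External force: radial pull toward the edge at radius R.
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    ext = (R - r) * pts / np.maximum(r, 1e-9)
    pts = pts + w_int * smooth + w_ext * ext

final_r = np.linalg.norm(pts, axis=1)  # settles just inside radius R
```

The smoothing term slightly shrinks a circular contour, so the equilibrium sits marginally inside the true edge; this is the usual tension between internal and external forces that the paper's adaptive operators are designed to balance.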
Affiliation(s)
- Nuwan D Nanayakkara
- Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario N6A5B9, Canada.
|
42
|
Abstract
This paper presents a novel deformable model for automatic segmentation of prostates from three-dimensional ultrasound images by statistical matching of both shape and texture. A set of Gabor support vector machines (G-SVMs) is positioned on different patches of the model surface and trained to adaptively capture texture priors of ultrasound images for differentiation of prostate and non-prostate tissues in different zones around the prostate boundary. Each G-SVM consists of a Gabor filter bank for extraction of rotation-invariant texture features and a kernel support vector machine for robust differentiation of textures. In the deformable segmentation procedure, these pretrained G-SVMs are used to tentatively label voxels around the surface of the deformable model as prostate or non-prostate tissue by statistical texture matching. Subsequently, the surface of the deformable model is driven to the boundary between the tentatively labeled prostate and non-prostate tissues. Since the tissue labeling step and the label-based surface deformation step depend on each other, the two steps are repeated until they converge. Experimental results on both synthesized and real data demonstrate good performance of the proposed model in segmenting the prostate from ultrasound images.
Affiliation(s)
- Yiqiang Zhan
- Section of Biomedical Image Analysis, Department of Radiology, University of Pennsylvania, Philadelphia 19104, USA.
|
43
|
Badiei S, Salcudean SE, Varah J, Morris WJ. Prostate segmentation in 2D ultrasound images using image warping and ellipse fitting. Med Image Comput Comput Assist Interv 2006; 9:17-24. [PMID: 17354751 DOI: 10.1007/11866763_3] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
This paper presents a new algorithm for the semi-automatic segmentation of the prostate from B-mode trans-rectal ultrasound (TRUS) images. The segmentation algorithm first uses image warping to make the prostate shape elliptical. Measurement points along the prostate boundary, obtained from an edge-detector, are then used to find the best elliptical fit to the warped prostate. The final segmentation result is obtained by applying a reverse warping algorithm to the elliptical fit. This algorithm was validated using manual segmentation by an expert observer on 17 midgland, pre-operative, TRUS images. Distance-based metrics between the manual and semi-automatic contours showed a mean absolute difference of 0.67 +/- 0.18 mm, which is significantly lower than inter-observer variability. Area-based metrics showed an average sensitivity greater than 97% and average accuracy greater than 93%. The proposed algorithm was almost two times faster than manual segmentation and has potential for real-time applications.
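The ellipse-fitting step above can be sketched with plain linear least squares. An axis-aligned ellipse model is assumed here for simplicity (the warped gland is fitted, so orientation is not essential to the idea); the data points are synthetic stand-ins for the edge-detector measurements.

```python
import numpy as np

# Synthetic boundary points on an ellipse: center (2, 3), semi-axes 5 and 3.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x = 2 + 5 * np.cos(t)
y = 3 + 3 * np.sin(t)

# Fit A x^2 + B y^2 + D x + E y = 1 in the least-squares sense.
M = np.stack([x ** 2, y ** 2, x, y], axis=1)
A, B, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]

# Complete the squares to recover center and semi-axes.
x0, y0 = -D / (2 * A), -E / (2 * B)
c = 1 + D ** 2 / (4 * A) + E ** 2 / (4 * B)
a, b = np.sqrt(c / A), np.sqrt(c / B)  # recovered semi-axes
```

With noisy edge points the same least-squares solve still applies; in the paper the fitted ellipse is then mapped back through the inverse warp to give the final prostate contour.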
Affiliation(s)
- Sara Badiei
- Department of Electrical and Computer Engineering, University of British Columbia, 2356 Main Mall, Vancouver, BC, V6T 1Z4, Canada.
|
44
|
Sahba F, Tizhoosh HR, Salama MM. A coarse-to-fine approach to prostate boundary segmentation in ultrasound images. Biomed Eng Online 2005; 4:58. [PMID: 16219098 PMCID: PMC1266388 DOI: 10.1186/1475-925x-4-58] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2005] [Accepted: 10/11/2005] [Indexed: 11/13/2022] Open
Abstract
Background: In this paper, a novel method for prostate segmentation in transrectal ultrasound images is presented.
Methods: A segmentation procedure consisting of four main stages is proposed. In the first stage, a locally adaptive contrast enhancement method is used to generate a well-contrasted image. In the second stage, this enhanced image is thresholded to extract an area containing the prostate (or large portions of it). Morphological operators are then applied to obtain a point inside this area. A Kalman estimator is then employed to distinguish the boundary from irrelevant parts (usually caused by shadow) and to generate a coarsely segmented version of the prostate. In the third stage, dilation and erosion operators are applied to extract outer and inner boundaries from the coarse estimate, and fuzzy membership functions describing regional and gray-level information are employed to selectively enhance the contrast within the prostate region. In the last stage, the prostate boundary is extracted using strong edges obtained from the selectively enhanced image and information from the vicinity of the coarse estimate.
Results: A total average similarity of 98.76% (± 0.68) with gold standards was achieved.
Conclusion: The proposed approach represents a robust and accurate method for prostate segmentation.
Affiliation(s)
- Farhang Sahba
- Medical Instrument Analysis and Machine Intelligence Group, University of Waterloo, Waterloo, Canada
- Department of Systems Design Engineering, 200 University Avenue West, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
- Hamid R Tizhoosh
- Medical Instrument Analysis and Machine Intelligence Group, University of Waterloo, Waterloo, Canada
- Department of Systems Design Engineering, 200 University Avenue West, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
- Magdy M Salama
- Medical Instrument Analysis and Machine Intelligence Group, University of Waterloo, Waterloo, Canada
- Department of Electrical and Computer Engineering, 200 University Avenue West, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
|
45
|
Siadat MR, Soltanian-Zadeh H, Fotouhi F, Elisevich K. Content-based image database system for epilepsy. Comput Methods Programs Biomed 2005; 79:209-26. [PMID: 15955590 DOI: 10.1016/j.cmpb.2005.03.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2004] [Revised: 01/02/2005] [Accepted: 03/28/2005] [Indexed: 05/03/2023]
Abstract
We have designed and implemented a human brain multi-modality database system with content-based image management, navigation and retrieval support for epilepsy. The system consists of several modules including a database backbone, brain structure identification and localization, segmentation, registration, visual feature extraction, clustering/classification and query modules. Our newly developed anatomical landmark localization and brain structure identification method facilitates navigation through an image data and extracts useful information for segmentation, registration and query modules. The database stores T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data. We confine the visual feature extractors within anatomical structures to support semantically rich content-based procedures. The proposed system serves as a research tool to evaluate a vast number of hypotheses regarding the condition such as resection of the hippocampus with a relatively small volume and high average signal intensity on FLAIR. Once the database is populated, using data mining tools, partially invisible correlations between different modalities of data, modeled in database schema, can be discovered. The design and implementation aspects of the proposed system are the main focus of this paper.
Affiliation(s)
- Mohammad-Reza Siadat
- Radiology Image Analysis Laboratory, Department of Diagnostic Radiology, Henry Ford Health System, Detroit, MI 48202, USA.
|
46
|
Abstract
Knowing the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy, a commonly used prostate cancer treatment method. The prostate boundary must be segmented before a dose plan can be obtained. However, manual segmentation is arduous and time consuming. This paper introduces a semi-automatic segmentation algorithm based on the dyadic wavelet transform (DWT) and the discrete dynamic contour (DDC). A spline interpolation method is used to determine the initial contour based on four user-defined initial points. The DDC model then refines the initial contour based on the approximate coefficients and the wavelet coefficients generated using the DWT. The DDC model is executed under two settings. The coefficients used in these two settings are derived using smoothing functions with different sizes. A selection rule is used to choose the best contour based on the contours produced in these two settings. The accuracy of the final contour produced by the proposed algorithm is evaluated by comparing it with the manual contour outlined by an expert observer. A total of 114 2D TRUS images taken for six different patients scheduled for brachytherapy were segmented using the proposed algorithm. The average difference between the contour segmented using the proposed algorithm and the manually outlined contour is less than 3 pixels.
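The contour-initialization step above, a closed spline through four user-selected boundary points, can be sketched as follows. A chord-length-parameterized periodic cubic spline stands in for the paper's interpolation scheme (an assumption, not necessarily their exact method), and the four points are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Four "clicked" points roughly on a prostate boundary (illustrative).
pts = np.array([[40.0, 10.0], [10.0, 42.0], [-38.0, 8.0], [0.0, -30.0]])

# Close the contour and parameterize by cumulative chord length.
closed = np.vstack([pts, pts[:1]])
d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(closed, axis=0), axis=1))]

# Periodic cubic spline through the closed point sequence.
spline = CubicSpline(d, closed, bc_type='periodic', axis=0)

u = np.linspace(0, d[-1], 200)
contour = spline(u)  # dense initial contour fed to the DDC refinement
```

The dense contour passes exactly through the four initial points and closes on itself, giving the DDC model a smooth starting boundary to refine against the wavelet coefficients.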
Affiliation(s)
- Bernard Chiu
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada.
|
47
|
Freedman D, Radke RJ, Zhang T, Jeong Y, Lovelock DM, Chen GTY. Model-based segmentation of medical imagery by matching distributions. IEEE Trans Med Imaging 2005; 24:281-292. [PMID: 15754979 DOI: 10.1109/tmi.2004.841228] [Citation(s) in RCA: 56] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The segmentation of deformable objects from three-dimensional (3-D) images is an important and challenging problem, especially in the context of medical imagery. We present a new segmentation algorithm based on matching probability distributions of photometric variables that incorporates learned shape and appearance models for the objects of interest. The main innovation over similar approaches is that there is no need to compute a pixelwise correspondence between the model and the image. This allows for a fast, principled algorithm. We present promising results on difficult imagery for 3-D computed tomography images of the male pelvis for the purpose of image-guided radiotherapy of the prostate.
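The distribution-matching idea above can be sketched without any pixelwise correspondence: a candidate segmentation is scored by comparing the intensity distribution inside it against a learned interior distribution. KL divergence and the synthetic intensities below are illustrative choices, not the paper's exact photometric variables.

```python
import numpy as np

rng = np.random.default_rng(3)
bins = np.linspace(0, 255, 33)

def histogram(samples):
    """Normalized intensity histogram, smoothed slightly to avoid zeros."""
    h, _ = np.histogram(samples, bins=bins)
    h = h + 1e-6
    return h / h.sum()

def kl(p, q):
    """KL divergence between two normalized histograms."""
    return float(np.sum(p * np.log(p / q)))

# Learned interior model: bright-ish organ intensities.
model = histogram(rng.normal(120, 20, 5000))
# Two candidate regions: one organ-like, one sampled from darker background.
good = histogram(rng.normal(121, 21, 2000))
bad = histogram(rng.normal(60, 25, 2000))

score_good, score_bad = kl(model, good), kl(model, bad)  # lower is better
```

Minimizing such a score over the learned shape space drives the model toward regions whose photometric statistics match the training organs, which is the essence of the correspondence-free matching the paper proposes.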
Affiliation(s)
- Daniel Freedman
- Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
|
48
|
Betrouni N, Vermandel M, Pasquier D, Maouche S, Rousseau J. Segmentation of abdominal ultrasound images of the prostate using a priori information and an adapted noise filter. Comput Med Imaging Graph 2005; 29:43-51. [PMID: 15710540 DOI: 10.1016/j.compmedimag.2004.07.007] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2004] [Accepted: 07/15/2004] [Indexed: 11/18/2022]
Abstract
This article discusses a method for the automatic segmentation of trans-abdominal ultrasound images of the prostate. Segmentation begins with the application of a filter to enhance the contours without modifying the image information. It combines adaptive morphological filtering and median filtering to detect the noise-containing regions and smooth them. A heuristic optimization algorithm searches for the contour initialized from a prostate model. The performance of the algorithm was tested by comparing the resulting contours with those obtained by manual segmentation. The average distance between the contours was 2.5 mm and the average coverage index was 93%.
Affiliation(s)
- Nacim Betrouni
- Laboratoire de Biophysique, Centre Hospitalier Universitaire de Lille, Institut de Technologie Médicale, UPRES EA 1049, Pavillon Vancostenobel, CHRU 59037 Lille, France
|
49
|
Zaim A. Automatic Segmentation of the Prostate from Ultrasound Data Using Feature-Based Self Organizing Map. Image Analysis 2005. [DOI: 10.1007/11499145_127] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
|
50
|
Petersch B, Hadwiger M, Hauser H, Hönigmann D. Real time computation and temporal coherence of opacity transfer functions for direct volume rendering of ultrasound data. Comput Med Imaging Graph 2004; 29:53-63. [PMID: 15710541 DOI: 10.1016/j.compmedimag.2004.09.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2004] [Accepted: 09/22/2004] [Indexed: 11/20/2022]
Abstract
Opacity transfer function (OTF) generation for direct volume rendering of medical image data is an intensely discussed subject. Several automatic methods exist for CT and MRI data, which are not apt for ultrasound data, mainly due to its low signal-to-noise ratio. Furthermore, ultrasound (US) imaging is able to produce time-varying 3D datasets in real time thus opening the door to 4D visualization. However, OTF design for 4D datasets has not been exhaustively discussed until now. We present an efficient solution to generate an optimized OTF for a given 3DUS dataset in real time. Our method results in excellent visualization which we demonstrate using 3D fetus datasets. Finally, we discuss the applicability of our method to 4DUS visualization.
Affiliation(s)
- Bernhard Petersch
- Advanced Computer Vision GmbH (ACV), Tech Gate Vienna, Donau-City-Strasse 1, A-1220 Vienna, Austria.
|