1. Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound in Medicine & Biology 2025; 51:189-209. PMID: 39551652. DOI: 10.1016/j.ultrasmedbio.2024.10.005.
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, and early diagnosis is crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To provide physicians with more accurate and efficient computer-assisted diagnosis and intervention, many image processing algorithms for TRUS have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades necessitates a comprehensive summary. Accordingly, this survey provides a narrative review of the field, outlining the evolution of image processing methods in the context of TRUS image analysis and highlighting their relevant contributions. The survey also discusses current challenges and suggests future research directions to advance the field further.
Affiliations
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
2. Goryachev I, Tresansky AP, Ely GT, Chrzanowski SM, Nagy JA, Rutkove SB, Anthony BW. Comparison of Quantitative Ultrasound Methods to Classify Dystrophic and Obese Models of Skeletal Muscle. Ultrasound in Medicine & Biology 2022; 48:1918-1932. PMID: 35811236. DOI: 10.1016/j.ultrasmedbio.2022.05.022.
Abstract
In this study, we compared multiple quantitative ultrasound metrics for differentiating muscle in 20 healthy, 10 dystrophic and 10 obese mice. High-frequency ultrasound scans were acquired of dystrophic (D2-mdx), obese (db/db) and control mouse hindlimbs. A total of 248 image features were extracted from each scan, using brightness-mode statistics, Canny edge detection metrics, Haralick features, envelope statistics and radiofrequency statistics. Naïve Bayes and other classifiers were trained on single features and on pairs of features. The a parameter of the homodyned-K distribution at 40 MHz achieved the best univariate classification (accuracy = 85.3%). A maximum classification accuracy of 97.7% was achieved using a logistic regression classifier on the feature pair of a2 (K distribution) at 30 MHz and brightness-mode variance at 40 MHz. Dystrophic and obese mice have muscle with distinct acoustic properties and can be classified to a high level of accuracy using a combination of multiple features.
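To make the classification step concrete, here is a minimal sketch (not the authors' code) of a univariate Gaussian naïve-Bayes classifier of the kind trained on single features above; the feature values are invented for illustration.

```python
import math

def fit_gaussian(values):
    """Estimate the mean and (population) standard deviation of a 1-D feature."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, math.sqrt(var)

def log_likelihood(x, mu, sigma):
    """Log of the Gaussian pdf at x."""
    return -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, class_params):
    """Assign x to the class with the highest Gaussian log-likelihood (equal priors assumed)."""
    return max(class_params, key=lambda c: log_likelihood(x, *class_params[c]))

# Hypothetical per-scan values of one QUS feature (not from the paper).
dystrophic = [0.82, 0.91, 0.88, 0.95, 0.85]
obese = [0.40, 0.35, 0.47, 0.42, 0.38]

params = {"dystrophic": fit_gaussian(dystrophic), "obese": fit_gaussian(obese)}
print(classify(0.90, params))
```

A bivariate classifier, as in the paper's best feature pair, would simply sum the per-feature log-likelihoods under a naïve independence assumption.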
Affiliations
- Ivan Goryachev
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Anne Pigula Tresansky
- Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Gregory Tsiang Ely
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Stephen M Chrzanowski
- Department of Neurology, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Janice A Nagy
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Seward B Rutkove
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Brian W Anthony
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
3. Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artificial Intelligence Review 2022. DOI: 10.1007/s10462-022-10179-4.
4. Autonomous Prostate Segmentation in 2D B-Mode Ultrasound Images. Applied Sciences (Basel) 2022. DOI: 10.3390/app12062994.
Abstract
Prostate brachytherapy is a treatment for prostate cancer; during procedure planning, ultrasound images of the prostate are acquired. The prostate must be segmented in each of the ultrasound images, and to assist with this step, an autonomous prostate segmentation algorithm is proposed. The prostate contouring system presented here is based on a novel superpixel algorithm, whereby pixels in the ultrasound image are grouped into superpixel regions optimized according to statistical similarity measures, so that the various structures within the ultrasound image can be differentiated. An active shape contour model of the prostate is developed and then used to delineate the prostate within the image based on the superpixel regions. Before segmentation, this contour model was fit to a series of point-based, clinician-segmented prostate contours exported from conventional prostate brachytherapy planning software, yielding a statistical model of prostate shape. The algorithm was evaluated on nine sets of in vivo prostate ultrasound images against a clinician's manually segmented contours, achieving an average volume difference of 4.49 mL (10.89%).
5. Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Applied Sciences (Basel) 2022. DOI: 10.3390/app12031390.
Abstract
Estimating the prostate volume with ultrasound offers many advantages, such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. Because experts usually consider end-to-end automatic volume-estimation procedures to be non-transparent and uninterpretable, we propose an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, the system detects four diameter endpoints in the transverse and two in the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method designed to address the characteristic problems of AUS images. We assembled a novel prostate AUS data set from 305 patients, with both transverse and sagittal planes; the data set includes MRI images for 75 of these patients, and all data were manually marked by at least one expert. Extensive experiments on this data set showed that the proposed system's volume estimates fall within the range of the experts' estimates, indicating that the system can be used in clinical practice.
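The classical procedure followed above reduces to the standard ellipsoid formula, V = (pi/6) * L * W * H, applied to the three measured diameters. A minimal sketch, with hypothetical diameters:

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Standard ellipsoid prostate volume: V = pi/6 * L * W * H.
    Diameters in cm give a volume in cm^3, i.e., mL."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

# Hypothetical diameters: width and height from the transverse plane,
# length from the sagittal plane.
print(round(ellipsoid_volume_ml(4.0, 5.0, 3.5), 1))
```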
6. Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Transactions on Medical Imaging 2019; 38:2768-2778. PMID: 31021793. DOI: 10.1109/tmi.2019.2913184.
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is essential for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging because of the missing or ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS, fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). The attention module selectively leverages the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at the shallow layers of the CNN and enriching the features at the deep layers with more prostate detail. Experimental results on challenging 3D TRUS volumes show that the method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy for aggregating multi-level deep features and has the potential to be used in other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
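A schematic illustration (not the authors' DAF3D code) of the core idea of attention-weighted fusion of multi-level features, here reduced to softmax-weighted averaging of per-layer feature vectors; the feature values and relevance scores are invented:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_fusion(layer_features, layer_scores):
    """Fuse per-layer feature vectors with softmax attention weights.
    All vectors share one length, as after resampling multi-level CNN
    features to a common resolution."""
    weights = softmax(layer_scores)
    fused = [0.0] * len(layer_features[0])
    for w, feat in zip(weights, layer_features):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return weights, fused

# Hypothetical 4-level features (3 channels each) and learned relevance scores.
features = [[0.2, 0.1, 0.0], [0.5, 0.4, 0.3], [0.9, 0.8, 0.7], [0.6, 0.5, 0.4]]
scores = [0.1, 0.5, 2.0, 1.0]
w, fused = attentive_fusion(features, scores)
print([round(x, 3) for x in w])
```

In the paper the attention weights are learned per spatial location; here a single score per layer keeps the sketch minimal.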
7. Hambarde P, Talbar SN, Sable N, Mahajan A, Chavan SS, Thakur M. Radiomics for peripheral zone and intra-prostatic urethra segmentation in MR imaging. Biomedical Signal Processing and Control 2019; 51:19-29. DOI: 10.1016/j.bspc.2019.01.024.
8. van Sloun RJG, Wildeboer RR, Mannaerts CK, Postema AW, Gayet M, Beerlage HP, Salomon G, Wijkstra H, Mischi M. Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy. European Urology Focus 2019; 7:78-85. PMID: 31028016. DOI: 10.1016/j.euf.2019.04.009.
Abstract
BACKGROUND Although recent advances in multiparametric magnetic resonance imaging (MRI) have led to an increase in MRI-transrectal ultrasound (TRUS) fusion prostate biopsies, these are time consuming, laborious, and costly. A deep-learning approach could improve prostate segmentation. OBJECTIVE To exploit deep learning to perform automatic, real-time prostate (zone) segmentation on TRUS images from different scanners. DESIGN, SETTING, AND PARTICIPANTS Three datasets with TRUS images were collected at different institutions, using an iU22 (Philips Healthcare, Bothell, WA, USA), a Pro Focus 2202a (BK Medical), and an Aixplorer (SuperSonic Imagine, Aix-en-Provence, France) ultrasound scanner. The datasets contained 436 images from 181 men. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Manual delineations from an expert panel were used as ground truth. The (zonal) segmentation performance was evaluated in terms of pixel-wise accuracy, Jaccard index, and Hausdorff distance. RESULTS AND LIMITATIONS The developed deep-learning approach significantly improved prostate segmentation compared with a conventional automated technique, reaching a median accuracy of 98% (95% confidence interval 95-99%), a Jaccard index of 0.93 (0.80-0.96), and a Hausdorff distance of 3.0 (1.3-8.7) mm. Zonal segmentation yielded pixel-wise accuracies of 97% (95-99%) and 98% (96-99%) for the peripheral and transition zones, respectively. Supervised domain adaptation resulted in retention of high performance when applied to images from different ultrasound scanners (p > 0.05). Moreover, the algorithm's assessment of its own segmentation performance showed a strong correlation with the actual segmentation performance (Pearson's correlation 0.72, p < 0.001), indicating that possibly incorrect segmentations can be identified swiftly. CONCLUSIONS Fusion-guided prostate biopsies, targeting suspicious lesions on MRI using TRUS, are increasingly performed. The requirement for (semi)manual prostate delineation places a substantial burden on clinicians. Deep learning provides a means for fast and accurate (zonal) prostate segmentation of TRUS images that translates to different scanners. PATIENT SUMMARY Artificial intelligence for automatic delineation of the prostate on ultrasound was shown to be reliable and applicable to different scanners. This method can, for example, be applied to speed up, and possibly improve, guided prostate biopsies using magnetic resonance imaging-transrectal ultrasound fusion.
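The Jaccard index and pixel-wise accuracy used as outcome measures above are standard overlap metrics; a minimal sketch on toy binary masks (not the study's evaluation code):

```python
def jaccard_index(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary masks
    given as flat 0/1 lists."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 or b == 1)
    return inter / union if union else 1.0

def pixel_accuracy(mask_a, mask_b):
    """Fraction of pixels on which the two masks agree."""
    return sum(1 for a, b in zip(mask_a, mask_b) if a == b) / len(mask_a)

# Toy 4x4 masks, flattened: predicted vs. expert delineation.
pred   = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0]
expert = [0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
print(jaccard_index(pred, expert), pixel_accuracy(pred, expert))
```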
Affiliations
- Ruud J G van Sloun
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Rogier R Wildeboer
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Christophe K Mannaerts
- Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Arnoud W Postema
- Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Maudy Gayet
- Department of Urology, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- Harrie P Beerlage
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Georg Salomon
- Martini Klinik-Prostate Cancer Center, University Hospital Hamburg Eppendorf, Hamburg, Germany
- Hessel Wijkstra
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Massimo Mischi
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
9. Kim B, Kim KC, Park Y, Kwon JY, Jang J, Seo JK. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images. Physiological Measurement 2018; 39:105007. PMID: 30226815. DOI: 10.1088/1361-6579/aae255.
Abstract
OBJECTIVE Obstetricians mainly use ultrasound imaging for fetal biometric measurements, but such measurements are cumbersome, so there is an urgent need for automatic biometric estimation. Automated analysis of ultrasound images is complicated by the patient-specific, operator-dependent, and machine-specific characteristics of such images. APPROACH This paper proposes a method for automatic fetal biometry estimation from 2D ultrasound data, built from several processing steps, each using a specially designed convolutional neural network (CNN) or U-Net. These machine-learning techniques take clinicians' decisions, anatomical structures, and the characteristics of ultrasound images into account. The proposed method is divided into three steps: initial abdominal circumference (AC) estimation, AC measurement, and plane acceptance checking. MAIN RESULTS A CNN is used to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein), and a Hough transform is used to obtain an initial estimate of the AC. These data are applied to other CNNs to estimate the spine position and bone regions, and the resulting information is used to determine the final AC. After the AC is determined, a U-Net and a classification CNN check whether the image is suitable for AC measurement. Finally, the efficacy of the proposed method is validated on clinical data. SIGNIFICANCE Our method achieved a Dice similarity metric of [Formula: see text] for AC measurement and an accuracy of 87.10% for the acceptance check of the fetal abdominal standard plane.
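The circle Hough transform used above for the initial AC estimate accumulates votes from edge points over candidate circle parameters. A toy accumulator on synthetic edge points (illustrative only, not the paper's implementation):

```python
from collections import Counter

def hough_circles(edge_points, centers, radii):
    """Minimal circle Hough transform: every edge point votes for each
    candidate (center, radius) it is consistent with; the best-supported
    circle wins."""
    votes = Counter()
    for (x, y) in edge_points:
        for (cx, cy) in centers:
            r = round(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5)
            if r in radii:
                votes[(cx, cy, r)] += 1
    return votes.most_common(1)[0]  # ((cx, cy, r), n_votes)

# Synthetic edge points lying exactly on a circle of radius 5 about (10, 10).
offsets = [(5, 0), (-5, 0), (0, 5), (0, -5), (3, 4), (4, 3), (-3, 4), (-4, 3),
           (3, -4), (4, -3), (-3, -4), (-4, -3)]
edges = [(10 + dx, 10 + dy) for dx, dy in offsets]
candidates = [(cx, cy) for cx in range(8, 13) for cy in range(8, 13)]
print(hough_circles(edges, candidates, radii={3, 4, 5, 6}))
```

The AC then follows from the winning radius as 2 * pi * r (scaled by the pixel spacing).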
Affiliations
- Bukweon Kim
- Department of Computational Science and Engineering, Yonsei University, Seoul 03722, Republic of Korea
10. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy. Medical Image Analysis 2018; 48:107-116. PMID: 29886268. DOI: 10.1016/j.media.2018.05.010.
Abstract
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state of the art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images, and automatic or semi-automatic segmentation is typically performed offline before the biopsy procedure begins. In this paper, we present a deep neural network based technique for real-time prostate segmentation during the biopsy procedure, paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. A key architectural contribution is the use of residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information, and we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. The architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison with a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.
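The Dice similarity coefficient reported above is the standard overlap measure 2|A ∩ B| / (|A| + |B|); a small self-contained sketch on toy masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Toy 5x3 masks, flattened: predicted vs. reference segmentation.
pred  = [0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]
label = [0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
print(round(dice_coefficient(pred, label), 3))
```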
11. Puchalski RB, Shah N, Miller J, Dalley R, Nomura SR, Yoon JG, et al. An anatomic transcriptional atlas of human glioblastoma. Science 2018; 360:660-663. PMID: 29748285. PMCID: PMC6414061. DOI: 10.1126/science.aaf2666.
Abstract
Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.
Affiliations
- Ralph B Puchalski
- Allen Institute for Brain Science, Seattle, WA 98109, USA; Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Nameeta Shah
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA; Mazumdar Shaw Center for Translational Research, Bangalore 560099, India
- Jeremy Miller
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Rachel Dalley
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Steve R Nomura
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Jae-Guen Yoon
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Michael Lankerovich
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Kris Bickley
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Andrew F Boe
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Krissy Brouner
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Mike Chapin
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Suvro Datta
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Nick Dee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Tsega Desta
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Tim Dolbeare
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Amanda Ebbert
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- David Feng
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Xu Feng
- Radia Inc., Lynnwood, WA 98036, USA
- Michael Fisher
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Garrett Gee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Jeff Goldy
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Guangyu Gu
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Nika Hejazinia
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- John Hohmann
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Parvinder Hothi
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Robert Howard
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Kevin Joines
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Ali Kriedberg
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Leonard Kuan
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Chris Lau
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Felix Lee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Hwahyung Lee
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Tracy Lemon
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Fuhui Long
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Naveed Mastan
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Erika Mott
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Chantal Murthy
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Kiet Ngo
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Eric Olson
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Melissa Reding
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Zack Riley
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- David Rosen
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- David Sandman
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Andrew Sodt
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Aaron Szafer
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Wayne Wakeman
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Don Marsh
- White Marsh Forests, Seattle, WA 98119, USA
- Robert C Rostomily
- Department of Neurosurgery, Institute for Stem Cell and Regenerative Medicine, University of Washington School of Medicine, Seattle, WA 98195, USA; Department of Neurological Surgery, Houston Methodist Hospital and Research Institute, Houston, TX 77030, USA
- Lydia Ng
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Chinh Dang
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Allan Jones
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Haley R Gittleman
- Case Comprehensive Cancer Center, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Jill S Barnholtz-Sloan
- Case Comprehensive Cancer Center, Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Patrick J Cimino
- Department of Pathology, Division of Neuropathology, University of Washington School of Medicine, Seattle, WA 98104, USA
- Megha S Uppin
- Nizam's Institute of Medical Sciences, Punjagutta, Hyderabad 500082, India
- C Dirk Keene
- Department of Pathology, Division of Neuropathology, University of Washington School of Medicine, Seattle, WA 98104, USA
- Justin D Lathia
- Department of Cellular and Molecular Medicine, Cleveland Clinic, Cleveland, OH 44195, USA
- Michael E Berens
- TGen, Translational Genomics Research Institute, Phoenix, AZ 85004, USA
- Antonio Iavarone
- Institute for Cancer Genetics, Columbia University, New York, NY 10032, USA; Department of Neurology, Columbia University, New York, NY 10032, USA; Department of Pathology, Columbia University, New York, NY 10032, USA
- Amy Bernard
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Ed Lein
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Charles Cobbs
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
- Greg D Foltz
- Ben and Catherine Ivy Center for Advanced Brain Tumor Treatment, Swedish Neuroscience Institute, Seattle, WA 98122, USA
12. Zeng Q, Samei G, Karimi D, Kesch C, Mahdavi SS, Abolmaesumi P, Salcudean SE. Prostate segmentation in transrectal ultrasound using magnetic resonance imaging priors. International Journal of Computer Assisted Radiology and Surgery 2018; 13:749-757. DOI: 10.1007/s11548-018-1742-6.
13. Jang J, Park Y, Kim B, Lee SM, Kwon JY, Seo JK. Automatic Estimation of Fetal Abdominal Circumference From Ultrasound Images. IEEE Journal of Biomedical and Health Informatics 2017; 22:1512-1520. PMID: 29990257. DOI: 10.1109/jbhi.2017.2776116.
Abstract
Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because the measurement process is time consuming, there has been great demand for automatic estimation. However, automated analysis of ultrasound images is complicated because the images are patient specific, operator dependent, and machine specific. Among the various types of fetal biometry, accurate estimation of the abdominal circumference (AC) is especially difficult to automate because, compared with other parameters, the abdomen has low contrast against its surroundings, nonuniform contrast, and an irregular shape. We propose a method for automatic estimation of the fetal AC from two-dimensional ultrasound data through a specially designed convolutional neural network (CNN), which takes into account doctors' decision process, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses the CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transformation for measuring the AC. We test the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively small training samples, the proposed CNN provides classification results sufficient for AC estimation through the Hough transformation. The method is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images deteriorated by shadowing artifacts. In our acceptance-check experiments, the accuracies were 0.809 and 0.771 against expert 1 and expert 2, respectively, whereas the accuracy between the two experts was 0.905. However, in cases of an oversized fetus, when the amniotic fluid is not observed, or when the abdominal area is distorted, the method could not correctly estimate the AC.
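Once a circle or ellipse has been fitted to the abdomen, the AC follows from its axes. A sketch using Ramanujan's first approximation to the ellipse circumference; the semi-axes below are hypothetical, and the paper does not specify this particular formula:

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's first approximation to the circumference of an ellipse
    with semi-axes a and b (exact for a == b)."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Hypothetical semi-axes (mm) of an ellipse fitted to the fetal abdomen.
print(round(ellipse_circumference(60.0, 50.0), 1))
```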
14
Phase based distance regularized level set for the segmentation of ultrasound kidney images. Pattern Recognit Lett 2017. [DOI: 10.1016/j.patrec.2016.12.002]
15
Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation. Comput Biol Med 2016; 74:74-90. [PMID: 27208705] [DOI: 10.1016/j.compbiomed.2016.05.002]
Abstract
Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false-edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside the contour. The transverse image on which the highest accuracy was attained was chosen as the initial slice for the propagation process. Evaluation was performed on 336 transverse images from 15 prostates, including images acquired at the mid-gland, base and apex regions. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79 ± 0.26 mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland, but also at the base and apex regions.
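The self-assessment idea above (scoring a candidate contour by the intensity contrast between the regions inside and outside it) can be sketched as below. The paper's exact score is not reproduced here; the absolute mean-intensity difference is an assumed stand-in.

```python
import numpy as np

def contour_self_assessment(image, mask):
    """Score a candidate contour by the contrast between the mean
    intensity inside the contour (mask == True) and outside it.
    The absolute difference is an assumed stand-in for the paper's score."""
    return abs(float(image[mask].mean()) - float(image[~mask].mean()))

# toy transverse slice: a dark "prostate" disc on a brighter background
yy, xx = np.mgrid[:64, :64]
img = np.full((64, 64), 180.0)
good = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2   # contour on the gland
img[good] = 60.0
bad = (yy - 10) ** 2 + (xx - 10) ** 2 < 8 ** 2     # misplaced contour

# the candidate contour with the highest score would start the propagation
best = max([good, bad], key=lambda m: contour_self_assessment(img, m))
```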
16
Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework. Proc SPIE 2016; 9784. [PMID: 31467459] [DOI: 10.1117/12.2216396]
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (the gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
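The Dice overlap coefficient behind the reported 89.7% is a standard agreement measure between two binary masks; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice overlap coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy example: a 10x10 "automatic" square vs a manual one shifted by one row
auto = np.zeros((20, 20), bool); auto[5:15, 5:15] = True
manual = np.zeros((20, 20), bool); manual[6:16, 5:15] = True
d = dice(auto, manual)   # overlap of 90 voxels out of 100 + 100
```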
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute
- Peter J Rossi, Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B Jani, Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute

17
Chi J, Eramian M. Enhancement of textural differences based on morphological component analysis. IEEE Trans Image Process 2015; 24:2671-2684. [PMID: 25935032] [DOI: 10.1109/tip.2015.2427514]
Abstract
This paper proposes a new texture enhancement method that uses an image decomposition allowing different visual characteristics of texture to be represented by separate components, in contrast with previous methods, which either enhance texture indirectly or represent all texture information using a single image component. Our method is intended as a preprocessing step prior to texture-based image segmentation algorithms. It uses a modification of morphological component analysis (MCA) that separates texture into multiple morphological components, each representing a different visual characteristic of texture. We select four such texture characteristics and propose new dictionaries to extract these components using MCA. We then propose procedures for modifying each texture component and recombining them to produce a texture-enhanced image. We applied our method as a preprocessing step prior to a number of texture-based segmentation methods and compared the accuracy of the results, finding that our method produced results superior to comparator methods for all segmentation algorithms tested. We also demonstrate by example the main mechanism by which our method produces superior results: it causes the clusters of local texture features of each distinct image texture to mutually diverge within the multidimensional feature space far more than the comparator enhancement methods do.
18
Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE Trans Med Imaging 2015; 34:1321-1335. [PMID: 25576565] [DOI: 10.1109/tmi.2015.2388699]
Abstract
Accurate segmentation is usually crucial in transrectal ultrasound (TRUS) image-based prostate diagnosis; however, it is always hampered by heavy speckle. Contrary to the traditional view that speckle is adverse to segmentation, we exploit intrinsic properties induced by speckle to facilitate the task, based on the observation that the sizes and orientations of speckles provide salient cues for determining the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, a rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every individual feature vector is sparsely represented, and representation residuals are obtained. The residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. The segmentation is then fine-tuned by a novel level-set model, which integrates (1) the prostate shape prior, (2) the dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method is validated on two 2-D image datasets obtained from two different sonographic imaging systems, with mean absolute distances on the mid-gland images of only 1.06 ± 0.53 mm and 1.25 ± 0.77 mm, respectively. The method is also extended to segment apex and base images, producing competitive results over the state of the art.
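The sparse-representation residual used as a cue above can be sketched with a small greedy coder: feature vectors well explained by the dictionary leave a small residual, outliers a large one. The random dictionary, the greedy atom selection, and the parameter `k` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def representation_residual(D, x, k=3):
    """Greedy sparse coding: pick up to k atoms (columns of D) by
    correlation with the current residual, refit by least squares,
    and return the final residual norm."""
    x = x.astype(float)
    r, idx = x.copy(), []
    for _ in range(k):
        scores = np.abs(D.T @ r)
        scores[idx] = -np.inf                  # do not reselect an atom
        idx.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ coef
    return float(np.linalg.norm(r))

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 8))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
in_span = 2.0 * D[:, 1]                        # exactly one dictionary atom
outlier = rng.normal(size=16)                  # generic vector, off-dictionary
r_in = representation_residual(D, in_span)
r_out = representation_residual(D, outlier)
```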
19
Nouranian S, Mahdavi SS, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. A multi-atlas-based segmentation framework for prostate brachytherapy. IEEE Trans Med Imaging 2015; 34:950-961. [PMID: 25474806] [DOI: 10.1109/tmi.2014.2371823]
Abstract
Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
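A much-simplified sketch of the multi-atlas scheme described above: atlases are ranked by an image-similarity metric (normalized cross-correlation here, an assumed stand-in for the paper's pairwise atlas agreement factor), the dataset is pruned to the best matches, and their label maps are majority-voted.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def fuse_atlases(target, atlas_images, atlas_labels, keep=2):
    """Rank atlases by similarity to the target, keep the best `keep`,
    and majority-vote their binary label maps."""
    sims = [ncc(target, im) for im in atlas_images]
    order = np.argsort(sims)[::-1][:keep]          # most similar first
    votes = np.mean([atlas_labels[i] for i in order], axis=0)
    return votes >= 0.5

# toy target: a bright 8x8 square on a dark background
target = np.zeros((16, 16)); target[4:12, 4:12] = 1.0
square = target > 0
corner = np.zeros((16, 16), bool); corner[:6, :6] = True
atlas_images = [target.copy(), target.copy(), corner.astype(float)]
atlas_labels = [square, square, corner]   # third atlas is dissimilar and wrong
seg = fuse_atlases(target, atlas_images, atlas_labels, keep=2)
```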
20
Ukwatta E, Yuan J, Buchanan D, Chiu B, Awad J, Qiu W, Parraga G, Fenster A. Three-dimensional segmentation of three-dimensional ultrasound carotid atherosclerosis using sparse field level sets. Med Phys 2013; 40:052903. [PMID: 23635296] [DOI: 10.1118/1.4800797]
Abstract
PURPOSE Three-dimensional ultrasound (3DUS) vessel wall volume (VWV) provides a 3D measurement of carotid artery wall remodeling and atherosclerotic plaque and is sensitive to temporal changes of carotid plaque burden. Unfortunately, although 3DUS VWV provides many advantages compared to measurements of arterial wall thickening or plaque alone, it is still not widely used in research or clinical practice because of the inordinate amount of time required to train observers and to generate 3DUS VWV measurements. In this regard, semiautomated methods for segmentation of the carotid media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) would greatly improve the time to train observers and for them to generate 3DUS VWV measurements with high reproducibility. METHODS The authors describe a 3D algorithm based on a modified sparse field level set method for segmenting the MAB and LIB of the common carotid artery (CCA) from 3DUS images. To the authors' knowledge, the proposed algorithm is the first direct 3D segmentation method, which has been validated for segmenting both the carotid MAB and the LIB from 3DUS images for the purpose of computing VWV. Initialization of the algorithm requires the observer to choose anchor points on each boundary on a set of transverse slices with a user-specified interslice distance (ISD), in which larger ISD requires fewer user interactions than smaller ISD. To address the challenges of the MAB and LIB segmentations from 3DUS images, the authors integrated regional- and boundary-based image statistics, expert initializations, and anatomically motivated boundary separation into the segmentation. The MAB is segmented by incorporating local region-based image information, image gradients, and the anchor points provided by the observer. Moreover, a local smoothness term is utilized to maintain the smooth surface of the MAB. 
The LIB is segmented by constraining its evolution using the already segmented surface of the MAB, in addition to the global region-based information and the anchor points. The algorithm-generated surfaces were sliced and evaluated with respect to manual segmentations on a slice-by-slice basis using 21 3DUS images. RESULTS The authors used ISDs of 1, 2, 3, 4, and 10 mm for algorithm initialization to generate segmentation results. The algorithm-generated accuracy and intraobserver variability results are comparable to those of previous methods, but with fewer user interactions. For example, for an ISD of 3 mm, the algorithm yielded average Dice coefficients of 94.4% ± 2.2% and 90.6% ± 5.0% for the MAB and LIB, respectively, and a coefficient of variation of 6.8% for computing the VWV of the CCA, while requiring only 1.72 min (vs 8.3 min for manual segmentation) per 3DUS image. CONCLUSIONS The proposed 3D semiautomated segmentation algorithm yielded high accuracy and high repeatability while requiring less expert interaction for initialization than previous 2D methods.
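The coefficient of variation quoted for VWV repeatability is the standard deviation of repeated measurements expressed as a percentage of their mean; the measurement values below are hypothetical.

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CV (%) of repeated measurements: sample standard deviation
    as a percentage of the mean."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# hypothetical repeated vessel-wall-volume measurements (mm^3)
vwv = [980.0, 1010.0, 995.0]
cv = coefficient_of_variation(vwv)
```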
Affiliation(s)
- E Ukwatta, Biomedical Engineering Graduate Program and Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7, Canada

21
A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001]
22
Kim SG, Seo YG. A TRUS Prostate Segmentation using Gabor Texture Features and Snake-like Contour. J Inf Process Syst 2013. [DOI: 10.3745/jips.2013.9.1.103]
23
Mahdavi SS, Spadinger I, Chng N, Salcudean SE, Morris WJ. Semiautomatic segmentation for prostate brachytherapy: dosimetric evaluation. Brachytherapy 2013; 12:65-76. [DOI: 10.1016/j.brachy.2011.07.007]
24
Mahdavi SS, Moradi M, Morris WJ, Goldenberg SL, Salcudean SE. Fusion of ultrasound B-mode and vibro-elastography images for automatic 3D segmentation of the prostate. IEEE Trans Med Imaging 2012; 31:2073-2082. [PMID: 22829391] [DOI: 10.1109/tmi.2012.2209204]
Abstract
Prostate segmentation in B-mode images is a challenging task even when done manually by experts. In this paper we propose a 3D automatic prostate segmentation algorithm which makes use of information from both ultrasound B-mode and vibro-elastography data. We exploit the high contrast-to-noise ratio of vibro-elastography images of the prostate, in addition to the commonly used B-mode images, to implement a 2D Active Shape Model (ASM)-based segmentation algorithm on the mid-gland image. The prostate model is deformed by a combination of two measures: the gray-level similarity and the continuity of the prostate edge in both image types. The automatically obtained mid-gland contour is then used to initialize a 3D segmentation algorithm which models the prostate as a tapered and warped ellipsoid. Vibro-elastography images are used in addition to ultrasound images to improve boundary detection. We report Dice similarity coefficients of 0.87 ± 0.07 and 0.87 ± 0.08 comparing the 2D automatic contours with manual contours of two observers on 61 images. For 11 cases, whole-gland volume errors of 10.2 ± 2.2% and 13.5 ± 4.1% and whole-gland volume differences of -7.2 ± 9.1% and -13.3 ± 12.6% between 3D automatic and manual surfaces of two observers are obtained. This is the first validated work showing the fusion of B-mode and vibro-elastography data for automatic 3D segmentation of the prostate.
25
Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput Methods Programs Biomed 2012; 108:262-287. [PMID: 22739209] [DOI: 10.1016/j.cmpb.2012.04.006]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts like shadows pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images. In magnetic resonance (MR) images, however, superior soft-tissue contrast highlights large variability in shape, size and texture information inside the prostate. In contrast, the poor soft-tissue contrast between the prostate and surrounding tissues in computed tomography (CT) images poses its own challenge to accurate prostate segmentation. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that allows us first to group the algorithms and then to point out the main advantages and drawbacks of each strategy. We provide a comprehensive description of the existing methods in all three modalities, highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided. A quantitative comparison of the results as reported in the literature is also presented.
Affiliation(s)
- Soumya Ghose, Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain

26
Akbari H, Fei B. 3D ultrasound image segmentation using wavelet support vector machines. Med Phys 2012; 39:2972-84. [PMID: 22755682] [PMCID: PMC3360689] [DOI: 10.1118/1.4709607]
Abstract
PURPOSE Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy, and segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. METHODS This segmentation method utilizes a statistical shape model, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate prostate and nonprostate tissue. The method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. Weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model, and the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard, and a variety of metrics are used to evaluate the performance of the segmentation method. RESULTS The results from 40 TRUS image volumes of 20 patients show a Dice overlap ratio of 90.3% ± 2.3% and a sensitivity of 87.7% ± 4.9%. CONCLUSIONS The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate.
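The wavelet texture features feeding the W-SVMs can be illustrated with a single-level 2-D Haar decomposition. The subband-energy signature below is a minimal sketch of this kind of feature (the SVM classification stage is omitted), not the authors' exact feature set.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation (LL) and the three detail subbands (LH, HL, HH)."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # average along rows
    d = (img[::2, :] - img[1::2, :]) / 2.0   # detail along rows
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    HL = (a[:, ::2] - a[:, 1::2]) / 2.0
    LH = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def texture_energy(patch):
    """Mean absolute energy of the detail subbands: the kind of wavelet
    texture signature that could feed a voxel classifier."""
    _, LH, HL, HH = haar2d(patch)
    return np.array([np.abs(LH).mean(), np.abs(HL).mean(), np.abs(HH).mean()])

rng = np.random.default_rng(1)
smooth = np.full((16, 16), 0.5)    # homogeneous region: no detail energy
speckly = rng.random((16, 16))     # speckle-like region: high detail energy
```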
Affiliation(s)
- Hamed Akbari, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA

27
Yang X, Fei B. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning. Proc SPIE 2012; 8316:83162O. [PMID: 24027622] [DOI: 10.1117/12.912188]
Abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs), which then segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our automatic and manual segmentations is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
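A Gabor filter bank of the kind described reduces, per orientation, to convolving the image with a Gaussian-windowed sinusoid. The sketch below builds one bank of four orientations and summarizes each response by its mean absolute value; kernel size, wavelength, and the use of FFT-based circular convolution are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real Gabor kernel: a Gaussian-windowed cosine grating at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def filter_bank_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute response per orientation: a tiny stand-in for the
    orthogonal Gabor banks used for texture features."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        # FFT-based circular convolution (zero-padding the kernel to image size)
        K = np.fft.rfft2(k, image.shape)
        I = np.fft.rfft2(image)
        resp = np.fft.irfft2(I * K, image.shape)
        feats.append(float(np.abs(resp).mean()))
    return np.array(feats)

# vertical stripes of period 6 respond most strongly to the theta = 0 filter
img = np.cos(2 * np.pi * np.arange(64) / 6.0)[None, :] * np.ones((64, 1))
f = filter_bank_features(img)
```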
Affiliation(s)
- Xiaofeng Yang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA

28
Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. Proc SPIE 2012. [PMID: 22708023] [DOI: 10.1117/12.912182]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
Affiliation(s)
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329

29
Shao F, Ling KV, Phee L, Ng WS, Xiao D. Efficient 3D prostate surface detection for ultrasound guided robotic biopsy. Int J Hum Robot 2011. [DOI: 10.1142/s0219843606000862]
Abstract
Prostate surface detection from ultrasound images plays a key role in our recently developed ultrasound-guided robotic biopsy system. However, due to the low contrast, speckle noise and shadowing in ultrasound images, this remains a difficult task. In the current system, a 3D prostate surface is reconstructed from a sequence of 2D outlines drawn manually, which is arduous, and the results depend heavily on the user's expertise. This paper presents a new practical method, called Evolving Bubbles, based on the level set method, to semi-automatically detect the prostate surface from transrectal ultrasound (TRUS) images. To produce good results, a few initial bubbles are specified by the user on five particular slices chosen based on the prostate shape. As the initial bubbles evolve along their normal directions, they expand, shrink, merge and split, and are finally attracted to the desired prostate surface. Meanwhile, to remedy the boundary-leaking problem caused by gaps or weak boundaries, domain-specific knowledge of the prostate and statistical information are incorporated into the Evolving Bubbles. We applied the bubbles model to eight 3D and four stacks of 2D TRUS images, and the results show its effectiveness.
Affiliation(s)
- Fan Shao, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, 308433, Singapore
- Keck Voon Ling, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Louis Phee, School of Mechanical and Production Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Wan Sing Ng, School of Mechanical and Production Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
- Di Xiao, Singapore General Hospital, Outram Road, 169608, Singapore

30
Hacihaliloglu I, Abugharbieh R, Hodgson AJ, Rohling RN. Automatic adaptive parameterization in local phase feature-based bone segmentation in ultrasound. Ultrasound Med Biol 2011; 37:1689-1703. [PMID: 21821346] [DOI: 10.1016/j.ultrasmedbio.2011.06.006]
Abstract
Intensity-invariant local phase features based on Log-Gabor filters have recently been shown to produce highly accurate localizations of bone surfaces from three-dimensional (3-D) ultrasound. A key challenge, however, remains in the proper selection of filter parameters, whose values have so far been chosen empirically and kept fixed for a given image. Since Log-Gabor filter responses change widely when the filter parameters are varied, the actual parameter selection can significantly affect the quality of the extracted features. This article presents a novel method for contextual parameter selection that autonomously adapts to image content. Our technique automatically selects the scale, bandwidth and orientation parameters of Log-Gabor filters to optimize local phase symmetry. The proposed approach incorporates principal curvature computed from the Hessian matrix and directional filter banks in a phase scale-space framework. Evaluations performed on carefully designed in vitro experiments demonstrate a 35% improvement in the accuracy of bone surface localization compared with empirically set parameterization. Results from a pilot in vivo study on human subjects, scanned in the operating room, show similar improvements.
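The local phase symmetry that the parameter selection optimizes can be sketched in 1-D: a log-Gabor bandpass response has an even (real) and an odd (imaginary) part, and symmetry is high where the even part dominates, e.g., at a ridge-like bone surface. The centre frequency `f0` and bandwidth ratio `0.55` below are illustrative single-scale choices, not the adaptive parameters of the paper.

```python
import numpy as np

def phase_symmetry(signal, f0=0.08, sigma_ratio=0.55):
    """Single-scale 1-D phase symmetry from a log-Gabor filter:
    symmetry = max(|even| - |odd|, 0) / amplitude, where even and odd
    are the real and imaginary parts of the analytic filter response."""
    n = len(signal)
    F = np.fft.fft(signal - signal.mean())
    freqs = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = freqs > 0
    g[pos] = np.exp(-np.log(freqs[pos] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    resp = np.fft.ifft(F * g * 2)        # one-sided spectrum -> analytic signal
    even, odd = resp.real, resp.imag
    amp = np.abs(resp) + 1e-9
    return np.maximum(np.abs(even) - np.abs(odd), 0.0) / amp

# ridge-like feature (a symmetric bump) centred at sample 100
x = np.arange(256)
sig = np.exp(-((x - 100) ** 2) / (2 * 4.0 ** 2))
ps = phase_symmetry(sig)
```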
Affiliation(s)
- Ilker Hacihaliloglu, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada

31
Mahdavi SS, Moradi M, Wen X, Morris WJ, Salcudean SE. Evaluation of visualization of the prostate gland in vibro-elastography images. Med Image Anal 2011; 15:589-600. [DOI: 10.1016/j.media.2011.03.004]
32
Ghose S, Oliver A, Martí R, Lladó X, Freixenet J, Mitra J, Vilanova JC, Comet-Batlle J, Meriaudeau F. Statistical shape and texture model of quadrature phase information for prostate segmentation. Int J Comput Assist Radiol Surg 2011; 7:43-55. [DOI: 10.1007/s11548-011-0616-y]
33
Ukwatta E, Awad J, Ward AD, Buchanan D, Samarabandu J, Parraga G, Fenster A. Three-dimensional ultrasound of carotid atherosclerosis: semiautomated segmentation using a level set-based method. Med Phys 2011; 38:2479-93. [DOI: 10.1118/1.3574887]
34
Akbari H, Yang X, Halig LV, Fei B. 3D Segmentation of Prostate Ultrasound Images Using Wavelet Transform. Proc SPIE 2011; 7962:79622K. [PMID: 22468205] [DOI: 10.1117/12.878072]
Abstract
The current definitive diagnosis of prostate cancer is by transrectal ultrasound (TRUS)-guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) is located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, the W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. The labeled voxels in the three planes, after post-processing, are overlaid on a prostate probability model, which is created using 10 segmented prostate datasets. Consequently, each voxel has four labels: one each from the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function for each labeling in each region, each voxel is labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
Affiliation(s)
- Hamed Akbari
- Department of Radiology, Emory University, 1841 Clifton Rd, NE, Atlanta, GA, USA 30329
35
Garnier C, Bellanger JJ, Wu K, Shu H, Costet N, Mathieu R, De Crevoisier R, Coatrieux JL. Prostate segmentation in HIFU therapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2011; 30:792-803. [PMID: 21118767 PMCID: PMC3095593 DOI: 10.1109/tmi.2010.2095465] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Prostate segmentation in 3-D transrectal ultrasound images is an important step in the definition of the intra-operative planning of high intensity focused ultrasound (HIFU) therapy. This paper presents two semi-automatic approaches, based on the discrete dynamic contour and on optimal surface detection. Both operate in 3-D and require minimal user interaction. They are considered alone or sequentially combined, with and without post-regularization, and applied to anisotropic and isotropic volumes. Their performance has been evaluated with different metrics on a set of 28 3-D images, by comparison with two expert delineations. For the most efficient algorithm, the symmetric average surface distance was found to be 0.77 mm.
Affiliation(s)
- Carole Garnier
- LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Jean-Jacques Bellanger
- LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Ke Wu
- CRIBS, Centre de Recherche en Information Biomédicale sino-français, INSERM Laboratoire International Associé, Université de Rennes I / SouthEast University, Rennes, France
- LIST, Laboratory of Image Science and Technology, SouthEast University, Si Pai Lou 2, Nanjing 210096, China
- Huazhong Shu
- CRIBS, Centre de Recherche en Information Biomédicale sino-français, INSERM Laboratoire International Associé, Université de Rennes I / SouthEast University, Rennes, France
- LIST, Laboratory of Image Science and Technology, SouthEast University, Si Pai Lou 2, Nanjing 210096, China
- Nathalie Costet
- LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Romain Mathieu
- Service d'urologie, CHU Rennes, Hôpital Pontchaillou, Université de Rennes I, 2 rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
- Renaud De Crevoisier
- LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Département de radiothérapie, CRLCC Eugène Marquis, 35000 Rennes, France
- Jean-Louis Coatrieux
- LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- CRIBS, Centre de Recherche en Information Biomédicale sino-français, INSERM Laboratoire International Associé, Université de Rennes I / SouthEast University, Rennes, France
- Correspondence should be addressed to: Jean-Louis Coatrieux
36
Rotemberg V, Palmeri M, Rosenzweig S, Grant S, Macleod D, Nightingale K. Acoustic Radiation Force Impulse (ARFI) imaging-based needle visualization. ULTRASONIC IMAGING 2011; 33:1-16. [PMID: 21608445 PMCID: PMC3116439 DOI: 10.1177/016173461103300101] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Ultrasound-guided needle placement is widely used in the clinical setting, particularly for central venous catheter placement, tissue biopsy and regional anesthesia. Difficulties with ultrasound guidance in these areas often result from steep needle insertion angles and spatial offsets between the imaging plane and the needle. Acoustic Radiation Force Impulse (ARFI) imaging leads to improved needle visualization because it uses a standard diagnostic scanner to perform radiation force based elasticity imaging, creating a displacement map that displays tissue stiffness variations. The needle visualization in ARFI images is independent of needle-insertion angle and also extends needle visibility out of plane. Although ARFI images portray needles well, they often do not contain the usual B-mode landmarks. Therefore, a three-step segmentation algorithm has been developed to identify a needle in an ARFI image and overlay the needle prediction on a coregistered B-mode image. The steps are: (1) contrast enhancement by median filtration and Laplacian operator filtration, (2) noise suppression through displacement estimate correlation coefficient thresholding and (3) smoothing by removal of outliers and best-fit line prediction. The algorithm was applied to data sets from horizontal 18, 21 and 25 gauge needles between 0-4 mm offset in elevation from the transducer imaging plane and to 18G needles on the transducer axis (in plane) between 10 degrees and 35 degrees from the horizontal. Needle tips were visualized within 2 mm of their actual position for both horizontal needle orientations up to 1.5 mm offset in elevation from the transducer imaging plane and on-axis angled needles between 10 degrees-35 degrees above the horizontal orientation. We conclude that segmented ARFI images overlaid on matched B-mode images hold promise for improved needle visibility in many clinical applications.
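The three-step segmentation above can be sketched as follows (a hypothetical NumPy illustration, not the authors' implementation; `segment_needle`, the 3x3 median size, and the mean-based enhancement threshold are assumptions):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median3(u):
    """3x3 median filter over an edge-padded image."""
    w = sliding_window_view(np.pad(u, 1, mode="edge"), (3, 3))
    return np.median(w.reshape(u.shape + (9,)), axis=-1)

def laplacian(u):
    """Discrete 5-point Laplacian (edge-padded, no wraparound)."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def segment_needle(disp, corr, corr_thresh=0.98):
    """Sketch of the three steps: enhance, suppress noise, fit a line."""
    # (1) contrast enhancement: median filtration then Laplacian operator
    enhanced = laplacian(median3(disp))
    # (2) noise suppression: keep strong responses with high displacement-
    #     estimate correlation coefficients
    mask = (np.abs(enhanced) > np.abs(enhanced).mean()) & (corr >= corr_thresh)
    rows, cols = np.nonzero(mask)
    if rows.size < 2:
        return None
    # (3) smoothing: best-fit line, with one outlier-rejection pass
    slope, intercept = np.polyfit(cols, rows, 1)
    resid = rows - (slope * cols + intercept)
    keep = np.abs(resid) < 2 * resid.std() + 1e-9
    if keep.sum() >= 2:
        slope, intercept = np.polyfit(cols[keep], rows[keep], 1)
    return slope, intercept
```

The returned line (in image coordinates) would then be overlaid on the coregistered B-mode frame.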
Affiliation(s)
- Veronica Rotemberg
- Department of Biomedical Engineering, Duke University, Box 90281, 136 Hudson Hall Durham, NC 27708, USA.
37
Unsupervised 3D Prostate Segmentation Based on Diffusion-Weighted Imaging MRI Using Active Contour Models with a Shape Prior. JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING 2011. [DOI: 10.1155/2011/410912] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Accurate estimation of the prostate location and volume from in vivo images plays a crucial role in various clinical applications. Recently, magnetic resonance imaging (MRI) has been proposed as a promising modality to detect and monitor prostate-related diseases. In this paper, we propose an unsupervised algorithm to segment the prostate in 3D apparent diffusion coefficient (ADC) images derived from diffusion-weighted imaging (DWI) MRI, without the need for a training dataset, whereas previous methods for this purpose require one. We first apply a coarse segmentation to extract the shape information. Then, the shape prior is incorporated into the active contour model. Finally, morphological operations are applied to refine the segmentation results. We apply our method to an MR dataset obtained from three patients and provide segmentation results obtained by our method and an expert. Our experimental results show that the performance of the proposed method is quite successful.
38
A magnetic resonance spectroscopy driven initialization scheme for active shape model based prostate segmentation. Med Image Anal 2010; 15:214-25. [PMID: 21195016 DOI: 10.1016/j.media.2010.09.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2009] [Revised: 09/20/2010] [Accepted: 09/28/2010] [Indexed: 11/22/2022]
Abstract
Segmentation of the prostate boundary on clinical images is useful in a large number of applications, including calculating prostate volume pre- and post-treatment, detecting extra-capsular spread, and creating patient-specific anatomical models. Manual segmentation of the prostate boundary is, however, time consuming and subject to inter- and intra-reader variability. T2-weighted (T2-w) magnetic resonance (MR) structural imaging (MRI) and MR spectroscopy (MRS) have recently emerged as promising modalities for detection of prostate cancer in vivo. MRS data consist of spectral signals measuring relative metabolic concentrations, and the metavoxels near the prostate have distinct spectral signals from metavoxels outside the prostate. Active shape models (ASMs) have become very popular segmentation methods for biomedical imagery; however, they require careful initialization and are extremely sensitive to it. The primary contribution of this paper is a scheme to automatically initialize an ASM for prostate segmentation on endorectal in vivo multi-protocol MRI via automated identification of MR spectra that lie within the prostate. A replicated clustering scheme is employed to distinguish prostatic from extra-prostatic MR spectra in the midgland. The spatial locations of the prostate spectra so identified are used as the initial ROI for a 2D ASM. The midgland initializations are used to define an ROI that is then scaled in 3D to cover the base and apex of the prostate. A multi-feature ASM employing statistical texture features is then used to drive the edge detection, instead of image intensity information alone. Quantitative comparison with another recent ASM initialization method by Cosio showed that our scheme resulted in superior average segmentation performance on a total of 388 2D MRI sections obtained from 32 3D endorectal in vivo patient studies.
Initialization of a 2D ASM via our MRS-based clustering scheme resulted in an average overlap accuracy (true positive ratio) of 0.60, while the scheme of Cosio yielded a corresponding average accuracy of 0.56 over 388 2D MR image sections. During an ASM segmentation, using no initialization resulted in an overlap of 0.53, using the Cosio based methodology resulted in an overlap of 0.60, and using the MRS-based methodology resulted in an overlap of 0.67, with a paired Student's t-test indicating statistical significance to a high degree for all results. We also show that the final ASM segmentation result is highly correlated (as high as 0.90) to the initialization scheme.
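The replicated clustering step, distinguishing prostatic from extra-prostatic spectra, can be approximated in spirit by a plain two-class k-means over the spectral vectors (a sketch; `kmeans2` and its deterministic first/last-sample seeding are illustrative choices, not the paper's procedure):

```python
import numpy as np

def kmeans2(X, n_iter=100):
    """Two-cluster k-means over the row vectors of X.
    Deterministic seeding: the first and last samples as initial centers."""
    centers = X[[0, -1]].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each spectrum to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers, guarding against an empty cluster
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

The spatial positions of the metavoxels in the "prostatic" cluster would then seed the ASM's initial ROI.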
39
Mahdavi SS, Chng N, Spadinger I, Morris WJ, Salcudean SE. Semi-automatic segmentation for prostate interventions. Med Image Anal 2010; 15:226-37. [PMID: 21084216 DOI: 10.1016/j.media.2010.10.002] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2009] [Revised: 09/05/2010] [Accepted: 10/19/2010] [Indexed: 11/24/2022]
Abstract
In this paper we report and characterize a semi-automatic prostate segmentation method for prostate brachytherapy. Based on anatomical evidence and requirements of the treatment procedure, a warped and tapered ellipsoid was found suitable as the a-priori 3D shape of the prostate. By transforming the acquired endorectal transverse images of the prostate into ellipses, the shape fitting problem was cast into a convex problem which can be solved efficiently. The average whole gland error between non-overlapping volumes created from manual and semi-automatic contours from 21 patients was 6.63 ± 0.9%. For use in brachytherapy treatment planning, the resulting contours were modified, if deemed necessary, by radiation oncologists prior to treatment. The average whole gland volume error between the volumes computed from semi-automatic contours and those computed from modified contours, from 40 patients, was 5.82 ± 4.15%. The amount of bias in the physicians' delineations when given an initial semi-automatic contour was measured by comparing the volume error between 10 prostate volumes computed from manual contours with those of modified contours. This error was found to be 7.25 ± 0.39% for the whole gland. Automatic contouring reduced subjectivity, as evidenced by a decrease in segmentation inter- and intra-observer variability from 4.65% and 5.95% for manual segmentation to 3.04% and 3.48% for semi-automatic segmentation, respectively. We characterized the performance of the method relative to the reference obtained from manual segmentation by using a novel approach that divides the prostate region into nine sectors. We analyzed each sector independently as the requirements for segmentation accuracy depend on which region of the prostate is considered. The measured segmentation time is 14 ± 1s with an additional 32 ± 14s for initialization. 
By assuming 1-3 min for modification of the contours, if necessary, a total segmentation time of less than 4 min is required, with no additional time required prior to treatment planning. This compares favorably to the 5-15 min manual segmentation time required for experienced individuals. The method is currently used at the British Columbia Cancer Agency (BCCA) Vancouver Cancer Centre as part of the standard treatment routine in low dose rate prostate brachytherapy and is found to be a fast, consistent and accurate tool for the delineation of the prostate gland in ultrasound images.
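The convexity of the shape-fitting step can be illustrated in miniature: fitting an axis-aligned ellipse to boundary points is linear in (1/a², 1/b²), so a single least-squares solve suffices (a toy stand-in for the paper's warped, tapered 3D ellipsoid fit; `fit_ellipse` is a hypothetical helper):

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares fit of (x/a)^2 + (y/b)^2 = 1 to 2-D boundary points.
    The model is linear in u = 1/a^2 and v = 1/b^2: u*x^2 + v*y^2 = 1."""
    A = np.column_stack([x ** 2, y ** 2])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    a, b = 1.0 / np.sqrt(coef)
    return a, b
```

Transforming the prostate contours toward this ellipse-like form is what makes the full fitting problem convex and fast in the paper.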
Affiliation(s)
- S Sara Mahdavi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
40
Wong A, Mishra AK. Generalized probabilistic scale space for image restoration. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2010; 19:2774-2780. [PMID: 20421184 DOI: 10.1109/tip.2010.2048973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
41
42
Xu RS, Michailovich O, Salama M. Information tracking approach to segmentation of ultrasound imagery of the prostate. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2010; 57:1748-1761. [PMID: 20679005 DOI: 10.1109/tuffc.2010.1613] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
The volume of the prostate is known to be a pivotal quantity used by clinicians to assess the condition of the gland during prostate cancer screening. As an alternative to palpation, an increasing number of methods for estimation of the volume of the prostate are based on using imagery data. The necessity to process large volumes of such data creates a need for automatic segmentation tools which would allow the estimation to be carried out with maximum accuracy and efficiency. In particular, the use of transrectal ultrasound (TRUS) imaging in prostate cancer screening seems to be becoming a standard clinical practice because of the high benefit-to-cost ratio of this imaging modality. Unfortunately, the segmentation of TRUS images is still hampered by relatively low contrast and reduced SNR of the images, thereby requiring the segmentation algorithms to incorporate prior knowledge about the geometry of the gland. In this paper, a novel approach to the problem of segmenting the TRUS images is described. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for modeling and fusing image-related and morphological features of the prostate. Moreover, the same framework allows the segmentation to be regularized by using a new type of weak shape priors, which minimally bias the estimation procedure, while rendering the procedure stable and robust. The value of the proposed methodology is demonstrated in a series of in silico and in vivo experiments.
Affiliation(s)
- Robert Sheng Xu
- School of Electrical and Computer Engineering, University of Waterloo, Canada
43
Abstract
Prostate segmentation from trans-rectal transverse B-mode ultrasound images is required for radiation treatment of prostate cancer. Manual segmentation is a time-consuming task, the results of which are dependent on image quality and physicians' experience. This paper introduces a semi-automatic 3D method based on super-ellipsoidal shapes. It produces a 3D segmentation in less than 15 seconds using a warped, tapered ellipsoid fit to the prostate. A study of patient images shows good performance and repeatability. This method is currently in clinical use at the Vancouver Cancer Center where it has become the standard segmentation procedure for low dose-rate brachytherapy treatment.
44
Alessio AM, Kinahan PE, Champley KM, Caldwell JH. Attenuation-emission alignment in cardiac PET/CT based on consistency conditions. Med Phys 2010; 37:1191-200. [PMID: 20384256 DOI: 10.1118/1.3315368] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
PURPOSE In cardiac PET and PET/CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. METHODS The CT-based attenuation map is iteratively transformed until the attenuation-corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. ["Attenuation correction in PET using consistency information," IEEE Trans. Nucl. Sci. 45, 3134-3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET/CT exams. The alignment procedure was applied to simulations at five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. RESULTS The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global-minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by > 50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations.
The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. CONCLUSIONS The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET/CT and provides a viable supplement to subjective manual realignment tools.
Affiliation(s)
- Adam M Alessio
- Department of Radiology, University of Washington Medical Center, 4000 15th Avenue NE, Box 357987, Seattle, Washington 98195-7987, USA.
45
Abstract
Ultrasound image segmentation deals with delineating the boundaries of structures, as a step towards semi-automated or fully automated measurement of dimensions or for characterizing tissue regions. Ultrasound tissue characterization (UTC) is driven by knowledge of the physics of ultrasound and its interactions with biological tissue, and has traditionally used signal modelling and analysis to characterize and differentiate between healthy and diseased tissue. Thus, both aim to enhance the capabilities of ultrasound as a quantitative tool in clinical medicine, and the two end goals can be the same, namely to characterize the health of tissue. This article reviews both research topics, and finds that the two fields are becoming more tightly coupled, even though there are key challenges to overcome in each area, influenced by factors such as more open software-based ultrasound system architectures, increased computational power, and advances in imaging transducer design.
Affiliation(s)
- J A Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Headington, Oxford OX3 7DQ, UK.
46
Puentes J, Dhibi M, Bressollette L, Guias B, Solaiman B. Computer-assisted venous thrombosis volume quantification. IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE : A PUBLICATION OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2009; 13:174-183. [PMID: 19272860 DOI: 10.1109/titb.2008.2007592] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Venous thrombosis (VT) volume assessment is often necessary to screen for life-threatening complications and to verify the risk of progression when anticoagulant or thrombolytic therapies are prescribed. Commonly, VT volume estimation is done by manual delineation of a few contours in the ultrasound (US) image sequence, assuming that the VT has a regular shape and constant radius, thus producing significant errors. This paper presents and evaluates a comprehensive functional approach based on the combination of robust anisotropic diffusion and deformable contours to calculate VT volume more accurately when applied to freehand 2-D US image sequences. Robust anisotropic filtering reduces image speckle noise without generating incoherent edge discontinuities. Prior knowledge of the VT shape allows initializing the deformable contour, which is then guided by the noise-filtering outcome. Segmented contours are subsequently used to calculate VT volume. The proposed approach is integrated into a system prototype compatible with existing clinical US machines that additionally tracks the acquired images' 3-D positions and provides the dense Delaunay triangulation required for volume calculation. A predefined robust anisotropic diffusion and deformable contour parameter set enhances the system's usability. The pertinence of the experimental results is assessed by comparison with manual and tetrahedron-based volume computations, using images acquired by two medical experts of eight plastic phantoms and eight in vitro VTs, whose independently measured volume is the reference ground truth. Results show a mean difference between 16 and 35 mm(3) for volumes that vary from 655 to 2826 mm(3). Two in vivo VT volumes are also calculated to illustrate how this approach could be applied in clinical conditions when the real value is unknown. Comparative results for the two experts differ by 1.2% to 10.08% of the smallest estimated value when the image acquisition cadences are similar.
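The edge-preserving speckle reduction described above can be sketched with classic Perona-Malik anisotropic diffusion (the paper uses a robust variant; this simpler form and its `kappa`/`lam` parameters are illustrative assumptions):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.15, lam=0.2):
    """Perona-Malik diffusion: average each pixel with its neighbours, but let
    the conductance g(d) = exp(-(d/kappa)^2) fall to zero across strong edges,
    so noise is smoothed while edges survive."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u   # difference toward north neighbour
        ds = p[2:, 1:-1] - u    # south
        dw = p[1:-1, :-2] - u   # west
        de = p[1:-1, 2:] - u    # east
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(dw) * dw + g(de) * de)
    return u
```

In a pipeline like the paper's, the deformable contour would then be driven by this filtered image rather than the raw speckled frame.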
Affiliation(s)
- John Puentes
- Image and Information Processing Department, Institut TELECOM, TELECOM Bretagne, Brest 29238, France.
47
Vikal S, Haker S, Tempany C, Fichtinger G. Prostate contouring in MRI guided biopsy. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2009; 7259:72594A. [PMID: 21132083 DOI: 10.1117/12.812433] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
With MRI possibly becoming a modality of choice for detection and staging of prostate cancer, fast and accurate outlining of the prostate is required in the volume of clinical interest. We present a semi-automatic algorithm that uses a priori knowledge of prostate shape to arrive at the final prostate contour. The contour of one slice is then used as the initial estimate in the neighboring slices; thus we propagate the contour in 3D through steps of refinement in each slice. The algorithm makes only minimal assumptions about the prostate shape. A statistical shape model of the prostate contour in polar transform space is employed to narrow the search space. Further, shape guidance is implicitly imposed by allowing only plausible edge orientations using template matching. The algorithm does not require region homogeneity, a discriminative edge force, or any particular edge profile. Likewise, it makes no assumption about the imaging coils and pulse sequences used, and it is robust to the patient's pose (supine, prone, etc.). The contouring method was validated against expert segmentation on clinical MRI data. We recorded a mean absolute distance of 2.0 ± 0.6 mm and a Dice similarity coefficient of 0.93 ± 0.3 in the midsection. The algorithm takes about 1 second per slice.
Affiliation(s)
- Siddharth Vikal
- School of Computing, Queen's University, Kingston, ON, Canada
48
Carneiro G, Georgescu B, Good S, Comaniciu D. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree. IEEE TRANSACTIONS ON MEDICAL IMAGING 2008; 27:1342-55. [PMID: 18753047 DOI: 10.1109/tmi.2008.928917] [Citation(s) in RCA: 88] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and usually cannot capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and the background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Notably, our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.
Affiliation(s)
- Gustavo Carneiro
- Integrated Data Systems Department, Siemens Corporate Research, Princeton, NJ 08540, USA.
49
Rusnell BJ, Pierson RA, Singh J, Adams GP, Eramian MG. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images. Reprod Biol Endocrinol 2008; 6:33. [PMID: 18680589 PMCID: PMC2519064 DOI: 10.1186/1477-7827-6-33] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/27/2008] [Accepted: 08/04/2008] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1-2 mm by a level set image segmentation methodology. METHODS Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground-truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. RESULTS AND DISCUSSION The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate to within 1-2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but that the contour interior rarely included pixels judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early.
CONCLUSION The hypothesis that level set segmentation can be accurate to within 1-2 mm on average was supported, although there can be some greater deviation. The method was robust to boundary leakage as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained to conduct further studies.
Affiliation(s)
- Brennan J Rusnell
- Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Roger A Pierson
- Department of Obstetrics, Gynecology and Reproductive Sciences, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Jaswant Singh
- Department of Veterinary Biomedical Sciences, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Gregg P Adams
- Department of Veterinary Biomedical Sciences, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Mark G Eramian
- Department of Computer Science, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| |
50
Yu J, Wang Y, Shen Y. Noise reduction and edge detection via kernel anisotropic diffusion. Pattern Recognit Lett 2008. [DOI: 10.1016/j.patrec.2008.03.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]