1
Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound in Medicine & Biology 2025; 51:189-209. [PMID: 39551652] [DOI: 10.1016/j.ultrasmedbio.2024.10.005]
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, and early diagnosis is crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To provide physicians with more accurate and efficient computer-assisted diagnosis and intervention, many image processing algorithms for TRUS have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades necessitates a comprehensive summary. This survey therefore provides a narrative review of the field, outlining the evolution of image processing methods for TRUS image analysis and highlighting their relevant contributions. It also discusses current challenges and suggests future research directions that may advance the field further.
Affiliation(s)
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China.
2
Liu M, Shao X, Jiang L, Wu K. 3D EAGAN: 3D edge-aware attention generative adversarial network for prostate segmentation in transrectal ultrasound images. Quant Imaging Med Surg 2024; 14:4067-4085. [PMID: 38846298] [PMCID: PMC11151225] [DOI: 10.21037/qims-23-1698]
Abstract
Background The segmentation of the prostate from transrectal ultrasound (TRUS) images is a critical step in the diagnosis and treatment of prostate cancer. Manual segmentation by physicians, however, is a time-consuming and laborious task, so there is a pressing need for computerized algorithms capable of autonomously segmenting the prostate from TRUS images. Automatic prostate segmentation in TRUS has always been challenging because prostates in TRUS images have ambiguous boundaries and inhomogeneous intensity distributions. Although many prostate segmentation methods have been proposed, most remain insensitive to edge information. The objective of this study is therefore to devise a highly effective prostate segmentation method that overcomes these limitations and accurately segments the prostate in TRUS images. Methods A three-dimensional (3D) edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed, consisting of an edge-aware segmentation network (EASNet) that performs the segmentation and a discriminator network that distinguishes predicted prostates from real ones. EASNet is composed of an encoder-decoder U-Net backbone, a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The DCM compensates for the loss of detailed information caused by down-sampling in the encoder, and its features are selectively enhanced by the 3D spatial and channel attention modules. Furthermore, the EEM guides shallow layers of EASNet to focus on contour and edge information of the prostate.
Finally, features from shallow layers and hierarchical features from the decoder are fused through the GFE to predict the prostate segmentation. Results The proposed method was evaluated on our TRUS image dataset and the open-source µRegPro dataset. Experimental results on the two datasets show that the proposed method significantly improved the average Dice score from 85.33% to 90.06%, the Jaccard score from 76.09% to 84.11%, the Hausdorff distance (HD) from 8.59 to 4.58 mm, precision from 86.48% to 90.58%, and recall from 84.79% to 89.24%. Conclusions A novel 3D EAGAN-based prostate segmentation method, consisting of an EASNet and a discriminator network, is proposed. Experimental results demonstrate that it achieves satisfactory results on 3D TRUS prostate segmentation.
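The Dice, Jaccard, and Hausdorff distance figures quoted here (and throughout the entries below) are standard overlap and boundary metrics for segmentation. As an illustrative sketch (not the authors' evaluation code), they can be computed from binary masks like this:

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, gt: np.ndarray):
    """Overlap metrics between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return dice, jaccard

def hausdorff(pred: np.ndarray, gt: np.ndarray, spacing: float = 1.0):
    """Symmetric Hausdorff distance between the two masks' foreground
    point sets (brute force; `spacing` converts pixels to mm)."""
    pa = np.argwhere(pred.astype(bool))
    pb = np.argwhere(gt.astype(bool))
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the Hausdorff distance is usually computed on boundary voxels only (and often as the 95th percentile), but the pairwise-distance principle is the same.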
Affiliation(s)
- Mengqing Liu
- School of Computer and Information Engineering, Nantong Institute of Technology, Nantong, China
- School of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Xiao Shao
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Liping Jiang
- Department of Ultrasound Medicine, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Kaizhi Wu
- School of Information Engineering, Nanchang Hangkong University, Nanchang, China
3
Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024; 181:105279. [PMID: 37977054] [DOI: 10.1016/j.ijmedinf.2023.105279]
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis enables effective treatment and can greatly reduce mortality. The main medical imaging tools for screening prostate cancer are MRI, CT, and ultrasound. Over the past 20 years these imaging methods have made great progress alongside machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected papers on medical image processing of the prostate and prostate cancer on MR, CT, and ultrasound images through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland, registration of the prostate between images of different modalities, and detection of prostate cancer lesions. CONCLUSION From the collated papers, research on the diagnosis and staging of prostate cancer using machine learning and deep learning is found to be in its infancy. Most existing studies address diagnosis and lesion classification, and accuracy remains limited, with the best results below 0.95; studies on staging are fewer. Research is focused mainly on MR images and much less on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China.
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
4
Bi H, Sun J, Jiang Y, Ni X, Shu H. Structure boundary-preserving U-Net for prostate ultrasound image segmentation. Front Oncol 2022; 12:900340. [PMID: 35965563] [PMCID: PMC9366193] [DOI: 10.3389/fonc.2022.900340]
Abstract
Prostate cancer diagnosis is performed under ultrasound-guided puncture for pathological cell extraction. However, determining the accurate prostate location remains challenging for two reasons: (1) the prostate boundary in ultrasound images is often ambiguous; (2) radiologists' delineations occupy multiple pixels, leaving many disturbing points around the actual contour. We propose a boundary structure-preserving U-Net (BSP U-Net) to obtain a precise prostate contour. BSP U-Net incorporates a prostate shape prior into the traditional U-Net. The prior shape is built by a key point selection module, an active shape model-based method, which is then plugged into the traditional U-Net structure to achieve prostate segmentation. Experiments were conducted on two datasets: the PH2 + ISBI 2016 challenge and our private prostate ultrasound dataset. On the PH2 + ISBI 2016 challenge, the method achieved a Dice similarity coefficient (DSC) of 95.94% and a Jaccard coefficient (JC) of 88.58%. On the prostate contours, it achieved a pixel accuracy of 97.05%, a mean intersection over union of 93.65%, a DSC of 92.54%, and a JC of 93.16%. The experimental results show that the proposed BSP U-Net performs well on both the PH2 + ISBI 2016 challenge and prostate ultrasound image segmentation, and outperforms other state-of-the-art methods.
Affiliation(s)
- Hui Bi
- Department of Radiation Oncology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China
- Jiawei Sun
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Yibo Jiang
- School of Electrical and Information Engineering, Changzhou Institute of Technology, Changzhou, China
- Xinye Ni (corresponding author)
- Department of Radiation Oncology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Huazhong Shu
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-français, Rennes, France
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, China
5
Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Shadow-Consistent Semi-Supervised Learning for Prostate Ultrasound Segmentation. IEEE Transactions on Medical Imaging 2022; 41:1331-1345. [PMID: 34971530] [PMCID: PMC9709821] [DOI: 10.1109/tmi.2021.3139999]
Abstract
Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, yet it remains a long-standing problem due to low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Shadow-AUG enriches training samples by adding simulated shadow artifacts to the images, making the network robust to shadow patterns. Shadow-DROP forces the segmentation network to infer the prostate boundary from neighboring shadow-free pixels. Extensive experiments were conducted on two large clinical datasets (a public dataset of 1,761 TRUS volumes and an in-house dataset of 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with Shadow-AUG and Shadow-DROP outperforms the state-of-the-art methods with statistical significance. In the semi-supervised setting, even with only 20% labeled training data, SCO-SSL still achieves highly competitive performance, suggesting great clinical value in relieving the labor of data annotation. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
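The shadow-augmentation idea, darkening a fan-shaped sector radiating from the probe so the network learns to cope with acoustic shadows, can be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation (see their repository for that); the sector geometry and the `strength` parameter are assumptions:

```python
import numpy as np

def add_shadow(img, probe_rc=(0, None), width_deg=15.0, strength=0.8, rng=None):
    """Simulate an acoustic shadow: attenuate a random angular sector
    radiating from the probe position (top-centre by default).
    `strength` is the fraction of intensity removed inside the sector."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    r0 = probe_rc[0]
    c0 = w // 2 if probe_rc[1] is None else probe_rc[1]
    rows, cols = np.mgrid[0:h, 0:w]
    # angle of each pixel relative to the downward axis through the probe
    angles = np.degrees(np.arctan2(cols - c0, rows - r0 + 1e-6))
    centre = rng.uniform(-45.0, 45.0)          # random sector direction
    mask = np.abs(angles - centre) < width_deg / 2
    out = img.astype(np.float32).copy()
    out[mask] *= (1.0 - strength)              # darken shadowed pixels
    return out, mask
```

Shadow-DROP would then use such a sector mask to drop input information inside the shadow at training time, forcing the network to rely on shadow-free neighbours.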
6
Peng T, Tang C, Wu Y, Cai J. H-SegMed: A Hybrid Method for Prostate Segmentation in TRUS Images via Improved Closed Principal Curve and Improved Enhanced Machine Learning. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01619-3]
7
Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Polar transform network for prostate ultrasound segmentation with uncertainty estimation. Med Image Anal 2022; 78:102418. [PMID: 35349838] [PMCID: PMC9082929] [DOI: 10.1016/j.media.2022.102418]
Abstract
Automatic and accurate prostate ultrasound segmentation is a long-standing and challenging problem due to severe noise and ambiguous or missing prostate boundaries. In this work, we propose a novel polar transform network (PTN) that handles the problem from a fundamentally new perspective: the prostate is represented and segmented in polar coordinate space rather than the original image grid. This representation samples a prostate volume, especially the most challenging apex and base sub-areas, much more densely than the background, and thus facilitates the learning of discriminative features for accurate segmentation. Moreover, in the polar representation the prostate surface can be efficiently parameterized as a 2D surface radius map with respect to a centroid coordinate, which allows the proposed PTN to obtain superior accuracy to its convolutional neural network counterparts while having significantly fewer (18%∼41%) trainable parameters. We also equip the PTN with a novel strategy of centroid-perturbed test-time augmentation (CPTTA), designed to further improve segmentation accuracy and simultaneously quantify model uncertainty. The uncertainty estimate provides valuable feedback to clinicians when manual modifications or approvals of the segmentation are required, substantially improving the clinical significance of this work. We conduct a three-fold cross-validation on a clinical dataset of 315 transrectal ultrasound (TRUS) images to comprehensively evaluate the proposed method. The experimental results show that the PTN with CPTTA outperforms the state-of-the-art methods with statistical significance on most metrics while having a much smaller model size. Source code of the proposed PTN is released at https://github.com/DIAL-RPI/PTN.
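The core of the polar representation is resampling the image onto a (radius, angle) grid around a centroid. The following is a minimal 2D sketch of that resampling step (not the authors' released PTN code; the nearest-neighbour sampling and grid sizes are simplifications of their approach):

```python
import numpy as np

def to_polar(img, centre, n_r=64, n_theta=90):
    """Resample a 2D image onto a polar (radius, angle) grid around
    `centre` = (row, col), using nearest-neighbour sampling."""
    h, w = img.shape
    cy, cx = centre
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))   # reach the far corner
    r = np.linspace(0.0, r_max, n_r)
    t = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                   # shape (n_r, n_theta)
```

In this coordinate system a star-convex prostate boundary becomes a single radius value per angle, which is why the surface can be parameterized as a radius map and why centroid perturbation yields natural test-time augmentation.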
Affiliation(s)
- Xuanang Xu
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Thomas Sanford
- Department of Urology, The State University of New York Upstate Medical University, Syracuse, NY 13210, USA
- Baris Turkbey
- Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sheng Xu
- Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood
- Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Pingkun Yan
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
8
Huang J, Shao D, Liu H, Xiang Y, Ma L, Yi S, Xu H. A lightweight segmentation method based on residual U-Net for MR images. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-211424]
Abstract
Automatic segmentation of magnetic resonance imaging (MRI) based on Residual U-Net (ResU-Net) helps radiologists quickly assess a patient's condition. However, the ResU-Net structure requires a large number of parameters and much model storage space, making it inconvenient to deploy on mobile MRI devices. To solve this problem, a Depthwise Separable Convolution and Squeeze-and-Excitation Residual U-Net (DSRU-Net) is proposed for MRI segmentation; Squeeze-and-Excitation is a channel attention mechanism. The proposed method simplifies the ResU-Net model, making it easier to apply on mobile MRI devices. A fuzzy comprehensive evaluation with three factors (the number of model parameters, the Dice Similarity Coefficient (DSC), and the Hausdorff Distance (HD)) is used to assess the method on the MICCAI 2012 Prostate MR Image Segmentation (PROMISE12) challenge dataset and the Automatic Cardiac Diagnosis Challenge (ACDC) dataset. The fuzzy comprehensive evaluation values obtained on 5 PROMISE12 samples and 15 ACDC samples are 0.9889 and 0.9652, respectively. Combining the average results on the two datasets, the proposed method best balances segmentation accuracy against the number of model parameters.
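The Squeeze-and-Excitation channel attention used above can be sketched in a few lines. This is a generic illustration of the mechanism (squeeze by global average pooling, excite through a two-layer bottleneck, rescale channels), not the DSRU-Net implementation; the weight shapes are assumptions:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    w1: (C, C//r) reduction weights, w2: (C//r, C) expansion weights."""
    z = x.mean(axis=(1, 2))                      # squeeze -> (C,) channel stats
    h = np.maximum(z @ w1 + b1, 0.0)             # reduction layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))     # expansion layer + sigmoid
    return x * s[:, None, None]                  # rescale each channel of x
```

With zero-initialized weights every channel weight is sigmoid(0) = 0.5, so the block halves all activations; training learns which channels to emphasise.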
Affiliation(s)
- Junhui Huang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Dangguo Shao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, Yunnan, China
- Han Liu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Yan Xiang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, Yunnan, China
- Lei Ma
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Sanli Yi
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Hui Xu
- First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
9
Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10179-4]
10
Autonomous Prostate Segmentation in 2D B-Mode Ultrasound Images. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12062994]
Abstract
Prostate brachytherapy is a treatment for prostate cancer; during the planning of the procedure, ultrasound images of the prostate are taken. The prostate must be segmented out in each of the ultrasound images, and to assist with the procedure, an autonomous prostate segmentation algorithm is proposed. The prostate contouring system presented here is based on a novel superpixel algorithm, whereby pixels in the ultrasound image are grouped into superpixel regions that are optimized based on statistical similarity measures, so that the various structures within the ultrasound image can be differentiated. An active shape prostate contour model is developed and then used to delineate the prostate within the image based on the superpixel regions. Before segmentation, this contour model was fit to a series of point-based clinician-segmented prostate contours exported from conventional prostate brachytherapy planning software to develop a statistical model of the shape of the prostate. The algorithm was evaluated on nine sets of in vivo prostate ultrasound images and compared with manually segmented contours from a clinician, where the algorithm had an average volume difference of 4.49 mL or 10.89%.
11
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1]
12
Zhang J, Shi Y, Sun J, Wang L, Zhou L, Gao Y, Shen D. Interactive medical image segmentation via a point-based interaction. Artif Intell Med 2020; 111:101998. [PMID: 33461691] [DOI: 10.1016/j.artmed.2020.101998]
Abstract
Due to low tissue contrast, irregular shape, and large location variance, segmenting objects from different medical imaging modalities (e.g., CT, MR) is an important yet challenging task. In this paper, a novel method is presented for interactive medical image segmentation with the following merits. (1) Its design is fundamentally different from previous pure patch-based and image-based segmentation methods. During delineation, a physician repeatedly compares intensities from inside the object to outside it to determine the boundary, which indicates that comparison in an inside-out manner is extremely important. The method therefore models segmentation as learning the representation of bi-directional sequential patches starting from (or ending at) a given central point of the object, realized by the proposed ConvRNN network embedded with a gated memory propagation unit. (2) Unlike previous interactive methods that require a bounding box or seed points, the proposed method only asks the physician to click on the rough central point of the object before segmentation, which simultaneously enhances performance and reduces segmentation time. (3) The method is used in a multi-level framework for better performance. It has been systematically evaluated on three segmentation tasks, including CT kidney tumor, MR prostate, and the PROMISE12 challenge, showing promising results compared with state-of-the-art methods.
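The inside-out patch-sequence idea, marching from the clicked centre toward the boundary, can be illustrated with a rough sketch. This is not the authors' ConvRNN pipeline; the ray-marching parameters (`step`, `size`, number of steps) are assumptions for illustration only:

```python
import numpy as np

def ray_patch_sequence(img, centre, angle, n_steps=8, step=4, size=7):
    """Extract a sequence of patches marching outward from a clicked
    centre point along one direction. Reading the sequence forward
    (inside-out) or reversed (outside-in) mimics the bi-directional
    comparison a physician makes when locating the object boundary."""
    h, w = img.shape
    half = size // 2
    pad = np.pad(img, half, mode="edge")      # so border patches stay full-size
    cy, cx = centre
    seq = []
    for k in range(n_steps):
        y = int(round(cy + k * step * np.sin(angle)))
        x = int(round(cx + k * step * np.cos(angle)))
        y = min(max(y, 0), h - 1)
        x = min(max(x, 0), w - 1)
        seq.append(pad[y:y + size, x:x + size])   # patch centred at (y, x)
    return np.stack(seq)                          # (n_steps, size, size)
```

A recurrent network fed such sequences (one per sampled angle) can learn where along each ray the inside-to-outside transition occurs.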
Affiliation(s)
- Jian Zhang
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Jinquan Sun
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Wang
- School of Computing and Information Technology, University of Wollongong, Australia
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China; Shanghai United Imaging Intelligence Co., Ltd., China; Department of Artificial Intelligence, Korea University, Republic of Korea
13
Mason SA, White IM, Lalondrelle S, Bamber JC, Harris EJ. The Stacked-Ellipse Algorithm: An Ultrasound-Based 3-D Uterine Segmentation Tool for Enabling Adaptive Radiotherapy for Uterine Cervix Cancer. Ultrasound in Medicine & Biology 2020; 46:1040-1052. [PMID: 31926750] [PMCID: PMC7043010] [DOI: 10.1016/j.ultrasmedbio.2019.09.001]
Abstract
The stacked-ellipse (SE) algorithm was developed to rapidly segment the uterus on 3-D ultrasound (US) for the purpose of enabling US-guided adaptive radiotherapy (RT) for uterine cervix cancer patients. The algorithm was initialised manually on a single sagittal slice to provide a series of elliptical initialisation contours in semi-axial planes along the uterus. The elliptical initialisation contours were deformed according to US features such that they conformed to the uterine boundary. The uterus of 15 patients was scanned with 3-D US using the Clarity System (Elekta Ltd.) at multiple days during RT and manually contoured (n = 49 images and corresponding contours). The median (interquartile range) Dice similarity coefficient and mean surface-to-surface-distance between the SE algorithm and manual contours were 0.80 (0.03) and 3.3 (0.2) mm, respectively, which are within the ranges of reported inter-observer contouring variabilities. The SE algorithm could be implemented in adaptive RT to precisely segment the uterus on 3-D US.
Affiliation(s)
- Sarah A Mason
- Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Ingrid M White
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Susan Lalondrelle
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Jeffrey C Bamber
- Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Emma J Harris
- Joint Department of Physics, Institute of Cancer Research, London, United Kingdom.
14
Bi H, Jiang Y, Tang H, Yang G, Shu H, Dillenseger JL. Fast and accurate segmentation method of active shape model with Rayleigh mixture model clustering for prostate ultrasound images. Computer Methods and Programs in Biomedicine 2020; 184:105097. [PMID: 31634807] [DOI: 10.1016/j.cmpb.2019.105097]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer interventions, which require accurate prostate segmentation, are performed under ultrasound imaging guidance. However, prostate ultrasound segmentation faces two challenges: the low signal-to-noise ratio and inhomogeneity of the ultrasound image, and the non-standardized shape and size of the prostate. METHODS This paper proposes an accurate and efficient active shape model with Rayleigh mixture model clustering (ASM-RMMC) for prostate ultrasound image segmentation. First, a Rayleigh mixture model (RMM) clusters image regions that present similar speckle distributions. These content-based clustered images are then used to initialize and guide the deformation of an ASM. RESULTS Performance was assessed on 30 prostate ultrasound images using four metrics: Mean Average Distance (MAD), Dice Similarity Coefficient (DSC), False Positive Error (FPE), and False Negative Error (FNE). The proposed ASM-RMMC reaches high segmentation accuracy: 95% ± 0.81% for DSC, 1.86 ± 0.02 pixels for MAD, 2.10% ± 0.36% for FPE, and 2.78% ± 0.71% for FNE. Moreover, the average segmentation time is less than 8 s per prostate ultrasound image. CONCLUSIONS This paper presents a prostate ultrasound segmentation method that achieves high accuracy with low computational complexity and meets clinical requirements.
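The Rayleigh distribution is a classical model for fully developed ultrasound speckle, and a mixture of Rayleighs can be fitted with a standard EM loop. The following is a generic sketch of RMM fitting (not the authors' code; the quantile-based initialisation is an assumption):

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    """Rayleigh density f(x; sigma) = (x / sigma^2) exp(-x^2 / (2 sigma^2))."""
    return (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def fit_rayleigh_mixture(x, k=2, n_iter=50, seed=0):
    """EM for a k-component Rayleigh mixture over positive intensities x.
    Returns mixing weights w (k,) and scale parameters sigma (k,)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    sigma = np.quantile(x, rng.uniform(0.2, 0.8, size=k))   # rough init
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        p = w[None, :] * rayleigh_pdf(x[:, None], sigma[None, :])
        p /= p.sum(axis=1, keepdims=True) + 1e-12
        # M-step: closed-form updates (sigma_k^2 = sum gamma x^2 / (2 sum gamma))
        nk = p.sum(axis=0)
        w = nk / len(x)
        sigma = np.sqrt((p * x[:, None] ** 2).sum(axis=0) / (2 * nk + 1e-12))
    return w, sigma
```

Pixels can then be clustered by assigning each to its most responsible component, giving the speckle-similarity regions that guide the ASM deformation.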
Affiliation(s)
- Hui Bi
- Changzhou University, Changzhou, China
- Yibo Jiang
- Changzhou Institute of Technology, Changzhou, China
- Hui Tang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Guanyu Yang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Huazhong Shu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China.
- Jean-Louis Dillenseger
- Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China; Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
15
|
Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2768-2778. [PMID: 31021793 DOI: 10.1109/tmi.2019.2913184] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module utilizes the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at shallow layers of the CNN and incorporating more prostate detail into the features at deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
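The core idea of attention-weighted fusion of multi-level features can be sketched in miniature. The real module operates per spatial location on convolutional feature maps; the toy version below, with one scalar value and one attention logit per layer (all names and values are illustrative, not from the paper), just shows the softmax weighting step.

```python
import math

def attentive_fuse(features):
    """Fuse per-layer feature values with softmax attention weights.

    features: dict mapping layer name -> (feature_value, attention_logit).
    A softmax over the logits gives each layer's weight; the fused value
    is the weighted sum. Real attention modules do this per spatial
    location over whole feature maps; scalars keep the sketch minimal.
    """
    logits = [lg for _, lg in features.values()]
    m = max(logits)                                # subtract max for stability
    exps = [math.exp(lg - m) for lg in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    fused = sum(w * v for w, (v, _) in zip(weights, features.values()))
    return fused, weights

# Equal logits -> equal weights -> plain average of the two layers.
fused, w = attentive_fuse({"shallow": (1.0, 0.0), "deep": (3.0, 0.0)})
```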
Collapse
|
16
|
Mason SA, White IM, O'Shea T, McNair HA, Alexander S, Kalaitzaki E, Bamber JC, Harris EJ, Lalondrelle S. Combined Ultrasound and Cone Beam CT Improves Target Segmentation for Image Guided Radiation Therapy in Uterine Cervix Cancer. Int J Radiat Oncol Biol Phys 2019; 104:685-693. [PMID: 30872145 PMCID: PMC6542416 DOI: 10.1016/j.ijrobp.2019.03.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Revised: 02/07/2019] [Accepted: 03/03/2019] [Indexed: 12/14/2022]
Abstract
PURPOSE Adaptive radiation therapy strategies could account for interfractional uterine motion observed in patients with cervix cancer, but the current cone beam computed tomography (CBCT)-based treatment workflow is limited by poor soft-tissue contrast. The goal of the present study was to determine if ultrasound (US) could be used to improve visualization of the uterus, either as a single modality or in combination with CBCT. METHODS AND MATERIALS Interobserver uterine contour agreement and confidence were compared on 40 corresponding CBCT, US, and CBCT-US-fused images from 11 patients with cervix cancer. Contour agreement was measured using the Dice similarity coefficient (DSC) and mean contour-to-contour distance (MCCD). Observers rated their contour confidence on a scale from 1 to 10. Pairwise Wilcoxon signed-rank tests were used to measure differences in contour agreement and confidence. RESULTS CBCT-US fused images had significantly better contour agreement and confidence than either individual modality (P < .05), with median (interquartile range [IQR]) values of 0.84 (0.11), 1.26 (0.23) mm, and 7 (2) for the DSC, MCCD, and observer confidence ratings, respectively. Contour agreement was similar between US and CBCT, with median (IQR) DSCs of 0.81 (0.17) and 0.82 (0.14) and MCCDs of 1.75 (1.15) mm and 1.62 (0.74) mm. Observers were significantly more confident in their US-based contours than in their CBCT-based contours (P < .05), with median (IQR) confidence ratings of 7 (2.75) versus 5 (4). CONCLUSIONS CBCT and US are complementary and improve uterine segmentation precision when combined. Observers could localize the uterus with a similar precision on independent US and CBCT images.
Collapse
Affiliation(s)
- Sarah A Mason
- Institute of Cancer Research, Radiotherapy and Imaging, London, United Kingdom
| | - Ingrid M White
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Tuathan O'Shea
- Radiotherapy Physics Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Helen A McNair
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
| | - Sophie Alexander
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
| | | | - Jeffrey C Bamber
- Institute of Cancer Research, Radiotherapy and Imaging, London, United Kingdom
| | - Emma J Harris
- Institute of Cancer Research, Radiotherapy and Imaging, London, United Kingdom.
| | - Susan Lalondrelle
- Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
| |
Collapse
|
17
|
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513 PMCID: PMC6625925 DOI: 10.1002/mp.13577] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Revised: 04/14/2019] [Accepted: 05/01/2019] [Indexed: 01/09/2023] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches are extracted from the newly acquired ultrasound image as input to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through a contour refinement process. RESULTS Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the TRUS prostate, demonstrated its clinical feasibility, and validated its accuracy compared to manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
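The stage-wise hybrid loss described above (BCE plus soft Dice) can be sketched as follows. The mixing weights and epsilon are illustrative placeholders, not the paper's values.

```python
import math

def hybrid_loss(probs, labels, w_bce=0.5, w_dice=0.5, eps=1e-7):
    """Hybrid segmentation loss: weighted sum of binary cross-entropy and
    a soft Dice loss, as commonly used under deep supervision.

    probs: predicted foreground probabilities in [0, 1].
    labels: ground-truth 0/1 labels.
    w_bce / w_dice / eps are illustrative, not the paper's settings.
    """
    # Mean binary cross-entropy over all voxels.
    bce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
               for p, y in zip(probs, labels)) / len(probs)
    # Soft Dice: differentiable overlap between probabilities and labels.
    inter = sum(p * y for p, y in zip(probs, labels))
    dice = (2 * inter + eps) / (sum(probs) + sum(labels) + eps)
    return w_bce * bce + w_dice * (1 - dice)
```

In a deeply supervised network, one such term is attached to the auxiliary output of each V-Net stage and the stage losses are summed.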
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
18
|
Abstract
Radiomics and radiogenomics are attractive research topics in prostate cancer. Radiomics mainly focuses on extraction of quantitative information from medical imaging, whereas radiogenomics aims to correlate these imaging features to genomic data. The purpose of this review is to provide a brief overview summarizing recent progress in the application of radiomics-based approaches in prostate cancer and to discuss the potential role of radiogenomics in prostate cancer.
Collapse
|
19
|
Jaouen V, Bert J, Mountris KA, Boussion N, Schick U, Pradier O, Valeri A, Visvikis D. Prostate Volume Segmentation in TRUS Using Hybrid Edge-Bhattacharyya Active Surfaces. IEEE Trans Biomed Eng 2019; 66:920-933. [DOI: 10.1109/tbme.2018.2865428] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
20
|
Li X, Li C, Liu H, Yang X. A modified level set algorithm based on point distance shape constraint for lesion and organ segmentation. Phys Med 2019; 57:123-136. [PMID: 30738516 DOI: 10.1016/j.ejmp.2018.12.032] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 10/02/2018] [Accepted: 12/23/2018] [Indexed: 11/27/2022] Open
|
21
|
Drulyte I, Ruzgas T, Raisutis R, Valiukeviciene S, Linkeviciute G. Application of automatic statistical post-processing method for analysis of ultrasonic and digital dermatoscopy images. Libyan J Med 2018; 13:1479600. [PMID: 29943665 PMCID: PMC6022253 DOI: 10.1080/19932820.2018.1479600] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2018] [Accepted: 05/12/2018] [Indexed: 11/06/2022] Open
Abstract
Ultrasonic and digital dermatoscopy diagnostic methods are used to estimate structural changes and to non-invasively measure changes in the parameters of human tissue lesions. Quantitative analysis of medical data is increasingly important, as it enables reliable early-stage diagnosis of lesions and helps to save more lives. The proposed automatic statistical post-processing method, based on the integration of ultrasonic and digital dermatoscopy measurements, is intended to estimate the parameters of malignant tumours, measure spatial dimensions (e.g. thickness) and shape, and speed up diagnostics by increasing the accuracy of tumour differentiation. It optimizes time-consuming analysis procedures for medical images and could be used as a reliable decision support tool in the field of dermatology.
Collapse
Affiliation(s)
- Indre Drulyte
- Prof. K. Baršauskas Ultrasound Research Institute, Kaunas University of Technology, Kaunas, Lithuania
| | - Tomas Ruzgas
- Department of Applied Mathematics, Faculty of Mathematics and Natural Sciences, Kaunas University of Technology, Kaunas, Lithuania
| | - Renaldas Raisutis
- Prof. K. Baršauskas Ultrasound Research Institute, Kaunas University of Technology, Kaunas, Lithuania
- Department of Electrical Power systems, Faculty of Electrical and Electronics Engineering, Kaunas University of Technology, Kaunas, Lithuania
| | - Skaidra Valiukeviciene
- Department of Skin and Venereal Diseases, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Gintare Linkeviciute
- Department of Skin and Venereal Diseases, Lithuanian University of Health Sciences, Kaunas, Lithuania
| |
Collapse
|
22
|
Ghavami N, Hu Y, Bonmati E, Rodell R, Gibson E, Moore C, Barratt D. Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images. J Med Imaging (Bellingham) 2018; 6:011003. [PMID: 30840715 PMCID: PMC6102407 DOI: 10.1117/1.jmi.6.1.011003] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Accepted: 07/30/2018] [Indexed: 12/04/2022] Open
Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, which often require significant manual interaction and are subject to interoperator variability. Therefore, automating this step would lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking one or more TRUS slices neighboring each slice to be segmented as input, in addition to these slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). The segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients on the 2-D images and corresponding 3-D volumes, respectively, as well as the 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve the segmentation performance in five of six experiments, in which the number of neighboring slices on either side was varied from 1 to 3. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
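The "neighboring slices as extra input" design can be sketched as building a multi-channel input per slice, clamping indices at the volume ends. This is an assumed border-handling strategy for illustration; the paper may pad differently.

```python
def with_neighbors(volume, k=1):
    """Build a (2k+1)-channel input for each 2-D slice of a 3-D volume.

    volume: ordered sequence of slices (any objects stand in for 2-D arrays).
    Each input stacks a slice with its k neighbors on each side; indices
    are clamped at the ends (an assumption, one common border strategy).
    """
    n = len(volume)
    inputs = []
    for i in range(n):
        chans = [volume[min(max(i + d, 0), n - 1)] for d in range(-k, k + 1)]
        inputs.append(chans)
    return inputs

# Scalars stand in for slices; slice 0's input repeats itself at the border.
inputs = with_neighbors([10, 11, 12, 13], k=1)
```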
Collapse
Affiliation(s)
- Nooshin Ghavami
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Yipeng Hu
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Ester Bonmati
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Rachael Rodell
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Eli Gibson
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Caroline Moore
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom.,University College London, Division of Surgery and Interventional Science, London, United Kingdom.,University College London Hospitals NHS Foundation Trust, Department of Urology, London, United Kingdom
| | - Dean Barratt
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom.,University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| |
Collapse
|
23
|
A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy. Med Image Anal 2018; 48:107-116. [PMID: 29886268 DOI: 10.1016/j.media.2018.05.010] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Revised: 05/30/2018] [Accepted: 05/31/2018] [Indexed: 12/14/2022]
Abstract
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.
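The dense/sparse temporal sampling idea, feeding the recurrent network both recent consecutive frames and a strided view of the longer history, can be sketched as index selection. The exact scheme here is an assumption for illustration, not the paper's sampling rule.

```python
def sample_indices(n_frames, window, stride):
    """Pick frame indices from an ultrasound sequence two ways.

    Dense: the last `window` consecutive frames (fine temporal detail).
    Sparse: every `stride`-th frame over the whole history (long-range
    context, robust to transient artifacts). Parameters are illustrative.
    """
    dense = list(range(max(0, n_frames - window), n_frames))
    sparse = list(range(0, n_frames, stride))
    return dense, sparse

dense, sparse = sample_indices(n_frames=10, window=3, stride=4)
```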
Collapse
|
24
|
Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017; 39:29-43. [PMID: 28431275 DOI: 10.1016/j.media.2017.04.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2016] [Revised: 02/28/2017] [Accepted: 04/03/2017] [Indexed: 01/13/2023]
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art, clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume of interest overlaps of the PI-RADS parcellation standard and tests using clinical landmark data demonstrate that our use of an SDM for registration, with median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
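The statistical deformation model (SDM) idea, representing a deformation field as a training mean plus a few principal modes so that registration optimizes only a low-dimensional coefficient vector, can be sketched as follows. The modes would come from PCA over training deformations; here they are hand-made placeholders.

```python
def sdm_deform(mean, modes, coeffs):
    """Reconstruct a deformation field from a statistical deformation model:
    d = mean + sum_i c_i * mode_i.

    mean: flattened mean deformation field (list of floats).
    modes: principal modes of variation (each a list the same length),
    normally obtained by PCA over training deformations; placeholders here.
    coeffs: the low-dimensional parameters a registration would optimize.
    """
    d = list(mean)
    for c, mode in zip(coeffs, modes):
        d = [di + c * mi for di, mi in zip(d, mode)]
    return d

# Two orthogonal toy modes on a 2-component field.
field = sdm_deform([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0])
```

Constraining the search to the span of the modes is what makes the registration robust to segmentation errors: implausible deformations simply cannot be expressed.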
Collapse
Affiliation(s)
| | - Lawrence H Staib
- Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA.
| | | | | | - Cayce B Nawaf
- Department of Urology, Yale University, New Haven, Connecticut, USA.
| | | | - Xenophon Papademetris
- Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA.
| |
Collapse
|
25
|
Li X, Li C, Fedorov A, Kapur T, Yang X. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges. Med Phys 2017; 43:3090-3103. [PMID: 27277056 DOI: 10.1118/1.4950721] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In this paper, the authors propose a novel, efficient method to segment ultrasound images of the prostate with weak boundaries. Segmentation of the prostate from ultrasound images with weak boundaries arises widely in clinical applications; one of the most typical examples is the diagnosis and treatment of prostate cancer. Accurate segmentation of the prostate boundaries from ultrasound images plays an important role in many prostate-related applications, such as accurate placement of biopsy needles, assignment of the appropriate therapy in cancer treatment, and measurement of the prostate volume. METHODS Ultrasound images of the prostate are usually corrupted by intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation of the prostate an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term into the modified level set energy functional. The active band term deals with intensity inhomogeneities, and the edge descriptor term captures weak boundaries and rules out unwanted boundaries. The level set function of the proposed model is updated in a band region around the zero level set, which the authors call an active band. The active band restricts the method to local image information in a banded region around the prostate contour. Compared to traditional level set methods, the average intensities inside/outside the zero level set are computed only in this banded region; thus, only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches in the band region around the prostate boundaries.
The authors incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in the ultrasound images. RESULTS The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation results on real 3D TRUS prostate images show that the model can obtain a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show that the method can obtain a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was done to evaluate the robustness of the proposed model. CONCLUSIONS Prostate segmentation from ultrasound images with weak boundaries and unwanted edges is a difficult task. A novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is efficient and accurate.
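The two ingredients above, restricting updates to a narrow band around the zero level set and scoring edges by total intensity variation along a profile, can each be sketched in a few lines. This is an illustrative 1-D reduction, not the paper's implementation.

```python
def active_band(phi, width):
    """Indices in the 'active band' around the zero level set: grid points
    whose level-set value phi is within +/- width of zero. Only these
    points are updated during evolution, which keeps the method local
    and cheap compared to updating the full grid."""
    return [i for i, v in enumerate(phi) if abs(v) < width]

def edge_strength(profile):
    """Edge descriptor: total intensity variation along a short profile
    taken parallel to the contour normal. Large values flag an edge,
    including weak ones that a global gradient threshold would miss."""
    return sum(abs(b - a) for a, b in zip(profile, profile[1:]))

band = active_band([-2.0, -0.5, 0.2, 1.5], width=1.0)  # indices near contour
strength = edge_strength([1, 1, 5, 5])                 # one step of height 4
```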
Collapse
Affiliation(s)
- Xu Li
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
| | - Chunming Li
- School of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Andriy Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02446
| | - Tina Kapur
- Department of Mathematics, Nanjing University, Nanjing 210093, China
| | - Xiaoping Yang
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
| |
Collapse
|
26
|
Ghose S, Denham JW, Ebert MA, Kennedy A, Mitra J, Dowling JA. Multi-atlas and unsupervised learning approach to perirectal space segmentation in CT images. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2016; 39:933-941. [PMID: 27844331 DOI: 10.1007/s13246-016-0496-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2016] [Accepted: 10/31/2016] [Indexed: 11/27/2022]
Abstract
Perirectal space segmentation in computed tomography (CT) images aids in quantifying the radiation dose received by healthy tissues, and the associated toxicity, during radiation therapy treatment of the prostate. Radiation dose normalised by tissue volume facilitates predicting outcomes or possible harmful side effects of radiation therapy. Manual segmentation of the perirectal space is time consuming, is challenging in the presence of inter-patient anatomical variability, and may suffer from inter- and intra-observer variability. However, automatic or semi-automatic segmentation of the perirectal space in CT images is also challenging, due to inter-patient anatomical variability, contrast variability and imaging artifacts. In the model presented here, a volume of interest is obtained with a multi-atlas based segmentation approach. Unsupervised learning in the volume of interest with a Gaussian-mixture-modelling based clustering approach is adopted to achieve a soft segmentation of the perirectal space. Probabilities from soft clustering are further refined by rigid registration of the multi-atlas mask in a probabilistic domain. A maximum a posteriori approach is adopted to obtain a binary segmentation from the refined probabilities. A mean volume similarity value of 97% and a mean surface difference of 3.06 ± 0.51 mm are achieved in a leave-one-patient-out validation framework on a subset of a clinical trial dataset. Qualitative results show a good approximation of the perirectal space volume compared to the ground truth.
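The final step, hardening the refined soft clustering into a binary segmentation by maximum a posteriori (MAP) selection, amounts to taking the highest-posterior class per voxel. A minimal sketch (the per-voxel posteriors here are made-up placeholders):

```python
def map_label(posteriors):
    """Maximum a posteriori hardening of a soft segmentation.

    posteriors: per-voxel lists of class probabilities, e.g.
    [p(background), p(perirectal space)]. Each voxel takes the class
    with the highest posterior.
    """
    return [max(range(len(p)), key=lambda k: p[k]) for p in posteriors]

# Two toy voxels: the first favors class 1, the second class 0.
labels = map_label([[0.2, 0.8], [0.7, 0.3]])
```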
Collapse
Affiliation(s)
- Soumya Ghose
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH, 44106, USA
| | - James W Denham
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
| | - Martin A Ebert
- Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA, 6009, Australia. .,School of Physics, University of Western Australia, 35 Stirling Hwy, Crawley, WA, 6009, Australia.
| | - Angel Kennedy
- Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA, 6009, Australia
| | - Jhimli Mitra
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH, 44106, USA
| | - Jason A Dowling
- Australian e-Health Research Centre, CSIRO, Brisbane, QLD, 4029, Australia
| |
Collapse
|
27
|
Sridar P, Kumar A, Li C, Woo J, Quinton A, Benzie R, Peek MJ, Feng D, Kumar RK, Nanan R, Kim J. Automatic Measurement of Thalamic Diameter in 2-D Fetal Ultrasound Brain Images Using Shape Prior Constrained Regularized Level Sets. IEEE J Biomed Health Inform 2016; 21:1069-1078. [PMID: 27333614 DOI: 10.1109/jbhi.2016.2582175] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We derived an automated algorithm for accurately measuring the thalamic diameter from 2-D fetal ultrasound (US) brain images. The algorithm overcomes the inherent limitations of the US image modality: nonuniform density; missing boundaries; and strong speckle noise. We introduced a "guitar" structure that represents the negative space surrounding the thalamic regions. The guitar acts as a landmark for deriving the widest points of the thalamus even when its boundaries are not identifiable. We augmented a generalized level-set framework with a shape prior and constraints derived from statistical shape models of the guitars; this framework was used to segment US images and measure the thalamic diameter. Our segmentation method achieved a higher mean Dice similarity coefficient, Hausdorff distance, specificity, and reduced contour leakage when compared to other well-established methods. The automatic thalamic diameter measurement had an interobserver variability of -0.56 ± 2.29 mm compared to manual measurement by an expert sonographer. Our method was capable of automatically estimating the thalamic diameter, with the measurement accuracy on par with clinical assessment. Our method can be used as part of computer-assisted screening tools that automatically measure the biometrics of the fetal thalamus; these biometrics are linked to neurodevelopmental outcomes.
Collapse
|
28
|
Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation. Comput Biol Med 2016; 74:74-90. [PMID: 27208705 DOI: 10.1016/j.compbiomed.2016.05.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2015] [Revised: 05/03/2016] [Accepted: 05/05/2016] [Indexed: 11/22/2022]
Abstract
Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside the contour. The transverse image on which the highest accuracy was attained was chosen as the initial slice for the propagation process. Evaluation was performed on 336 transverse images from 15 prostates, including images acquired at the mid-gland, base and apex regions. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79±0.26 mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland, but also at the base and apex regions.
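The self-assessment idea, scoring a deformed contour by the contrast between mean intensity inside and outside it, can be sketched as follows. The normalization is an illustrative choice, not the paper's exact formula.

```python
def contour_score(pixels, inside):
    """Self-assessment of a deformed contour on one slice.

    pixels: flattened image intensities.
    inside: parallel 0/1 mask, 1 where the pixel lies inside the contour.
    Returns a contrast score in [0, 1]: higher means the contour better
    separates interior from exterior intensity (normalization is an
    illustrative choice). The slice with the highest score would seed
    the slice-based propagation.
    """
    ins = [p for p, m in zip(pixels, inside) if m]
    outs = [p for p, m in zip(pixels, inside) if not m]
    mean_in = sum(ins) / len(ins)
    mean_out = sum(outs) / len(outs)
    return abs(mean_in - mean_out) / max(mean_in, mean_out)

# A contour that cleanly splits bright interior from dark exterior scores high.
score = contour_score([10, 10, 2, 2], [1, 1, 0, 0])
```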
Collapse
|
29
|
Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2016; 9784. [PMID: 31467459 DOI: 10.1117/12.2216396] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The trained SVM is then used to localize the prostate in images of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
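A minimal sketch of the voxel-classification pipeline described above, with scikit-learn's `SelectKBest` and `SVC` standing in for the paper's feature selection and kernel SVM. The toy data, feature dimensions, and labeling rule are illustrative, not from the study:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for patch-based voxel signatures: 200 voxels with 10
# candidate features, of which only the first two actually separate
# "prostate" from "background" voxels.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Keep the most informative features, then train the kernel SVM on them.
selector = SelectKBest(f_classif, k=2).fit(X, y)
X_sel = selector.transform(X)
ksvm = SVC(kernel="rbf").fit(X_sel, y)

# The trained classifier would label every voxel of a new patient's image.
train_acc = ksvm.score(X_sel, y)
```

In the actual method the per-voxel signatures come from aligned training patches rather than random vectors, but the select-then-classify structure is the same.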
Collapse
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute
| | - Peter J Rossi
- Department of Radiation Oncology and Winship Cancer Institute
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute Emory University, Atlanta, GA 30322
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute
| |
Collapse
|
30
|
Wang Y, Cheng JZ, Ni D, Lin M, Qin J, Luo X, Xu M, Xie X, Heng PA. Towards Personalized Statistical Deformable Model and Hybrid Point Matching for Robust MR-TRUS Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:589-604. [PMID: 26441446 DOI: 10.1109/tmi.2015.2485299] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Registration and fusion of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland can provide high-quality guidance for prostate interventions. However, accurate MR-TRUS registration remains a challenging task, due to the great intensity variation between the two modalities, the lack of intrinsic fiducials within the prostate, the large gland deformation caused by TRUS probe insertion, and the distinctive biomechanical properties across patients and prostate zones. To address these challenges, a personalized model-to-surface registration approach is proposed in this study. The main contributions of this paper are threefold. First, a new personalized statistical deformable model (PSDM) is proposed, built with finite element analysis and patient-specific tissue parameters measured from ultrasound elastography. Second, a hybrid point matching method is developed by introducing the modality independent neighborhood descriptor (MIND) to weight the Euclidean distance between points and establish reliable surface point correspondence. Third, the hybrid point matching is further guided by the PSDM for more physically plausible deformation estimation. Eighteen sets of patient data are included to test the efficacy of the proposed method. The experimental results demonstrate that our approach provides more accurate and robust MR-TRUS registration than state-of-the-art methods do. The average target registration error is 1.44 mm, which meets the clinical requirement of 1.9 mm for accurate tumor volume detection. It can be concluded that the presented method can effectively fuse the heterogeneous image information from elastography, MR, and TRUS to attain satisfactory image alignment performance.
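The descriptor-weighted point matching idea, scaling the Euclidean distance between surface points by how much their appearance descriptors disagree, might be sketched as below. A generic descriptor vector stands in for MIND, and the particular weighting formula and `alpha` parameter are simplifying assumptions:

```python
import numpy as np

def weighted_correspondence(src_pts, dst_pts, src_desc, dst_desc, alpha=5.0):
    """For each source surface point, pick the destination point minimising
    a descriptor-weighted Euclidean distance: geometry is scaled up where
    the appearance descriptors disagree."""
    matches = []
    for p, d in zip(src_pts, src_desc):
        geo = np.linalg.norm(dst_pts - p, axis=1)    # Euclidean term
        app = np.linalg.norm(dst_desc - d, axis=1)   # descriptor disagreement
        matches.append(int(np.argmin(geo * (1.0 + alpha * app))))
    return matches

# One source point with two candidate matches: the geometrically nearer
# candidate disagrees in appearance, the farther one agrees.
src_pts = np.array([[0.0, 0.0]])
src_desc = np.array([[1.0]])
dst_pts = np.array([[0.4, 0.0], [1.0, 0.0]])
dst_desc = np.array([[0.0], [1.0]])

match = weighted_correspondence(src_pts, dst_pts, src_desc, dst_desc)
```

Pure geometry would pick the nearer (appearance-inconsistent) point; the descriptor weighting steers the match to the consistent one, which is the point of the hybrid scheme.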
Collapse
|
31
|
Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:1321-1335. [PMID: 25576565 DOI: 10.1109/tmi.2015.2388699] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Accurate segmentation is usually crucial in transrectal ultrasound (TRUS) image-based prostate diagnosis; however, it is always hampered by heavy speckle. Contrary to the traditional view that speckle is adverse to segmentation, we exploit intrinsic properties induced by speckle to facilitate the task, based on the observation that the sizes and orientations of speckles provide salient cues for determining the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, a rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address the problem of feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every individual feature vector is sparsely represented, and representation residuals are obtained. The residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. After that, the segmentation is fine-tuned by a novel level-set model, which integrates (1) the prostate shape prior, (2) the dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method is validated on two 2-D image datasets obtained from two different sonographic imaging systems, with mean absolute distances on the mid-gland images of only 1.06 ± 0.53 mm and 1.25 ± 0.77 mm, respectively. The method is also extended to segment apex and base images, producing competitive results over the state of the art.
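The per-strip residual cue described above, sparse-coding each feature vector over the strip's dictionary and using the reconstruction residual as a boundary indicator, can be sketched with orthogonal matching pursuit. The dictionary and dimensions below are toy values; the real features would be rotation-invariant texture vectors sampled along the speckle orientation:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)

# Toy per-strip dictionary: 20 unit-norm atoms of 8-dimensional features.
D = rng.normal(size=(8, 20))
D /= np.linalg.norm(D, axis=0)

def representation_residual(x, D, k=3):
    """Sparse-code x over D with orthogonal matching pursuit and return
    the reconstruction residual; vectors unlike the strip's dictionary
    (e.g., boundary voxels) leave larger residuals."""
    coef = orthogonal_mp(D, x, n_nonzero_coefs=k)
    return float(np.linalg.norm(x - D @ coef))

in_strip = 2.0 * D[:, 0] + 0.5 * D[:, 5]   # lies in the dictionary's span
outlier = rng.normal(size=8)               # unlike any atom
```

The residuals of many such vectors, together with a spatial-coherence term, would form the data term of the graph-cut step.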
Collapse
|
32
|
Yan P, Cao Y, Yuan Y, Turkbey B, Choyke PL. Label image constrained multiatlas selection. IEEE TRANSACTIONS ON CYBERNETICS 2015; 45:1158-68. [PMID: 25415994 PMCID: PMC8323590 DOI: 10.1109/tcyb.2014.2346394] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Multiatlas-based methods are commonly used in medical image segmentation, where atlas selection and combination are considered the two key factors affecting performance. Recently, manifold learning based atlas selection methods have emerged as very promising. However, due to the complexity of prostate structures in raw images, it is difficult to obtain accurate atlas selection results by only measuring the distance between raw images on the manifolds. Although the distance between the regions to be segmented across images can be readily obtained from the label images, it is infeasible to directly compute the distance between the test image (gray) and the label images (binary). This paper addresses this problem by proposing a label image constrained atlas selection method, which exploits the label images to constrain the manifold projection of raw images. By analyzing the data point distribution of the selected atlases in the manifold subspace, a novel weight computation method for atlas combination is also proposed. Compared with related existing methods, experimental results on prostate segmentation from T2w MRI showed that the selected atlases are closer to the target structure and that more accurate segmentations were obtained using the proposed method.
Collapse
|
33
|
Nouranian S, Mahdavi SS, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. A multi-atlas-based segmentation framework for prostate brachytherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:950-961. [PMID: 25474806 DOI: 10.1109/tmi.2014.2371823] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
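A sketch of atlas selection via a pairwise agreement factor followed by fusion. The factor below simply multiplies image similarity (normalised cross-correlation) with mean contour agreement (Dice) against the other atlases, and fusion is a majority vote; the paper's exact combination and fusion rules differ in detail, so treat this as an assumption-laden illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def dice(m1, m2):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(m1, m2).sum() / (m1.sum() + m2.sum())

def select_and_fuse(target_img, atlas_imgs, atlas_masks, n_keep=2):
    """Rank atlases by image similarity to the target times mean contour
    agreement with the other atlases, keep the best, and fuse the kept
    masks by majority vote (consensus segmentation)."""
    scores = []
    for i, (img, m) in enumerate(zip(atlas_imgs, atlas_masks)):
        sim = ncc(target_img, img)
        agree = np.mean([dice(m, atlas_masks[j])
                         for j in range(len(atlas_masks)) if j != i])
        scores.append(sim * agree)
    keep = np.argsort(scores)[-n_keep:]
    votes = np.mean([atlas_masks[i] for i in keep], axis=0)
    return votes >= 0.5

# Toy target and three atlases: two consistent, one poor outlier.
yy, xx = np.mgrid[0:32, 0:32]
disc = lambda cy, cx, r: (yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2
target = disc(16, 16, 8).astype(float)
atlas_imgs = [disc(16, 16, 8).astype(float),
              disc(16, 15, 8).astype(float),
              disc(5, 5, 3).astype(float)]
atlas_masks = [disc(16, 16, 8), disc(16, 15, 8), disc(5, 5, 3)]
fused = select_and_fuse(target, atlas_imgs, atlas_masks)
```

The pairwise agreement term is what prunes the outlier atlas before fusion, which is the pruning role it plays in the paper.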
Collapse
|
34
|
MRI segmentation of the human brain: challenges, methods, and applications. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2015; 2015:450341. [PMID: 25945121 PMCID: PMC4402572 DOI: 10.1155/2015/450341] [Citation(s) in RCA: 247] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/27/2014] [Revised: 09/11/2014] [Accepted: 10/01/2014] [Indexed: 12/25/2022]
Abstract
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.
Collapse
|
35
|
Qin X, Tian Y, Yan P. Feature competition and partial sparse shape modeling for cardiac image sequences segmentation. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.07.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
36
|
Cheng J, Xiong W, Gu Y, Chia SC, Wang Y, Huang W, Zhou J, Zhou Y, Gao W, Tay KJ, Ho H. Prostate boundary segment extraction using cascaded shape regression and optimal surface detection. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:2886-9. [PMID: 25570594 DOI: 10.1109/embc.2014.6944226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
In this paper, we propose a new method (CSR+OSD) for the extraction of irregular open prostate boundaries in noisy extracorporeal ultrasound (ECUS) images. First, cascaded shape regression (CSR) is used to locate the position of the prostate boundary in the images. In CSR, a sequence of random fern predictors is trained in a boosted regression manner, using shape-indexed features to achieve invariance against position variations of prostate boundaries. Afterwards, we adopt optimal surface detection (OSD) to refine the prostate boundary segments across 3D sections globally and efficiently. The proposed method was tested on 162 ECUS images acquired from 8 patients with benign prostatic hyperplasia. It yields a root mean square distance of 2.11 ± 1.72 mm and a mean absolute distance of 1.61 ± 1.26 mm, both lower than those of JFilament, an open active contour algorithm, and the Chan-Vese region-based level-set model.
Collapse
|
37
|
Le YH, Kurkure U, Kakadiaris IA. PDM-ENLOR for segmentation of mouse brain gene expression images. Med Image Anal 2014; 20:19-33. [PMID: 25476414 DOI: 10.1016/j.media.2014.09.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2014] [Revised: 07/04/2014] [Accepted: 09/01/2014] [Indexed: 10/24/2022]
Abstract
Statistical shape models, such as Active Shape Models (ASMs), suffer from their inability to represent a large range of variations of a complex shape and to account for the large errors in detection of (point) landmarks. We propose a method, PDM-ENLOR (Point Distribution Model-based ENsemble of LOcal Regressors), that overcomes these limitations by locating each landmark individually using an ensemble of local regression models and appearance cues from selected landmarks. We first detect a set of reference landmarks which were selected based on their saliency during training. For each landmark, an ensemble of regressors is built. From the locations of the detected reference landmarks, each regressor infers a candidate location for that landmark using local geometric constraints, encoded by a point distribution model (PDM). The final location of that point is determined as a weighted linear combination, whose coefficients are learned from the training data, of candidates proposed by its ensemble's component regressors. We use multiple subsets of reference landmarks as explanatory variables for the component regressors to provide varying degrees of locality for the models in each ensemble. This helps our ensemble model to capture a larger range of shape variations as compared to a single PDM. We demonstrate the advantages of our method on the challenging problem of segmenting gene expression images of mouse brain. The overall mean and standard deviation of the Dice coefficient overlap over all 14 anatomical regions and all 100 test images were (88.1 ± 9.5)%.
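The final combination step above, merging candidate locations proposed by an ensemble of local regressors through a learned weighted linear combination, reduces to a small matrix product. The candidate locations and weights below are illustrative; in the paper the weights are learned from training data:

```python
import numpy as np

# Candidate locations for one landmark, each proposed by a regressor
# built from a different subset of reference landmarks.
candidates = np.array([[10.2, 20.1],    # regressor using nearby references
                       [10.8, 19.6],    # regressor using mid-range references
                       [ 9.5, 21.0]])   # regressor using distant references

# Learned combination weights (illustrative values summing to 1).
weights = np.array([0.5, 0.3, 0.2])

# Final landmark location: weighted linear combination of the candidates.
final = weights @ candidates
```

Using subsets of references at varying distances is what gives each ensemble member a different degree of locality, letting the combination capture more shape variation than a single PDM.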
Collapse
Affiliation(s)
- Yen H Le
- Computational Biomedicine Lab, University of Houston, Houston, TX, USA(1)
| | - Uday Kurkure
- Computational Biomedicine Lab, University of Houston, Houston, TX, USA(1)
Collapse
|
38
|
A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning. Med Image Anal 2014; 19:176-86. [PMID: 25461336 DOI: 10.1016/j.media.2014.10.003] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2014] [Revised: 08/28/2014] [Accepted: 10/10/2014] [Indexed: 11/21/2022]
Abstract
Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve accuracy but limits computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurements were 94.31 ± 3.04%, 1.12 ± 0.69 mm and 3.65 ± 1.40 mm, respectively.
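The L1-norm sparse combination at the core of SSC can be sketched with an off-the-shelf Lasso solver. Note this uses coordinate descent rather than the paper's homotopy solver, so the warm-start speedup along the regularization path is not reproduced, and the toy shape repository is illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)

# Toy shape repository: each column is a flattened training shape
# (real SSC stacks thousands of mesh vertex coordinates per shape).
repo = rng.normal(size=(40, 15))

# An input shape composed of two repository shapes plus small noise that
# the sparse combination should largely ignore.
x = 0.7 * repo[:, 3] + 0.3 * repo[:, 9] + 0.01 * rng.normal(size=40)

# Stand-in for the homotopy L1 solver: coordinate-descent Lasso without
# an intercept, fitting a sparse coefficient vector over the repository.
lasso = Lasso(alpha=0.005, fit_intercept=False, max_iter=10000).fit(repo, x)
support = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
```

The recovered support identifies the few repository shapes that compose the input, and the reconstruction `repo @ lasso.coef_` is the denoised shape prior.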
Collapse
|
39
|
Chilali O, Ouzzane A, Diaf M, Betrouni N. A survey of prostate modeling for image analysis. Comput Biol Med 2014; 53:190-202. [PMID: 25156801 DOI: 10.1016/j.compbiomed.2014.07.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2013] [Revised: 06/22/2014] [Accepted: 07/23/2014] [Indexed: 11/18/2022]
Affiliation(s)
- O Chilali
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
| | - A Ouzzane
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Urology Department, Claude Huriez Hospital, Lille University Hospital, France
| | - M Diaf
- Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
| | - N Betrouni
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France.
| |
Collapse
|
40
|
|
41
|
Truong H, Logan J, Turkbey B, Siddiqui MM, Rais-Bahrami S, Hoang AN, Pusateri C, Shuch B, Walton-Diaz A, Vourganti S, Nix J, Stamatakis L, Harris C, Chua C, Choyke PL, Wood BJ, Pinto PA. MRI characterization of the dynamic effects of 5α-reductase inhibitors on prostate zonal volumes. THE CANADIAN JOURNAL OF UROLOGY 2013; 20:7002-7007. [PMID: 24331340 PMCID: PMC7589483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
INTRODUCTION Prior studies of the volumetric effects of 5α-reductase inhibitors (5ARIs) on the prostate have used transrectal ultrasound, which provides poor differentiation of prostatic zones. We utilized high-resolution prostate MRI to evaluate the true dynamic effects of 5ARIs in men who underwent multiple MRIs. MATERIALS AND METHODS We retrospectively studied patients who underwent serial 3.0 Tesla prostate MRI from 2007 to 2012 and were treated with a 5ARI. Nineteen patients who had a baseline MRI prior to 5ARI initiation and subsequent MRI follow up were selected. A randomly selected group of 40 patients who had not received any form of therapy served as the control cohort. Total prostate volume (TPV), transition zone volume (TZV), and peripheral zone volume (PZV) were calculated using 3D reconstructions and prostate segmentation from T2-weighted MRI. Changes in volumes were correlated with the duration of treatment using linear regression analysis. RESULTS Following over 2 years of treatment, 5ARI decreased TPV significantly (16.7%, p < 0.0001). There were similar decreases in TZV (7.5%, p < 0.001) and PZV (27.4%, p = 0.0002) from baseline. In the control group, TPV and TZV increased (p < 0.0001) while PZV remained stable. When adjusted for the natural growth of prostate zonal volumes seen in the control cohort, approximately 60% of the reduction in TPV from 5ARI resulted from changes in TZV and 40% from changes in PZV. CONCLUSIONS 3.0 Tesla MRI characterization of the dynamic effects of 5ARI on prostate zonal volumes demonstrates significant decreases in TPV, TZV, and PZV. 5ARI blocks the natural growth of TZV as men age and decreases both TZV and PZV below their baselines. As imaging technology improves, prostate MRI allows for more accurate assessment of drug effects on dynamic prostate volumes.
Collapse
Affiliation(s)
- Hong Truong
- National Institutes of Health, Bethesda, Maryland, USA
Collapse
|
42
|
TRUS image segmentation with non-parametric kernel density estimation shape prior. Biomed Signal Process Control 2013. [DOI: 10.1016/j.bspc.2013.07.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
43
|
Habes M, Schiller T, Rosenberg C, Burchardt M, Hoffmann W. Automated prostate segmentation in whole-body MRI scans for epidemiological studies. Phys Med Biol 2013; 58:5899-915. [PMID: 23920310 DOI: 10.1088/0031-9155/58/17/5899] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The whole prostatic volume (PV) is an important indicator for benign prostatic hyperplasia. Correlating the PV with other clinical parameters in a population-based prospective cohort study (SHIP-2) requires valid prostate segmentation in a large number of whole-body MRI scans. The axial proton density fast spin echo fat-saturated sequence is used for prostate screening in SHIP-2. Our automated segmentation method is based on support vector machines (SVM). We used three-dimensional neighborhood information to build classification vectors from automatically generated features and randomly selected 16 MR examinations for validation. The Hausdorff distance reached mean values of 5.048 ± 2.413 and 5.613 ± 2.897 compared to manual segmentation by observers A and B, respectively. The comparison between volume measurements from SVM-based segmentation and manual segmentations by observers A and B shows strong correlations, with Spearman's rank correlation coefficients (ρ) of 0.936 and 0.859, respectively. Our automated SVM-based methodology can segment the prostate in whole-body MRI scans with good segmentation quality and has considerable potential for integration into epidemiological studies.
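The validation statistic used above, Spearman's rank correlation between automated and manual volume measurements, is straightforward to compute with SciPy. The volumes below are illustrative numbers, not SHIP-2 data:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy prostate volumes (mL): automated SVM-based segmentation vs. one
# observer's manual segmentation of the same examinations.
auto_vol = np.array([28.0, 35.5, 41.2, 22.8, 55.0, 31.4, 47.9, 38.6])
manual_vol = auto_vol * 1.05 + np.array([1.2, -0.8, 0.5, -0.3,
                                         2.0, -1.1, 0.9, 0.1])

# Spearman's rho compares the rankings of the two measurement series,
# so it is insensitive to a systematic over- or under-estimation bias.
rho, p = spearmanr(auto_vol, manual_vol)
```

Because rank correlation ignores a constant scale or offset, it rewards consistent ordering of patients by volume, which is what matters for epidemiological correlation analyses.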
Collapse
Affiliation(s)
- Mohamad Habes
- Institute for Community Medicine, Ernst Moritz Arndt University of Greifswald, Greifswald, Germany.
Collapse
|
44
|
A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2012] [Revised: 02/05/2013] [Accepted: 04/01/2013] [Indexed: 11/21/2022]
|
45
|
Turkbey B, Huang R, Vourganti S, Trivedi H, Bernardo M, Yan P, Benjamin C, Pinto PA, Choyke PL. Age-related changes in prostate zonal volumes as measured by high-resolution magnetic resonance imaging (MRI): a cross-sectional study in over 500 patients. BJU Int 2012; 110:1642-7. [PMID: 22973825 PMCID: PMC3816371 DOI: 10.1111/j.1464-410x.2012.11469.x] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
UNLABELLED Study Type--Diagnosis (case series). Level of Evidence 4. What's known on the subject? and What does the study add? Benign prostatic hyperplasia is the most common symptomatic disorder of the prostate and its severity varies greatly in the population. Various methods have been used to estimate prostate volumes in the past, including the digital rectal examination and ultrasound measurements. High-resolution T2-weighted MRI can provide accurate measurements of zonal and total volumes, which can be used to better understand the etiology of lower urinary tract symptoms in men. OBJECTIVE • To use the ability of magnetic resonance imaging (MRI) to investigate age-related changes in zonal prostate volumes. PATIENTS AND METHODS • This Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant study consisted of 503 patients who underwent 3 T prostate MRI before any treatment for prostate cancer. • Whole prostate (WP) and central gland (CG) volumes were manually contoured on T2-weighted MRI using a semi-automated segmentation tool. WP, CG, and peripheral zone (PZ) volumes were measured for each patient. • WP, CG, and PZ volumes were correlated with age, serum prostate-specific antigen (PSA) level, International Prostate Symptom Score (IPSS), and Sexual Health Inventory for Men (SHIM) scores. RESULTS • Linear regression analysis showed positive correlations between WP and CG volumes and patient age (P < 0.001); there was no correlation between age and PZ volume (P = 0.173). • There was a positive correlation between WP and CG volumes and serum PSA level (P < 0.001), as well as between PZ volume and serum PSA level (P = 0.002). • On logistic regression analysis, IPSS positively correlated with WP and CG volumes (P < 0.001). • SHIM positively correlated with WP (P = 0.015) and CG (P = 0.023) volumes. • As expected, the IPSS of patients with prostate volumes (WP, CG) in the first decile for age were significantly lower than those in the tenth decile.
CONCLUSIONS • Prostate MRI is able to document age-related changes in prostate zonal volumes. • Changes in WP and CG volumes correlated inversely with changes in lower urinary tract symptoms. • These findings suggest a role for MRI in measuring accurate prostate zonal volumes and have interesting implications for the study of age-related changes in the prostate.
Collapse
Affiliation(s)
- Baris Turkbey
- Molecular Imaging Program Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892-1088, USA
Collapse
|
46
|
Yang M, Li X, Turkbey B, Choyke PL, Yan P. Prostate segmentation in MR images using discriminant boundary features. IEEE Trans Biomed Eng 2012. [PMID: 23192474 DOI: 10.1109/tbme.2012.2228644] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images is increasingly needed to assist the diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale-invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified with the location of the point of interest. To address this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to the locations. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images were conducted to verify the efficiency of the proposed algorithms.
Collapse
Affiliation(s)
- Meijuan Yang
- Center for OPTical IMagery Analysis and Learning, State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China.
Collapse
|
47
|
Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2012; 108:262-287. [PMID: 22739209 DOI: 10.1016/j.cmpb.2012.04.006] [Citation(s) in RCA: 108] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Revised: 04/17/2012] [Accepted: 04/17/2012] [Indexed: 06/01/2023]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts such as shadows pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images, whereas in magnetic resonance (MR) images, superior soft tissue contrast highlights large variability in shape, size and texture inside the prostate. In contrast, poor soft tissue contrast between the prostate and surrounding tissues in computed tomography (CT) images poses a challenge to accurate prostate segmentation. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that allows us first to group the algorithms and then to point out the main advantages and drawbacks of each strategy. We provide a comprehensive description of the existing methods in all three modalities, highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided, together with a quantitative comparison of the results as reported in the literature.
Collapse
Affiliation(s)
- Soumya Ghose
- Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain.
Collapse
|
48
|
Zhang S, Zhan Y, Metaxas DN. Deformable segmentation via sparse representation and dictionary learning. Med Image Anal 2012; 16:1385-96. [PMID: 22959839 DOI: 10.1016/j.media.2012.07.007] [Citation(s) in RCA: 132] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2012] [Revised: 07/04/2012] [Accepted: 07/27/2012] [Indexed: 11/26/2022]
|
49
|
Shi L, Liu W, Zhang H, Xie Y, Wang D. A survey of GPU-based medical image computing techniques. Quant Imaging Med Surg 2012; 2:188-206. [PMID: 23256080 PMCID: PMC3496509 DOI: 10.3978/j.issn.2223-4292.2012.08.02] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2012] [Accepted: 08/08/2012] [Indexed: 11/14/2022]
Abstract
Medical imaging currently plays a crucial role throughout clinical practice, from scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) datasets that must be processed in practical clinical applications. With the rapidly increasing performance of graphics processors, improved programming support and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing applications in three areas of medical image processing, namely segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine.
Collapse
Affiliation(s)
- Lin Shi
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
| | - Wen Liu
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
| | - Heye Zhang
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
| | - Yongming Xie
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
| | - Defeng Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China
| |
Collapse
|
50
|
Ghose S, Mitra J, Oliver A, Martí R, Lladó X, Freixenet J, Vilanova JC, Comet J, Sidibé D, Meriaudeau F. Spectral clustering of shape and probability prior models for automatic prostate segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2012; 2012:2335-2338. [PMID: 23366392 DOI: 10.1109/embc.2012.6346431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Imaging artifacts in transrectal ultrasound (TRUS) images and inter-patient variations in prostate shape and size challenge computer-aided automatic or semi-automatic segmentation of the prostate. In this paper, we propose to use multiple mean parametric models derived from principal component analysis (PCA) of shape and posterior probability information to segment the prostate. In contrast to traditional statistical models of shape and intensity priors, we use the posterior probability of the prostate region, determined from random forest classification, to build, initialize and propagate our model. Multiple mean models derived from spectral clustering of combined shape and appearance parameters ensure improved segmentation accuracy. The proposed method achieves a mean Dice similarity coefficient (DSC) of 0.96±0.01, with a mean segmentation time of 0.67±0.02 seconds, when validated with 46 images from 23 datasets in a leave-one-patient-out validation framework.
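The core of the parametric modeling described above, PCA over aligned landmark sets to obtain a mean shape plus principal modes of variation, can be sketched as follows. This is a generic point-distribution-model sketch, not the authors' implementation; the toy contours, function names and number of modes are illustrative assumptions.

```python
import numpy as np

def fit_pca_shape_model(shapes, n_modes=2):
    """Fit a point-distribution model: mean shape + principal variation modes.

    shapes: (n_samples, 2 * n_points) array of flattened 2-D landmark sets,
            assumed already aligned (e.g. by Procrustes analysis).
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the eigenvectors of the covariance matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                           # principal modes of variation
    variances = (s[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, variances

def reconstruct(mean, modes, coeffs):
    """Generate a plausible shape as mean + weighted sum of the modes."""
    return mean + coeffs @ modes

# Toy training data: 20 noisy elliptical contours of 32 landmarks each,
# standing in for aligned prostate boundaries.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
shapes = np.stack([
    np.column_stack([(1 + 0.2 * rng.standard_normal()) * np.cos(angles),
                     (1 + 0.2 * rng.standard_normal()) * np.sin(angles)]).ravel()
    for _ in range(20)
])
mean, modes, variances = fit_pca_shape_model(shapes, n_modes=2)
new_shape = reconstruct(mean, modes, np.array([0.5, -0.5]))
print(new_shape.shape)  # (64,)
```

In the paper's full pipeline these parametric models are built from both shape and posterior-probability information and grouped by spectral clustering into multiple mean models; the sketch above covers only the shared PCA machinery.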
Collapse
Affiliation(s)
- S Ghose
- Le2i CNRS-UMR 6306, Université de Bourgogne, Le Creusot, France.
Collapse
|