1
Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound Med Biol 2025; 51:189-209. [PMID: 39551652] [DOI: 10.1016/j.ultrasmedbio.2024.10.005]
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, and early diagnosis is crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To provide physicians with more accurate and efficient computer-assisted diagnosis and interventions, many image processing algorithms for TRUS have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades calls for a comprehensive summary. This survey therefore provides a narrative review of the field, outlining the evolution of image processing methods in the context of TRUS image analysis and highlighting their main contributions. It also discusses current challenges and suggests future research directions that may advance the field further.
Affiliations
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China.
2
Ma J, Kong D, Wu F, Bao L, Yuan J, Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med 2024; 168:107725. [PMID: 38006827] [DOI: 10.1016/j.compbiomed.2023.107725]
Abstract
Delineating lesion boundaries plays a central role in diagnosing thyroid and breast cancers, planning related therapy, and evaluating therapeutic effects. However, manually annotating low-quality ultrasound (US) images is time-consuming and error-prone, with limited reproducibility, given the heavy speckle noise, heterogeneous appearance, and ambiguous boundaries, especially for nodular lesions with large intra-class variance. Accurate lesion segmentation from US images is therefore desirable but challenging in clinical practice. In this study, we propose a new densely connected convolutional network architecture (MDenseNet) to automatically segment nodular lesions from 2D US images, which is first pre-trained on the ImageNet database (PMDenseNet) and then retrained on the given US image datasets. We also design a deeper MDenseNet with a pre-training strategy (PDMDenseNet) for segmentation of thyroid and breast nodules by adding a dense block to increase the depth of MDenseNet. Extensive experiments demonstrate that the proposed MDenseNet-based method can accurately extract multiple nodular lesions, even with complex shapes, from input thyroid and breast US images. Additional experiments show that the method also outperforms three state-of-the-art convolutional neural networks in terms of accuracy and reproducibility. The promising results in nodular lesion segmentation from thyroid and breast US images illustrate its potential for many other clinical segmentation tasks.
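The densely connected design described in this abstract can be illustrated with a minimal PyTorch sketch: each layer receives the concatenation of all preceding feature maps in the block. This is not the authors' MDenseNet; the growth rate, layer count, and channel sizes below are assumptions for illustration only.

```python
# Minimal dense-block sketch in PyTorch (illustrative only; growth rate,
# number of layers, and channel counts are assumptions, not the paper's values).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Each layer sees the concatenation of all previous outputs.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a single-channel ultrasound patch passed through one dense block.
block = DenseBlock(in_channels=1)
y = block(torch.randn(2, 1, 64, 64))  # -> shape (2, 1 + 4*16, 64, 64)
```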
Affiliations
- Jinlian Ma
- School of Integrated Circuits, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Fa Wu
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Lingyun Bao
- Department of Ultrasound, Hangzhou First People's Hospital, Zhejiang University, Hangzhou, China
- Jing Yuan
- School of Mathematics and Statistics, Xidian University, China
- Yusheng Liu
- State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
3
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022; 82:102572. [PMID: 36055051] [DOI: 10.1016/j.media.2022.102572]
Abstract
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architecture based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging because of the varying sizes, shapes, appearances, and densities of tumors caused by the high heterogeneity of breast cancer, and because of the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetics prior and feature refinement to generate sufficiently informative features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior, expressed by the time intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss. It contains prior knowledge of contrast-agent kinetic heterogeneity, which is important for optimizing our model parameters. In addition, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from the spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach outperforms recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released at https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
4
Gofer S, Haik O, Bardin R, Gilboa Y, Perlman S. Machine Learning Algorithms for Classification of First-Trimester Fetal Brain Ultrasound Images. J Ultrasound Med 2022; 41:1773-1779. [PMID: 34710247] [DOI: 10.1002/jum.15860]
Abstract
OBJECTIVE To evaluate the feasibility of machine learning (ML) tools for segmenting and classifying first-trimester fetal brain ultrasound images. METHODS Two image segmentation methods processed high-resolution fetal brain images obtained during the nuchal translucency scan: "Statistical Region Merging" (SRM) and "Trainable Weka Segmentation" (TWS), with training and testing sets in the latter. Measurement of the fetal cerebral cortex in original and processed images served to evaluate the performance of the algorithms. Mean absolute percentage error (MAPE) was used as an accuracy index of the segmentation processing. RESULTS The SRM plugin revealed a total MAPE of 1.71% ± 1.62 SD (standard deviation) and a MAPE of 1.4% ± 1.32 SD and 2.72% ± 2.21 SD for the normal and increased NT groups, respectively. The TWS plugin displayed a MAPE of 1.71% ± 0.59 SD (testing set). There were no significant differences between the training and testing sets after 5-fold cross-validation. The images obtained from normal NT fetuses and increased NT fetuses revealed a MAPE of 1.52% ± 1.02 SD and 2.63% ± 1.98 SD. CONCLUSIONS Our study demonstrates the feasibility of using ML algorithms to classify first-trimester fetal brain ultrasound images and lay the foundation for earlier diagnosis of fetal brain abnormalities.
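For reference, the accuracy index used above (MAPE between manual and algorithm-derived measurements) can be computed as in the following sketch; the array values are placeholders, not study data.

```python
# Mean absolute percentage error (MAPE) between manual and algorithm-derived
# measurements; the numbers below are placeholders, not study data.
import numpy as np

def mape(reference, estimate) -> float:
    """Return MAPE in percent, averaged over all measurements."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean(np.abs(reference - estimate) / np.abs(reference)) * 100.0)

manual = np.array([2.10, 2.35, 1.98, 2.50])     # e.g., cortical measurements (mm)
segmented = np.array([2.05, 2.40, 2.02, 2.44])  # measurements on processed images
print(f"MAPE = {mape(manual, segmented):.2f}%")
```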
Affiliations
- Stav Gofer
- Ultrasound Unit, The Helen Schneider Women's Hospital, Rabin Medical Center, Petach Tikva, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Ron Bardin
- Ultrasound Unit, The Helen Schneider Women's Hospital, Rabin Medical Center, Petach Tikva, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Yinon Gilboa
- Ultrasound Unit, The Helen Schneider Women's Hospital, Rabin Medical Center, Petach Tikva, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sharon Perlman
- Ultrasound Unit, The Helen Schneider Women's Hospital, Rabin Medical Center, Petach Tikva, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
5
Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Shadow-Consistent Semi-Supervised Learning for Prostate Ultrasound Segmentation. IEEE Trans Med Imaging 2022; 41:1331-1345. [PMID: 34971530] [PMCID: PMC9709821] [DOI: 10.1109/tmi.2021.3139999]
Abstract
Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, yet it remains a long-standing problem because of the low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Specifically, Shadow-AUG enriches the training samples by adding simulated shadow artifacts to the images, making the network robust to shadow patterns. Shadow-DROP forces the segmentation network to infer the prostate boundary from the neighboring shadow-free pixels. Extensive experiments are conducted on two large clinical datasets (a public dataset containing 1,761 TRUS volumes and an in-house dataset containing 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with our Shadow-AUG and Shadow-DROP outperforms the state of the art with statistical significance. In the semi-supervised setting, even with only 20% labeled training data, our SCO-SSL method still achieves highly competitive performance, suggesting great clinical value in relieving the labor of data annotation. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
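The shadow-augmentation idea (adding simulated acoustic shadows to training images) might be sketched as follows. This is an illustrative approximation only, not the released Shadow-AUG implementation; the wedge geometry, attenuation profile, and random ranges are assumptions.

```python
# Illustrative shadow augmentation for a 2D B-mode image: attenuate intensities
# inside a randomly placed, depth-wise widening wedge to mimic an acoustic shadow.
# Sketch of the idea only, not the SCO-SSL code.
import numpy as np

def add_simulated_shadow(image, strength=0.8, rng=None):
    rng = rng or np.random.default_rng()
    h, w = image.shape
    center = rng.integers(low=w // 4, high=3 * w // 4)    # lateral shadow position
    half_width = rng.integers(low=2, high=max(3, w // 10))
    start_row = rng.integers(low=0, high=h // 2)          # depth where shadow begins

    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    # Wedge widens slightly with depth; attenuation ramps up below start_row.
    widen = (rows - start_row).clip(min=0) * 0.05
    in_wedge = np.abs(cols - center) <= (half_width + widen)
    depth_ramp = ((rows - start_row) / max(1, h - start_row)).clip(0, 1)
    attenuation = 1.0 - strength * depth_ramp * in_wedge
    return image * attenuation

b_mode = np.random.rand(256, 256).astype(np.float32)  # placeholder B-mode frame
augmented = add_simulated_shadow(b_mode)
```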
6
Peng T, Tang C, Wu Y, Cai J. H-SegMed: A Hybrid Method for Prostate Segmentation in TRUS Images via Improved Closed Principal Curve and Improved Enhanced Machine Learning. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01619-3]
7
Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Polar transform network for prostate ultrasound segmentation with uncertainty estimation. Med Image Anal 2022; 78:102418. [PMID: 35349838] [PMCID: PMC9082929] [DOI: 10.1016/j.media.2022.102418]
Abstract
Automatic and accurate prostate ultrasound segmentation is a long-standing and challenging problem due to severe noise and ambiguous or missing prostate boundaries. In this work, we propose a novel polar transform network (PTN) to handle this problem from a fundamentally new perspective, in which the prostate is represented and segmented in the polar coordinate space rather than the original image grid space. This new representation gives the prostate volume, especially the most challenging apex and base sub-areas, much denser sampling than the background and thus facilitates the learning of discriminative features for accurate prostate segmentation. Moreover, in the polar representation, the prostate surface can be efficiently parameterized using a 2D surface radius map with respect to a centroid coordinate, which allows the proposed PTN to obtain superior accuracy compared with its convolutional-neural-network counterparts while having significantly fewer (18%-41%) trainable parameters. We also equip our PTN with a novel strategy of centroid perturbed test-time augmentation (CPTTA), which is designed to further improve the segmentation accuracy and quantitatively assess the model uncertainty at the same time. The uncertainty estimation function provides valuable feedback to clinicians when manual modifications or approvals are required for a segmentation, substantially improving the clinical significance of our work. We conduct a three-fold cross-validation on a clinical dataset consisting of 315 transrectal ultrasound (TRUS) images to comprehensively evaluate the performance of the proposed method. The experimental results show that our proposed PTN with CPTTA outperforms the state-of-the-art methods with statistical significance on most metrics while exhibiting a much smaller model size. Source code of the proposed PTN is released at https://github.com/DIAL-RPI/PTN.
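The core polar re-sampling step can be illustrated with a short 2D sketch (the actual PTN operates on 3D TRUS volumes and learns the segmentation in that space; the number of radial and angular samples below are arbitrary choices, not the paper's).

```python
# Resample a 2D image onto a polar grid (radius x angle) around a centroid.
# Illustrative only; sample counts and interpolation order are assumptions.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_radii=64, n_angles=128):
    cy, cx = center
    max_radius = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx)
    radii = np.linspace(0, max_radius, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + r * np.sin(a)
    cols = cx + r * np.cos(a)
    # Bilinear interpolation of the Cartesian image at the polar sample points.
    return map_coordinates(image, [rows, cols], order=1)

img = np.random.rand(128, 128)               # placeholder ultrasound slice
polar_img = to_polar(img, center=(64, 64))   # shape (n_radii, n_angles)
```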
Affiliations
- Xuanang Xu
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Thomas Sanford
- Department of Urology, The State University of New York Upstate Medical University, Syracuse, NY 13210, USA
- Baris Turkbey
- Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sheng Xu
- Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood
- Center for Interventional Oncology, Radiology & Imaging Sciences at National Institutes of Health, Bethesda, MD 20892, USA
- Pingkun Yan
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
8
Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10179-4]
9
Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031390]
Abstract
Estimation of the prostate volume with ultrasound offers many advantages such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. Because experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, we propose an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, the system detects four diameter endpoints in the transverse and two diameter endpoints in the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method to address characteristic problems of AUS images. We formed a novel prostate AUS dataset from 305 patients with both transverse and sagittal planes; the dataset includes MRI images for 75 of these patients, and at least one expert manually marked all the data. Extensive experiments performed on this dataset showed that the proposed system's estimates fell within the range of the experts' volume estimations, and that the system can be used in clinical practice.
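For reference, the standard ellipsoid formula that the detected diameter endpoints feed into is simply V = (pi/6) x length x width x height, as in this sketch (the example diameters are made up).

```python
# Prostate volume from three orthogonal diameters via the standard ellipsoid
# formula V = (pi/6) * L * W * H. Example values are illustrative only.
import math

def ellipsoid_volume(length_cm: float, width_cm: float, height_cm: float) -> float:
    """Return volume in millilitres (1 cm^3 == 1 mL)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

# Diameters measured on the sagittal (length) and transverse (width, height) planes.
print(f"{ellipsoid_volume(4.1, 4.8, 3.6):.1f} mL")
```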
10
Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. [Translated article] Artificial intelligence in dermatology: A threat or an opportunity? Actas Dermosifiliogr 2022. [DOI: 10.1016/j.ad.2021.07.014]
11
Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. Inteligencia artificial en dermatología: ¿amenaza u oportunidad? [Artificial intelligence in dermatology: A threat or an opportunity?] Actas Dermosifiliogr 2022; 113:30-46. [DOI: 10.1016/j.ad.2021.07.003]
12
Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. Artificial intelligence in dermatology: A threat or an opportunity? Actas Dermosifiliogr 2021. [DOI: 10.1016/j.adengl.2021.11.007]
13
Boundary Restored Network for Subpleural Pulmonary Lesion Segmentation on Ultrasound Images at Local and Global Scales. J Digit Imaging 2021; 33:1155-1166. [PMID: 32556913] [DOI: 10.1007/s10278-020-00356-8]
Abstract
To evaluate the application of machine learning for the detection of subpleural pulmonary lesions (SPLs) in ultrasound (US) scans, we propose a novel boundary-restored network (BRN) for automated SPL segmentation to avoid issues associated with manual SPL segmentation (subjectivity, manual segmentation errors, and high time consumption). In total, 1612 ultrasound slices from 255 patients in which SPLs were visually present were exported. The segmentation performance of the neural network based on the Dice similarity coefficient (DSC), Matthews correlation coefficient (MCC), Jaccard similarity metric (Jaccard), Average Symmetric Surface Distance (ASSD), and Maximum symmetric surface distance (MSSD) was assessed. Our dual-stage boundary-restored network (BRN) outperformed existing segmentation methods (U-Net and a fully convolutional network (FCN)) for the segmentation accuracy parameters including DSC (83.45 ± 16.60%), MCC (0.8330 ± 0.1626), Jaccard (0.7391 ± 0.1770), ASSD (5.68 ± 2.70 mm), and MSSD (15.61 ± 6.07 mm). It also outperformed the original BRN in terms of the DSC by almost 5%. Our results suggest that deep learning algorithms aid fully automated SPL segmentation in patients with SPLs. Further improvement of this technology might improve the specificity of lung cancer screening efforts and could lead to new applications of lung US imaging.
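The overlap metrics reported above can be computed from binary masks as in the following sketch (DSC, Jaccard, and MCC only; the surface-distance metrics require contour or mesh extraction and are omitted).

```python
# Dice, Jaccard and Matthews correlation coefficient for binary segmentation
# masks. Illustrative helper, not the paper's evaluation code.
import numpy as np

def overlap_metrics(pred, gt):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return {"DSC": dice, "Jaccard": jaccard, "MCC": mcc}

pred = np.zeros((64, 64), dtype=np.uint8); pred[20:45, 20:45] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[22:48, 18:44] = 1
print(overlap_metrics(pred, gt))
```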
14
A Prostate MRI Segmentation Tool Based on Active Contour Models Using a Gradient Vector Flow. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186163]
Abstract
Medical support systems that assist in the diagnosis of prostate lesions, generally based on prostate segmentation, are a major focus of interest in the recent literature. The main problem encountered in the diagnosis of a prostate study is the localization of the regions of interest (ROIs) containing tumor tissue. In this paper, a new GUI tool based on semi-automatic prostate segmentation is presented. The main rationale behind this tool, and the focus of this article, is to facilitate the time-consuming segmentation process used for annotating images in clinical practice, enabling radiologists to use a novel and easy-to-use semi-automatic segmentation technique instead of manual segmentation. A detailed specification of the proposed segmentation algorithm, based on Active Contour Models (ACM) aided by a Gradient Vector Flow (GVF) component, is given. The purpose is to support the manual segmentation of the main ROIs of the prostate gland zones. Finally, an experimental use case and a discussion of the results are presented.
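The GVF field that drives such an active contour can be computed iteratively from an edge map, as in this minimal sketch of the classic GVF diffusion scheme. The regularization weight, step size, and iteration count are illustrative choices, not the tool's settings.

```python
# Minimal gradient vector flow (GVF) computation on a 2D edge map, following
# the classic diffusion update u_t = mu * lap(u) - (u - fx) * (fx^2 + fy^2).
# Illustrative parameters; not the segmentation tool's implementation.
import numpy as np
from scipy.ndimage import laplace

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200, dt=0.5):
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(n_iter):
        u = u + dt * (mu * laplace(u) - (u - fx) * mag2)
        v = v + dt * (mu * laplace(v) - (v - fy) * mag2)
    # The (u, v) field extends edge gradients into homogeneous regions and can
    # be used as the external force of an active contour.
    return u, v

edge_map = np.zeros((100, 100)); edge_map[40:60, 40:60] = 1.0  # toy edge map
u, v = gradient_vector_flow(edge_map)
```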
15
Bi H, Jiang Y, Tang H, Yang G, Shu H, Dillenseger JL. Fast and accurate segmentation method of active shape model with Rayleigh mixture model clustering for prostate ultrasound images. Comput Methods Programs Biomed 2020; 184:105097. [PMID: 31634807] [DOI: 10.1016/j.cmpb.2019.105097]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer interventions, which need an accurate prostate segmentation, are performed under ultrasound imaging guidance. However, prostate ultrasound segmentation faces two challenges. The first is the low signal-to-noise ratio and inhomogeneity of the ultrasound image. The second is the non-standardized shape and size of the prostate. METHODS For prostate ultrasound image segmentation, this paper proposes an accurate and efficient method combining an active shape model (ASM) with Rayleigh mixture model clustering (ASM-RMMC). First, a Rayleigh mixture model (RMM) is adopted to cluster image regions that present similar speckle distributions. These content-based clustered images are then used to initialize and guide the deformation of an ASM model. RESULTS The performance of the proposed method is assessed on 30 prostate ultrasound images using four metrics: mean average distance (MAD), Dice similarity coefficient (DSC), false positive error (FPE) and false negative error (FNE). The proposed ASM-RMMC reaches high segmentation accuracy with 95% ± 0.81% for DSC, 1.86 ± 0.02 pixels for MAD, 2.10% ± 0.36% for FPE and 2.78% ± 0.71% for FNE. Moreover, the average segmentation time is less than 8 s per prostate ultrasound image. CONCLUSIONS This paper presents a method for prostate ultrasound image segmentation that achieves high accuracy with low computational complexity and meets clinical requirements.
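The RMM clustering step might look like the following EM sketch for a two-component Rayleigh mixture fitted to pixel amplitudes. This is a toy 1D version under assumed initialization and component count; the paper's model, initialization, and stopping rule may differ.

```python
# EM for a K-component Rayleigh mixture on speckle amplitudes.
# Rayleigh pdf: p(x | s) = (x / s^2) * exp(-x^2 / (2 s^2)).
# Illustrative sketch only; K, initialization and iteration count are assumptions.
import numpy as np

def rayleigh_pdf(x, sigma):
    return (x / sigma ** 2) * np.exp(-x ** 2 / (2 * sigma ** 2))

def fit_rayleigh_mixture(x, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.full(k, 1.0 / k)
    sigmas = rng.uniform(0.5, 2.0, size=k) * np.sqrt(np.mean(x ** 2) / 2)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        resp = weights[None, :] * rayleigh_pdf(x[:, None], sigmas[None, :])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for mixture weights and Rayleigh scales.
        nk = resp.sum(axis=0)
        weights = nk / x.size
        sigmas = np.sqrt((resp * x[:, None] ** 2).sum(axis=0) / (2 * nk))
    return weights, sigmas

# Toy amplitudes drawn from two Rayleigh populations (two speckle regions).
rng = np.random.default_rng(1)
samples = np.concatenate([rng.rayleigh(1.0, 2000), rng.rayleigh(3.0, 2000)])
print(fit_rayleigh_mixture(samples))
```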
Affiliations
- Hui Bi
- Changzhou University, Changzhou, China
- Yibo Jiang
- Changzhou Institute of Technology, Changzhou, China
- Hui Tang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Guanyu Yang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Huazhong Shu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China.
- Jean-Louis Dillenseger
- Centre de Recherche en Information Biomédicale sino-français (CRIBs), Nanjing, China; Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
16
Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Trans Med Imaging 2019; 38:2768-2778. [PMID: 31021793] [DOI: 10.1109/tmi.2019.2913184]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module uses the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at the shallow layers of the CNN and incorporating more prostate detail into the features at the deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy for aggregating multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
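A stripped-down version of the attention idea (using integrated multi-level context to re-weight a single layer's features) could be sketched as follows in PyTorch. It is a conceptual illustration, not the released DAF3D module, and all channel sizes and layer choices are assumptions.

```python
# Sketch of an attention module that refines one layer's features using
# multi-level context; channels and architecture are illustrative, not DAF3D's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveRefinement(nn.Module):
    def __init__(self, single_ch: int, multi_ch: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv3d(single_ch + multi_ch, single_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(single_ch, single_ch, kernel_size=1),
            nn.Sigmoid(),  # per-voxel, per-channel attention weights in [0, 1]
        )

    def forward(self, single_layer_feat, multi_level_feat):
        # Resize the integrated multi-level features to this layer's resolution.
        context = F.interpolate(multi_level_feat, size=single_layer_feat.shape[2:],
                                mode="trilinear", align_corners=False)
        weights = self.attn(torch.cat([single_layer_feat, context], dim=1))
        return single_layer_feat * weights  # emphasize prostate-relevant responses

refine = AttentiveRefinement(single_ch=32, multi_ch=64)
f_single = torch.randn(1, 32, 16, 32, 32)   # features from one CNN layer
f_multi = torch.randn(1, 64, 8, 16, 16)     # integrated multi-level features
refined = refine(f_single, f_multi)
```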
17
Hu R, Singla R, Deeba F, Rohling RN. Acoustic Shadow Detection: Study and Statistics of B-Mode and Radiofrequency Data. Ultrasound Med Biol 2019; 45:2248-2257. [PMID: 31101443] [DOI: 10.1016/j.ultrasmedbio.2019.04.001]
Abstract
An acoustic shadow is an ultrasound artifact occurring at boundaries between significantly different tissue impedances, resulting in signal loss and a dark appearance. Shadow detection is important as shadows can identify anatomical features or obscure regions of interest. A study was performed to scan human participants (N = 37) specifically to explore the statistical characteristics of various shadows from different anatomy and with different transducers. Differences in shadow statistics were observed and used for shadow detection algorithms with a fitted Nakagami distribution on radiofrequency (RF) speckle or cumulative entropy on brightness-mode (B-mode) data. The fitted Nakagami parameter and entropy values in shadows were consistent across different transducers and anatomy. Both algorithms utilized adaptive thresholding, needing only the transducer pulse length as an input parameter for easy utilization by different operators or equipment. Mean Dice coefficients (± standard deviation) of 0.90 ± 0.07 and 0.87 ± 0.08 were obtained for the RF and B-mode algorithms, which is within the range of manual annotators. The high accuracy in different imaging scenarios indicates that the shadows can be detected with high versatility and without expert configuration. The understanding of shadow statistics can be used for more specialized techniques to be developed for specific applications in the future, including pre-processing for machine learning and automatic interpretation.
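The moment-based Nakagami shape estimate on envelope samples, the core quantity behind the RF-based detection described above, can be sketched as follows. The window size is a placeholder, and no decision threshold is shown; the paper derives its thresholds adaptively from the transducer pulse length.

```python
# Moment-based Nakagami parameter estimation on envelope samples within a
# sliding window, as a sketch of RF-based shadow characterization.
# Window size is an illustrative placeholder.
import numpy as np

def nakagami_shape(envelope):
    """Inverse normalized variance estimator: m = E[x^2]^2 / Var(x^2)."""
    x2 = envelope.astype(float) ** 2
    return float(np.mean(x2) ** 2 / np.var(x2))

def shape_map(envelope_image, win=16):
    h, w = envelope_image.shape
    out = np.zeros((h // win, w // win))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = envelope_image[i * win:(i + 1) * win, j * win:(j + 1) * win]
            out[i, j] = nakagami_shape(patch)
    # Per-window shape estimates; a detector would threshold these adaptively.
    return out

envelope = np.abs(np.random.randn(256, 128))  # placeholder RF envelope data
m_map = shape_map(envelope)
```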
Affiliations
- Ricky Hu
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada.
- Rohit Singla
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Farah Deeba
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Robert N Rohling
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada; Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
18
Karimi D, Zeng Q, Mathur P, Avinash A, Mahdavi S, Spadinger I, Abolmaesumi P, Salcudean SE. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med Image Anal 2019; 57:186-196. [PMID: 31325722] [DOI: 10.1016/j.media.2019.07.005]
Abstract
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Secondly, we train a CNN ensemble and use the disagreement among this ensemble to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing the prior shape information in the form of a statistical shape model. Our method achieves Hausdorff distance of 2.7 ± 2.3 mm and Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of committing large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimation of prediction uncertainty in deep learning models. Our study demonstrates that estimation of model uncertainty and use of prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
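The ensemble-disagreement idea for flagging uncertain segmentations might be sketched as below: average the ensemble's probability maps to obtain the segmentation, and use the per-pixel variance across ensemble members as an uncertainty estimate. The models, thresholds, and variable names here are placeholders, not the paper's implementation.

```python
# Ensemble disagreement as a segmentation uncertainty estimate (sketch).
# `models` would be a list of trained CNNs; here it is a placeholder.
import torch

@torch.no_grad()
def ensemble_predict(models, image):
    probs = torch.stack([torch.sigmoid(m(image)) for m in models], dim=0)
    mean_prob = probs.mean(dim=0)      # consensus segmentation probability
    uncertainty = probs.var(dim=0)     # per-pixel disagreement across the ensemble
    return mean_prob, uncertainty

# Usage sketch (assuming `models` holds K trained networks and `trus` is an
# image batch): flag cases whose mean uncertainty exceeds a chosen threshold
# for review or shape-prior-based refinement.
# mean_prob, unc = ensemble_predict(models, trus)
# needs_review = unc.mean() > 0.05  # threshold is an arbitrary placeholder
```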
Affiliations
- Davood Karimi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
- Qi Zeng
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Prateek Mathur
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Apeksha Avinash
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
19
Abstract
Radiomics and radiogenomics are attractive research topics in prostate cancer. Radiomics mainly focuses on extraction of quantitative information from medical imaging, whereas radiogenomics aims to correlate these imaging features to genomic data. The purpose of this review is to provide a brief overview summarizing recent progress in the application of radiomics-based approaches in prostate cancer and to discuss the potential role of radiogenomics in prostate cancer.
20
Brattain LJ, Telfer BA, Dhyani M, Grajo JR, Samir AE. Machine learning for medical ultrasound: status, methods, and future opportunities. Abdom Radiol (NY) 2018; 43:786-799. [PMID: 29492605] [PMCID: PMC5886811] [DOI: 10.1007/s00261-018-1517-0]
Abstract
Ultrasound (US) imaging is the most commonly performed cross-sectional diagnostic imaging modality in the practice of medicine. It is low-cost, non-ionizing, portable, and capable of real-time image acquisition and display. US is a rapidly evolving technology with significant challenges and opportunities. Challenges include high inter- and intra-operator variability and limited image quality control. Tremendous opportunities have arisen in the last decade as a result of exponential growth in available computational power coupled with progressive miniaturization of US devices. As US devices become smaller, enhanced computational capability can contribute significantly to decreasing variability through advanced image processing. In this paper, we review leading machine learning (ML) approaches and research directions in US, with an emphasis on recent ML advances. We also present our outlook on future opportunities for ML techniques to further improve clinical workflow and US-based disease diagnosis and characterization.
Affiliations
- Brian A Telfer
- MIT Lincoln Laboratory, 244 Wood St, Lexington, MA, 02420, USA
- Manish Dhyani
- Department of Internal Medicine, Steward Carney Hospital, Boston, MA, 02124, USA
- Division of Ultrasound, Department of Radiology, Center for Ultrasound Research & Translation, Massachusetts General Hospital, Boston, MA, 02114, USA
- Joseph R Grajo
- Department of Radiology, Division of Abdominal Imaging, University of Florida College of Medicine, Gainesville, FL, USA
- Anthony E Samir
- Division of Ultrasound, Department of Radiology, Center for Ultrasound Research & Translation, Massachusetts General Hospital, Boston, MA, 02114, USA
21
Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:331-342. [PMID: 29330658] [DOI: 10.1007/s11548-018-1703-0]
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor surgery. Tumor segmentation in iUS images is a difficult task and still under development because of the low signal-to-noise ratio. The success of automatic methods is also limited by their high noise sensitivity. Therefore, an alternative brain tumor segmentation method for 3D-iUS data, using a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration, is presented in this paper. The aim is to enhance the visualization of brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissue, are extracted from the ROI of both modalities using automatic thresholding techniques. Third, the registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values and rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data and its contours are displayed. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches. The outcomes showed that our approach reached better results in terms of computational time and accuracy than the comparison methods. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could be useful for brain tumor resection, supporting neurosurgeons to improve tumor border visualization in the iUS volumes.
Affiliations
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico.
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
22
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890] [DOI: 10.1016/j.compbiomed.2017.11.018]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then insight into the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed, because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performance of a few specific applications is compared. In conclusion, future perspectives for B-mode-based segmentation, such as the integration of RF information, the use of higher-frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data, are discussed.
Affiliations
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy.
23
Ghose S, Greer PB, Sun J, Pichler P, Rivest-Henault D, Mitra J, Richardson H, Wratten C, Martin J, Arm J, Best L, Dowling JA. Regression and statistical shape model based substitute CT generation for MRI alone external beam radiation therapy from standard clinical MRI sequences. Phys Med Biol 2017; 62:8566-8580. [PMID: 28976369] [DOI: 10.1088/1361-6560/aa9104]
Abstract
In MR only radiation therapy planning, generation of the tissue specific HU map directly from the MRI would eliminate the need of CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2 weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue specific non-linear regression models to predict HU for the fat, muscle, bladder and air were created from co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most 'similar' to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was found to be [Formula: see text] (mean ± standard deviation) for 39 patients. The 3D Gamma pass rate was [Formula: see text] (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.
Affiliations
- Soumya Ghose
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
24
Li X, Li C, Fedorov A, Kapur T, Yang X. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges. Med Phys 2017; 43:3090-3103. [PMID: 27277056] [DOI: 10.1118/1.4950721]
Abstract
PURPOSE In this paper, the authors propose a novel efficient method to segment ultrasound images of the prostate with weak boundaries. Segmentation of the prostate from ultrasound images with weak boundaries arises widely in clinical applications; one of the most typical examples is the diagnosis and treatment of prostate cancer. Accurate segmentation of the prostate boundaries from ultrasound images plays an important role in many prostate-related applications such as the accurate placement of biopsy needles, the assignment of the appropriate therapy in cancer treatment, and the measurement of the prostate volume. METHODS Ultrasound images of the prostate are usually corrupted with intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation of the prostate an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term in the modified level set energy functional. The active band term deals with intensity inhomogeneities, and the edge descriptor term captures the weak boundaries or rules out unwanted boundaries. The level set function of the proposed model is updated in a band region around the zero level set, which the authors call an active band. The active band restricts the method to local image information in a banded region around the prostate contour. Compared with traditional level set methods, the average intensities inside/outside the zero level set are computed only in this banded region; thus, only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches in the band region around the prostate boundary. The authors therefore incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in the ultrasound images. RESULTS The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation results on real 3D TRUS prostate images show that the model can obtain a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show that the method can obtain a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was performed to evaluate the robustness of the proposed model. CONCLUSIONS Prostate segmentation from ultrasound images with weak boundaries and unwanted edges is a difficult task. A novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is efficient and accurate.
Affiliations
- Xu Li
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
- Chunming Li
- School of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Andriy Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02446
- Tina Kapur
- Department of Mathematics, Nanjing University, Nanjing 210093, China
- Xiaoping Yang
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
25
Zhang D, Liu Y, Yang Y, Xu M, Yan Y, Qin Q. A region-based segmentation method for ultrasound images in HIFU therapy. Med Phys 2016; 43:2975-2989. [DOI: 10.1118/1.4950706]
26
Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation. Comput Biol Med 2016; 74:74-90. [PMID: 27208705] [DOI: 10.1016/j.compbiomed.2016.05.002]
Abstract
Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside of the contour, and the transverse image on which the highest accuracy was attained was chosen as the initial slice for the propagation process. Evaluation was performed on 336 transverse images from 15 prostates, including images acquired at the mid-gland, base and apex regions of the prostates. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79±0.26mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland, but also at the base and apex regions.
27
Ghose S, Mitra J, Rivest-Hénault D, Fazlollahi A, Stanwell P, Pichler P, Sun J, Fripp J, Greer PB, Dowling JA. MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection. Med Phys 2016; 43:2218. [DOI: 10.1118/1.4944871]
28
Bukhari Q, Borsook D, Rudin M, Becerra L. Random Forest Segregation of Drug Responses May Define Regions of Biological Significance. Front Comput Neurosci 2016; 10:21. [PMID: 27014046] [PMCID: PMC4783407] [DOI: 10.3389/fncom.2016.00021]
Abstract
The ability to assess brain responses in an unsupervised manner based on fMRI measures has remained a challenge. Here we have applied the Random Forest (RF) method to detect differences in the pharmacological MRI (phMRI) response in rats to treatment with an analgesic drug (buprenorphine) as compared to control (saline). Three groups of animals were studied: two groups treated with different doses of the opioid buprenorphine, low dose (LD) and high dose (HD), and one receiving saline. PhMRI responses were evaluated in 45 brain regions, and RF analysis was applied to allocate rats to the individual treatment groups. RF analysis was able to identify drug effects based on differential phMRI responses in the hippocampus, amygdala, nucleus accumbens, superior colliculus, and the lateral and posterior thalamus for drug vs. saline. These structures have high levels of mu opioid receptors. In addition, these regions are involved in aversive signaling, which is inhibited by mu opioids. The results demonstrate that buprenorphine-mediated phMRI responses comprise characteristic features that allow a supervised differentiation from placebo-treated rats as well as the proper allocation to the respective drug dose group using the RF method, a method that has been successfully applied in clinical studies.
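The region-wise random forest classification described above follows the standard supervised RF recipe; a generic scikit-learn sketch is shown below, with synthetic data standing in for the 45-region phMRI responses (group sizes, shifts, and hyperparameters are placeholders).

```python
# Random forest classification of region-wise phMRI responses (sketch with
# synthetic data; 45 features stand in for the 45 brain regions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_regions = 10, 45
# Synthetic responses for three groups (saline, low dose, high dose).
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_group, n_regions))
               for shift in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], n_per_group)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
clf.fit(X, y)
top_regions = np.argsort(clf.feature_importances_)[::-1][:5]
print(f"CV accuracy: {scores.mean():.2f}; most discriminative regions: {top_regions}")
```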
Affiliations
- Qasim Bukhari
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Zürich, Switzerland
- David Borsook
- Pain and Analgesia Imaging Neuroscience Group, Departments of Anesthesia, Perioperative and Pain Medicine, Boston Children's Hospital, Waltham, MA, USA
- Department of Radiology, Boston Children's Hospital, Waltham, MA, USA
- Markus Rudin
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Zürich, Switzerland
- Institute of Pharmacology and Toxicology, University of Zürich, Zürich, Switzerland
- Lino Becerra
- Pain and Analgesia Imaging Neuroscience Group, Departments of Anesthesia, Perioperative and Pain Medicine, Boston Children's Hospital, Waltham, MA, USA
- Department of Radiology, Boston Children's Hospital, Waltham, MA, USA
29
Deformable models direct supervised guidance: A novel paradigm for automatic image segmentation. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.11.023]
30
Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE Trans Med Imaging 2015; 34:1321-1335. [PMID: 25576565] [DOI: 10.1109/tmi.2015.2388699]
Abstract
Accurate segmentation is usually crucial in transrectal ultrasound (TRUS) image based prostate diagnosis; however, it is always hampered by heavy speckles. Contrary to the traditional view that speckles are adverse to segmentation, we exploit intrinsic properties induced by speckles to facilitate the task, based on the observations that sizes and orientations of speckles provide salient cues to determine the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address the problem of feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every individual feature vector is sparsely represented, and representation residuals are obtained. The residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. After that, the segmentation is fine-tuned by a novel level sets model, which integrates (1) the prostate shape prior, (2) dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method is validated on two 2-D image datasets obtained from two different sonographic imaging systems, with the mean absolute distance on the mid gland images only 1.06±0.53 mm and 1.25±0.77 mm, respectively. The method is also extended to segment apex and base images, producing competitive results over the state of the art.
Collapse
|
31
|
Nouranian S, Mahdavi SS, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. A multi-atlas-based segmentation framework for prostate brachytherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:950-961. [PMID: 25474806 DOI: 10.1109/tmi.2014.2371823] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
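The atlas-selection idea can be illustrated compactly: score each registered atlas by combining an image-similarity term with the agreement between its contour and those of the other atlases, keep the best-scoring atlases, and fuse their labels by voting. The sketch below is a simplified stand-in, assuming normalized cross-correlation for image similarity, Dice overlap for contour agreement, equal weighting of the two terms, and atlases already registered to the target; none of these specifics are taken from the paper.
```python
# Simplified multi-atlas selection and fusion sketch (assumptions noted above).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def dice(m1, m2):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(m1, m2).sum()
    return 2.0 * inter / (m1.sum() + m2.sum() + 1e-8)

def select_and_fuse(target, atlas_images, atlas_masks, keep=3):
    """Score atlases by image similarity plus mean pairwise mask agreement,
    keep the top ones, and fuse their masks by majority vote."""
    n = len(atlas_images)
    scores = []
    for i in range(n):
        sim = ncc(target, atlas_images[i])
        agree = np.mean([dice(atlas_masks[i], atlas_masks[j])
                         for j in range(n) if j != i])
        scores.append(sim + agree)           # equal weighting (assumption)
    best = np.argsort(scores)[::-1][:keep]
    votes = np.mean([atlas_masks[i] for i in best], axis=0)
    return votes >= 0.5                      # consensus segmentation

# Toy usage with random data standing in for registered atlases
rng = np.random.default_rng(0)
target = rng.random((64, 64))
images = [rng.random((64, 64)) for _ in range(5)]
masks = [rng.random((64, 64)) > 0.5 for _ in range(5)]
print(select_and_fuse(target, images, masks).shape)
```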
Collapse
|
32
|
Mata C, Walker PM, Oliver A, Brunotte F, Martí J, Lalande A. ProstateAnalyzer: Web-based medical application for the management of prostate cancer using multiparametric MR imaging. Inform Health Soc Care 2015; 41:286-306. [PMID: 25710606 DOI: 10.3109/17538157.2015.1008488] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVES: In this paper, we present ProstateAnalyzer, a new web-based medical tool for prostate cancer diagnosis. ProstateAnalyzer allows the visualization and analysis of magnetic resonance images (MRI) in a single framework. METHODS: ProstateAnalyzer retrieves data from a PACS server and displays all the associated MRI images in the same framework, typically consisting of 3D T2-weighted imaging for anatomy, dynamic contrast-enhanced MRI for perfusion, diffusion-weighted imaging in the form of an apparent diffusion coefficient (ADC) map, and MR spectroscopy. ProstateAnalyzer allows regions of interest to be annotated in one sequence and propagates them to the others. RESULTS: Using a representative case, the results from the four visualization platforms are described in detail, showing the interaction among them. The tool has been implemented as a Java-based applet to facilitate portability across computer architectures and software and to allow remote use via the web. CONCLUSION: ProstateAnalyzer enables experts to manage prostate cancer patient datasets more efficiently. The tool allows experts to delineate annotations and displays all the information required for diagnosis. In accordance with the current European Society of Urogenital Radiology guidelines, it also includes the PI-RADS structured reporting scheme.
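Annotation propagation between co-registered MRI sequences can be illustrated as applying each sequence's known spatial transform to the vertices of an ROI drawn on the reference sequence. The sketch below is an illustration only, assuming 2-D affine transforms between sequences are available from registration or shared frame-of-reference metadata; it is not based on ProstateAnalyzer's implementation.
```python
# Illustrative sketch: propagate a polygonal ROI drawn on a T2-weighted
# reference to another sequence (e.g., an ADC map) via a known affine
# transform. The transform values below are hypothetical.
import numpy as np

def propagate_roi(vertices_xy, affine_2x3):
    """Apply a 2x3 affine transform to ROI polygon vertices (N x 2)."""
    homog = np.hstack([vertices_xy, np.ones((len(vertices_xy), 1))])
    return homog @ affine_2x3.T

roi_t2 = np.array([[50.0, 60.0], [80.0, 60.0], [80.0, 90.0], [50.0, 90.0]])
to_adc = np.array([[0.5, 0.0, 10.0],     # hypothetical T2 -> ADC transform
                   [0.0, 0.5, 12.0]])
print(propagate_roi(roi_t2, to_adc))
```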
Collapse
Affiliation(s)
- Christian Mata
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain; Laboratoire Electronique Informatique et Image (Le2I), Université de Bourgogne, Dijon, France
| | - Paul M Walker
- Laboratoire Electronique Informatique et Image (Le2I), Université de Bourgogne, Dijon, France; Department of NMR Spectroscopy, University Hospital, Dijon, France
| | - Arnau Oliver
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain
| | - François Brunotte
- Laboratoire Electronique Informatique et Image (Le2I), Université de Bourgogne, Dijon, France; Department of NMR Spectroscopy, University Hospital, Dijon, France
| | - Joan Martí
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain
| | - Alain Lalande
- Laboratoire Electronique Informatique et Image (Le2I), Université de Bourgogne, Dijon, France; Department of NMR Spectroscopy, University Hospital, Dijon, France
| |
Collapse
|
33
|
Cheng J, Xiong W, Gu Y, Chia SC, Wang Y, Huang W, Zhou J, Zhou Y, Gao W, Tay KJ, Ho H. Prostate boundary segment extraction using cascaded shape regression and optimal surface detection. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:2886-9. [PMID: 25570594 DOI: 10.1109/embc.2014.6944226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
In this paper, we propose a new method (CSR+OSD) for the extraction of irregular open prostate boundaries in noisy extracorporeal ultrasound (ECUS) images. First, cascaded shape regression (CSR) is used to locate the prostate boundary in the images. In CSR, a sequence of random fern predictors is trained in a boosted regression manner, using shape-indexed features to achieve invariance against positional variations of the prostate boundary. Afterwards, we adopt optimal surface detection (OSD) to refine the prostate boundary segments across 3D sections globally and efficiently. The proposed method is tested on 162 ECUS images acquired from 8 patients with benign prostatic hyperplasia. The method yields a root mean square distance of 2.11±1.72 mm and a mean absolute distance of 1.61±1.26 mm, which are lower than those of JFilament (an open active contour algorithm) and the Chan-Vese region-based level set model, respectively.
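Cascaded shape regression iteratively refines a boundary estimate: each stage predicts a shape increment from features sampled relative to the current shape. A heavily simplified sketch follows, using small regression trees in place of the paper's random fern predictors and plain intensity samples in place of shape-indexed features; these substitutions, and the data shapes, are assumptions for illustration only.
```python
# Simplified cascaded shape regression sketch (random ferns replaced by a
# regression tree; shape-indexed features replaced by intensity samples at
# the current landmark positions -- both substitutions are illustrative).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sample_features(image, shape):
    """Sample image intensities at the current landmark positions."""
    ys = np.clip(shape[:, 1].astype(int), 0, image.shape[0] - 1)
    xs = np.clip(shape[:, 0].astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

def train_cascade(images, true_shapes, init_shape, n_stages=5):
    """Each stage regresses the residual from the current to the true shape."""
    current = [init_shape.copy() for _ in images]
    cascade = []
    for _ in range(n_stages):
        X = np.array([sample_features(im, s) for im, s in zip(images, current)])
        Y = np.array([(t - s).ravel() for t, s in zip(true_shapes, current)])
        reg = DecisionTreeRegressor(max_depth=4).fit(X, Y)
        cascade.append(reg)
        for k, im in enumerate(images):
            delta = reg.predict(sample_features(im, current[k])[None])[0]
            current[k] = current[k] + delta.reshape(init_shape.shape)
    return cascade

# Toy usage: 10 random images, 8 boundary landmarks each
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(10)]
shapes = [rng.uniform(10, 50, size=(8, 2)) for _ in range(10)]
init = np.full((8, 2), 32.0)
print(len(train_cascade(imgs, shapes, init)), "stages trained")
```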
Collapse
|
34
|
Chilali O, Ouzzane A, Diaf M, Betrouni N. A survey of prostate modeling for image analysis. Comput Biol Med 2014; 53:190-202. [PMID: 25156801 DOI: 10.1016/j.compbiomed.2014.07.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2013] [Revised: 06/22/2014] [Accepted: 07/23/2014] [Indexed: 11/18/2022]
Affiliation(s)
- O Chilali
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
| | - A Ouzzane
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France; Urology Department, Claude Huriez Hospital, Lille University Hospital, France
| | - M Diaf
- Automatic Department, Mouloud Mammeri University, Tizi-Ouzou, Algeria
| | - N Betrouni
- Inserm U703, 152, rue du Docteur Yersin, Lille University Hospital, 59120 Loos, France.
| |
Collapse
|
35
|
Comparison and supervised learning of segmentation methods dedicated to specular microscope images of corneal endothelium. Int J Biomed Imaging 2014; 2014:704791. [PMID: 25328510 PMCID: PMC4190134 DOI: 10.1155/2014/704791] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2014] [Accepted: 08/12/2014] [Indexed: 12/04/2022] Open
Abstract
The cornea is the front of the eye. Its inner cell layer, called the endothelium, is important because it is closely related to the light transparency of the cornea. In vivo observation of this layer is performed using specular microscopy to evaluate the health of the cells: a high spatial density results in good transparency. Thus, the main criterion required by ophthalmologists is the cell density of the corneal endothelium, mainly obtained through an image segmentation process. Different methods can perform the segmentation of these cells, and the three best-performing methods are studied here. The question for ophthalmologists is how to choose the best algorithm and obtain the best possible results with it. This paper presents a methodology for comparing these algorithms. Moreover, by means of geometric dissimilarity criteria, the algorithms are tuned and the best parameter values are proposed to expert ophthalmologists.
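The tuning step described here amounts to a search over each algorithm's parameters, scoring candidate segmentations against reference contours with a geometric dissimilarity criterion. A minimal sketch of that loop is shown below; the choice of a Dice-based dissimilarity, the parameter grid, and the segmentation callable are all placeholder assumptions.
```python
# Minimal sketch: tune a segmentation algorithm's parameters by minimizing
# a geometric dissimilarity against expert reference masks.
# The dissimilarity (1 - Dice), the grid and `segment` are placeholders.
import itertools
import numpy as np

def dice_dissimilarity(pred, ref):
    """Geometric dissimilarity: 1 minus the Dice overlap of two masks."""
    inter = np.logical_and(pred, ref).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)

def tune(segment, images, refs, grid):
    """Exhaustively evaluate the parameter grid; return the best setting."""
    best_params, best_score = None, np.inf
    for params in (dict(zip(grid, values))
                   for values in itertools.product(*grid.values())):
        score = np.mean([dice_dissimilarity(segment(im, **params), ref)
                         for im, ref in zip(images, refs)])
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy usage: a threshold-based "segmentation" stands in for a real algorithm
segment = lambda im, threshold: im > threshold
rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(4)]
refs = [im > 0.5 for im in imgs]
print(tune(segment, imgs, refs, {"threshold": [0.3, 0.5, 0.7]}))
```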
Collapse
|
36
|
Iterative multi-class multi-scale stacked sequential learning: Definition and application to medical volume segmentation. Pattern Recognit Lett 2014. [DOI: 10.1016/j.patrec.2014.05.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|