1.
Wang X, Zhang Z, Wu K, Yin X, Guo H. Gabor Dictionary of Sparse Image Patches Selected in Prior Boundaries for 3D Liver Segmentation in CT Images. Journal of Healthcare Engineering 2021;2021:5552864. [PMID: 34925736] [PMCID: PMC8677387] [DOI: 10.1155/2021/5552864]
Abstract
The gray-level contrast between the liver and other soft tissues is low, and the liver boundary is often indistinct, so accurately segmenting the liver from CT images remains a challenging task. In recent years, machine learning methods have become a research hotspot in medical image segmentation because they can effectively learn personalized liver features from "gold standard" annotations of different datasets. However, machine learning usually requires a large number of training samples to reach acceptable segmentation accuracy. This paper proposes a liver segmentation method based on a Gabor dictionary of sparse image patches selected within prior boundaries. The method reduces the number of required samples by restricting the test sample set to the initial boundary region of the liver. Gabor features are extracted to build a query dictionary, and the sparse coefficients are computed to obtain the liver boundary information. A smooth liver boundary is then obtained by optimizing the reconstruction error and filling holes. The proposed method was tested on the MICCAI 2007 and ISBI 2017 datasets, evaluated with five measures, and compared with liver segmentation methods proposed in recent years. The experimental results show that the method improves segmentation accuracy and effectively repairs discontinuities and local overlaps in the segmentation results.
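The pipeline this abstract describes — Gabor features computed on candidate patches near a prior boundary, then matched against a dictionary by sparse coding — can be pictured with a minimal NumPy sketch. The filter sizes, σ/λ values, and the one-atom coding shortcut below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a 2D Gabor filter (hypothetical parameter choices)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(patch, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Describe a patch by its response magnitude at each orientation."""
    return np.array([np.abs(np.sum(patch * gabor_kernel(theta=t))) for t in thetas])

def sparse_code_1atom(feature, dictionary):
    """Degenerate sparse coding: pick the single best-matching atom
    (a stand-in for orthogonal matching pursuit with sparsity level 1)."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    f = feature / (np.linalg.norm(feature) + 1e-12)
    return int(np.argmax(d @ f))
```

In the paper's setting the dictionary atoms would be Gabor feature vectors of labeled boundary and non-boundary patches, and the best-reconstructing atoms would vote on each test patch's label.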
Affiliation(s)
- Xuehu Wang
- College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
- Research Center of Machine Vision Engineering and Technology of Hebei Province, Baoding 071002, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding 071002, China
- Zhiling Zhang
- College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
- Research Center of Machine Vision Engineering and Technology of Hebei Province, Baoding 071002, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding 071002, China
- Kunlun Wu
- Hebei Research Institute of Construction and Geotechnical Investigation Co., Ltd., Shijiazhuang, Hebei, China
- Xiaoping Yin
- Affiliated Hospital of Hebei University, Baoding 071000, China
- Haifeng Guo
- College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
- Research Center of Machine Vision Engineering and Technology of Hebei Province, Baoding 071002, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding 071002, China
2.
Abstract
Segmentation of medical images using multiple atlases has recently gained immense attention due to its augmented robustness against variability across subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and label fusion. Image registration is one of the core steps, and its accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variation, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. We then use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that they outperform state-of-the-art segmentation methods.
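The correction-then-fusion step can be pictured with a small sketch: given warped atlas label maps and per-voxel confidence maps (which the paper's FCN would predict — here they are simply inputs), fuse by confidence-weighted voting. This is a generic stand-in, not the paper's exact fusion rules:

```python
import numpy as np

def confidence_weighted_fusion(atlas_labels, confidences):
    """Fuse warped atlas labels using per-voxel confidence weights.
    atlas_labels: (n_atlases, ...) integer label maps
    confidences:  (n_atlases, ...) weights in [0, 1], e.g. FCN outputs."""
    atlas_labels = np.asarray(atlas_labels)
    confidences = np.asarray(confidences)
    labels = np.unique(atlas_labels)
    # accumulate confidence mass per candidate label, voxel-wise
    votes = np.stack([((atlas_labels == l) * confidences).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]
```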
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing 312000, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
3.
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020;24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153]
4.
Onofrey JA, Staib LH, Huang X, Zhang F, Papademetris X, Metaxas D, Rueckert D, Duncan JS. Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation. Annu Rev Biomed Eng 2020;22:127-153. [PMID: 32169002] [PMCID: PMC9351438] [DOI: 10.1146/annurev-bioeng-060418-052147]
Abstract
Sparsity is a powerful concept for high-dimensional machine learning, offering both representational and computational efficiency, and it is well suited to medical image segmentation. We present a selection of sparsity-based techniques, including strategies built on dictionary learning and deep learning, aimed at medical image segmentation and related quantification.
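A concrete instance of the sparsity machinery such methods build on is the l1-regularized sparse coding problem min_a 0.5·||y − Da||² + λ||a||₁, solved here with plain ISTA in NumPy. This is a generic solver sketch, not a specific method from the review:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, step=None, n_iter=200):
    """Iterative shrinkage-thresholding for min_a 0.5||y - Da||^2 + lam*||a||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then shrinkage
        a = soft_threshold(a + step * D.T @ (y - D @ a), lam * step)
    return a
```

With an orthonormal dictionary the solution reduces to soft-thresholding the correlations, which makes the solver easy to sanity-check.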
Affiliation(s)
- John A Onofrey
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Department of Urology, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Lawrence H Staib
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Xiaojie Huang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Citadel Securities, Chicago, Illinois 60603, USA
- Fan Zhang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Xenophon Papademetris
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Dimitris Metaxas
- Department of Computer Science, Rutgers University, Piscataway, New Jersey 08854, USA
- Daniel Rueckert
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- James S Duncan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
5.
Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE Trans Med Imaging 2020;39:2151-2162. [PMID: 31940526] [PMCID: PMC8195629] [DOI: 10.1109/tmi.2020.2966389]
Abstract
Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when the data present challenges such as low contrast and large shape variation. However, manual annotation is expensive in terms of both finance and human effort, which usually leaves real applications with insufficient completely annotated data. To this end, we propose a novel deep framework to segment male pelvic organs in CT images from incomplete annotations delineated in a very user-friendly manner. Specifically, we design a hybrid loss network derived from both voxel classification and boundary regression, to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the many unannotated voxels and embed them into the training data to enhance model capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate segmentation organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods trained with complete annotation. Moreover, our method requires much less manual contouring effort from medical professionals, so that an institution-specific model can be more easily established.
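The hybrid objective described above — voxel classification combined with boundary regression, evaluated only on annotated voxels — can be sketched as a single scalar loss. This is an illustrative NumPy combination; the paper's actual network losses and weighting are not reproduced here:

```python
import numpy as np

def hybrid_loss(p_fg, labels, pred_dist, true_dist, mask=None, w=0.5):
    """Hybrid objective: voxel-wise cross-entropy plus boundary distance
    regression, averaged only over annotated voxels (mask).
    All arrays share one shape; w is a hypothetical balancing weight."""
    if mask is None:
        mask = np.ones_like(p_fg, dtype=bool)
    eps = 1e-7
    # binary cross-entropy on foreground probabilities
    ce = -(labels * np.log(p_fg + eps) + (1 - labels) * np.log(1 - p_fg + eps))
    # squared error on the predicted boundary distance map
    reg = (pred_dist - true_dist) ** 2
    return float(ce[mask].mean() + w * reg[mask].mean())
```

The `mask` argument is where incomplete annotation enters: unannotated voxels simply contribute nothing to either term.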
6.
Lei Y, Shu HK, Tian S, Wang T, Liu T, Mao H, Shim H, Curran WJ, Yang X. Pseudo CT Estimation using Patch-based Joint Dictionary Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019;2018:5150-5153. [PMID: 30441499] [DOI: 10.1109/embc.2018.8513475]
Abstract
Magnetic resonance (MR) simulators have recently gained popularity because they avoid the unnecessary radiation exposure associated with computed tomography (CT) when used for radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on joint dictionary learning. Patient-specific anatomical features were extracted from the aligned training images and adopted as signatures for each voxel. The most relevant and informative features were identified to train the joint dictionary learning-based model, and the trained dictionary was used to predict the pseudo CT of a new patient. This prediction technique was validated in a clinical study of 12 patients with MR and CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) indexes were used to quantify prediction accuracy. Compared with a state-of-the-art dictionary learning method, our proposed method significantly improves prediction accuracy. We have thus investigated a novel joint dictionary learning-based approach to predict CT images from routine MRIs and demonstrated its reliability. This CT prediction technique could be a useful tool for MRI-based radiation treatment planning or for attenuation correction in quantifying PET images for PET/MR imaging.
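The evaluation indexes named in the abstract are standard and easy to state precisely; a minimal NumPy version of MAE, PSNR, and NCC might look like this (the `data_range` argument is an assumption about how PSNR is normalized):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB, for a given intensity range."""
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation of two images (1.0 = perfect affine match)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```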
7.
Wang S, He K, Nie D, Zhou S, Gao Y, Shen D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med Image Anal 2019;54:168-178. [PMID: 30928830] [PMCID: PMC6506162] [DOI: 10.1016/j.media.2019.03.003]
Abstract
Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step for radiation therapy in the treatment of prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and the uncertain presence of bowel gas and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary-sensitive representation to address this problem. Our framework contains three modules. First, an organ localization model focuses on the candidate segmentation region of each organ for better performance. Second, a boundary-sensitive representation model based on multi-task learning represents the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function incorporating the boundary-sensitive representation is used to train a fully convolutional network for organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that our method outperforms the baseline fully convolutional networks, as well as other state-of-the-art methods, in CT male pelvic organ segmentation.
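One simple way to build the boundary-sensitive targets such a model trains on is to extract a band of voxels around the mask surface via morphological erosion. The sketch below is a plain 4-neighbour NumPy version, not the paper's actual representation model:

```python
import numpy as np

def erode(mask):
    """One step of 4-neighbour binary erosion (zero padding at the border)."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:]).astype(bool)

def boundary_band(mask, width=1):
    """Voxels of the mask within `width` erosions of its boundary —
    one way to build a boundary-sensitive training target."""
    mask = mask.astype(bool)
    inner = mask
    for _ in range(width):
        inner = erode(inner)
    return mask & ~inner
```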
Affiliation(s)
- Shuai Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Kelei He
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Sihang Zhou
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- School of Computer, National University of Defense Technology, Changsha, China
- Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
8.
Wang T, Lei Y, Manohar N, Tian S, Jani AB, Shu HK, Higgins K, Dhabaan A, Patel P, Tang X, Liu T, Curran WJ, Yang X. Dosimetric study on learning-based cone-beam CT correction in adaptive radiation therapy. Med Dosim 2019;44:e71-e79. [PMID: 30948341] [DOI: 10.1016/j.meddos.2019.03.001]
Abstract
INTRODUCTION: Cone-beam CT (CBCT) image quality is important for quantitative analysis in adaptive radiation therapy. However, due to severe artifacts, CBCTs have so far been used primarily for verifying patient setup. We have developed a learning-based image quality improvement method that can provide CBCTs with image quality comparable to planning CTs (pCTs), but the accuracy of dose calculations based on these CBCTs is unknown. In this study, we investigate the dosimetric accuracy of our corrected CBCT (CCBCT) in brain stereotactic radiosurgery (SRS) and pelvic radiotherapy.
MATERIALS AND METHODS: We retrospectively investigated a total of 32 treatment plans from 22 patients, each of whom had both original treatment pCTs and CBCTs acquired during treatment setup. The CCBCT and original CBCT (OCBCT) were registered to the pCT to generate CCBCT-based and OCBCT-based treatment plans; the original pCT-based plans served as ground truth. Clinically relevant dose-volume histogram (DVH) metrics were extracted from the ground-truth, OCBCT-based, and CCBCT-based plans for comparison. Gamma analysis was also performed to compare the absorbed dose distributions between the pCT-based and OCBCT/CCBCT-based plans of each patient.
RESULTS: CCBCTs demonstrated better image contrast and more accurate HU ranges when compared side-by-side with OCBCTs. For pelvic radiotherapy plans, the mean dose error in DVH metrics for the planning target volume (PTV), bladder, and rectum was significantly reduced, from 1% to 0.3%, after CBCT correction. Gamma analysis showed that the average pass rate increased from 94.5% before correction to 99.0% after correction. For brain SRS treatment plans, both original and corrected CBCT images were accurate enough for dose calculation, though CCBCT featured higher image quality.
CONCLUSION: CCBCTs can provide a level of dose accuracy comparable to traditional pCTs for brain and prostate radiotherapy planning, and the correction method proposed here can be useful in CBCT-guided adaptive radiotherapy.
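The DVH metrics used for comparison reduce a dose distribution over a structure to scalar quantities; a minimal sketch (illustrative, with a hypothetical V-threshold convention reported as a volume fraction) is:

```python
import numpy as np

def dvh_metrics(dose, structure_mask, threshold):
    """Simple DVH quantities for one structure: mean dose, and the
    fraction of the structure receiving at least `threshold` dose."""
    d = dose[structure_mask.astype(bool)]
    return {"Dmean": float(d.mean()),
            "V": float(np.mean(d >= threshold))}
```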
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Nivedh Manohar
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Anees Dhabaan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
9.
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019;46:1707-1718. [PMID: 30702759] [DOI: 10.1002/mp.13416]
Abstract
PURPOSE: Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems, and manual segmentation is time-consuming and subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge.
METHODS: We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). A deeply supervised mechanism was introduced into the 3D FCN to alleviate the common exploding- or vanishing-gradient problems in training deep models, forcing the updates of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the network and improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, measuring similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as the gold standard against which our segmentation accuracy was measured.
RESULTS: On an internal dataset of 40 T2-weighted prostate MR volumes, our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95%HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. On a public dataset (PROMISE12) of 50 T2-weighted prostate MR volumes, it yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95%HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56.
CONCLUSION: We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
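Of the four reported measures, the Dice similarity coefficient is the simplest to state exactly; a reference NumPy implementation for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|), with 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```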
Affiliation(s)
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia 750021, P.R. China
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
10.
Automated geographic atrophy segmentation for SD-OCT images based on two-stage learning model. Comput Biol Med 2019;105:102-111. [DOI: 10.1016/j.compbiomed.2018.12.013]
11.
Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. Proc SPIE Int Soc Opt Eng 2019;10951:109512I. [PMID: 32528212] [PMCID: PMC7289512] [DOI: 10.1117/12.2512282]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and in procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomatic, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland is usually globular with a smoothly curved surface that can be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between prostate surface points to model prostate shape variation with a statistical point distribution model. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing, comparing the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC), respectively. The average differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation; its accuracy is within the inter-expert variability observed in manual segmentation and comparable to the best results reported in the literature.
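The statistical point distribution modeling step amounts to PCA on aligned landmark vectors: a mean shape plus a few principal modes of variation. A minimal sketch (assuming shapes are already aligned and flattened; not the paper's exact model) is:

```python
import numpy as np

def fit_pdm(shapes, n_modes=2):
    """Point distribution model: mean shape + principal modes of variation.
    shapes: (n_samples, n_points*dim) flattened, pre-aligned landmarks."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data matrix yields the PCA modes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    var = (s[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, var

def reconstruct(mean, modes, coeffs):
    """Rebuild a shape from mode coefficients."""
    return mean + coeffs @ modes
```

Projecting a shape onto the retained modes and reconstructing it recovers the shape exactly when the data truly lie in the modeled subspace, which is the property the segmentation search exploits.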
Affiliation(s)
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu
- State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma
- Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
12.
Shahedi M, Ma L, Halicek M, Guo R, Zhang G, Schuster DM, Nieh P, Master V, Fei B. A semiautomatic algorithm for three-dimensional segmentation of the prostate on CT images using shape and local texture characteristics. Proc SPIE Int Soc Opt Eng 2018;10576. [PMID: 30245541] [DOI: 10.1117/12.2293195]
Abstract
Prostate segmentation in computed tomography (CT) images is useful for planning and guidance of diagnostic and therapeutic procedures. However, the low soft-tissue contrast of CT images makes manual prostate segmentation a time-consuming task with high inter-observer variation. We developed a semiautomatic, three-dimensional (3D) prostate segmentation algorithm using shape and texture analysis and evaluated it against manual reference segmentations. In a training dataset we defined an inter-subject correspondence between surface points in the spherical coordinate system. We applied this correspondence to model the globular and smoothly curved shape of the prostate with 86 well-distributed surface points, using a point distribution model that captures prostate shape variation. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. For segmentation, we used the learned shape and texture characteristics of the prostate in CT images, together with a set of user inputs for prostate localization. We trained the algorithm on 23 CT images and tested it on 10 images, evaluating the results against two experts' manual reference segmentations using different error metrics. The average Dice similarity coefficient (DSC) and mean absolute distance (MAD) were 88 ± 2% and 1.9 ± 0.5 mm, respectively; the inter-expert differences measured on the same dataset were 91 ± 4% (DSC) and 1.3 ± 0.6 mm (MAD). With no prior intra-patient information, the proposed algorithm showed fast, robust, and accurate performance for 3D CT segmentation.
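The inter-subject correspondence here is defined in spherical coordinates about the gland; mapping surface points into (r, θ, φ) can be sketched as follows (taking the centroid as the origin is an assumption for illustration):

```python
import numpy as np

def to_spherical(points, center):
    """Map 3D surface points to (r, theta, phi) about a chosen center —
    the coordinate system in which surface-point correspondence is defined."""
    v = points - center
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1, 1))  # polar angle
    phi = np.arctan2(v[:, 1], v[:, 0])                                 # azimuth
    return r, theta, phi
```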
Affiliation(s)
- Maysam Shahedi
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Martin Halicek
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guoyi Zhang
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter Nieh
- Department of Urology, Emory University, Atlanta, GA
- Viraj Master
- Department of Urology, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Department of Urology, Emory University, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
13.
An iterative multi-atlas patch-based approach for cortex segmentation from neonatal MRI. Comput Med Imaging Graph 2018;70:73-82. [PMID: 30296626] [DOI: 10.1016/j.compmedimag.2018.09.003]
Abstract
Brain structure analysis in the newborn is a major health issue. This is especially the case for preterm neonates, where it can provide predictive information about the child's development. In particular, the cortex is a structure of interest that can be observed in magnetic resonance imaging (MRI). However, neonatal MRI data present specific properties that make them challenging to process. In this context, multi-atlas approaches constitute an efficient strategy, taking advantage of images processed beforehand. The method proposed in this article relies on such a multi-atlas strategy. More precisely, it couples two paradigms: a non-local model based on patches, and an iterative optimization scheme. Coupling both concepts allows us to consider patches related not only to the image information but also to the current segmentation. This strategy is compared to other multi-atlas methods proposed in the literature. Experiments on dHCP datasets show that the proposed approach provides robust cortex segmentation results.
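The non-local patch paradigm weights each atlas patch by its intensity similarity to the target patch before voting; a compact sketch of that weighting (the bandwidth `h` and the binary vote threshold are illustrative choices, not the paper's settings):

```python
import numpy as np

def patch_weights(target_patch, atlas_patches, h=1.0):
    """Non-local-means-style weights: atlas patches more similar to the
    target patch get exponentially larger influence on the fused label."""
    d2 = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    w = np.exp(-d2 / (h ** 2))
    return w / w.sum()

def fuse_label(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Weighted vote of the atlas centre labels (binary case)."""
    w = patch_weights(target_patch, atlas_patches, h)
    return int(np.dot(w, atlas_labels) >= 0.5)
```

The iterative variant described in the abstract would additionally compare patches of the *current segmentation*, feeding the fused labels back into the similarity measure at each pass.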
14. Chai H, Guo Y, Wang Y, Zhou G. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images. Technol Health Care 2018; 25:1105-1118. [PMID: 28800344 DOI: 10.3233/thc-160597] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 11/15/2022]
Abstract
BACKGROUND An adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. OBJECTIVE In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. METHODS This paper proposed an automatic computer-aided analysis method for adrenal tumor detection and classification. It consisted of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were then integrated into a system operated through a graphical interface built with the MATLAB graphical user interface (GUI). RESULTS The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. CONCLUSION The experiments proved the stability and reliability of this automatic computer-aided analysis system.
Affiliation(s)
- Hanchao Chai
- Department of Electronic Engineering, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Guohui Zhou
- Department of Electronic Engineering, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
15. Shahedi M, Halicek M, Guo R, Zhang G, Schuster DM, Fei B. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling. Med Phys 2018; 45:2527-2541. [PMID: 29611216 DOI: 10.1002/mp.12898] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Received: 08/23/2017] [Revised: 03/15/2018] [Accepted: 03/24/2018] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance, such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. METHODS The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed using a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis, and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets: 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results against two experts' manual reference segmentations. RESULTS For both the nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD).
CONCLUSIONS The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no prior intrapatient information (i.e., previously segmented images) was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation.
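The point-distribution shape model described in the METHODS can be sketched with plain PCA. This is a toy 2-D illustration that assumes the landmark-correspondence step is already done; the shapes and number of modes are illustrative, not the paper's.

```python
# Sketch of a PCA point-distribution shape model (mean shape + modes of variation).
import numpy as np

def build_pdm(shapes, n_modes=1):
    """shapes: (n_samples, n_points*dim) matrix of corresponding landmark sets.
    Returns the mean shape and the top principal modes of variation."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the PCA modes directly.
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes], svals[:n_modes]

def reconstruct(mean, modes, coeffs):
    """Generate a shape from the model: mean + sum_k b_k * mode_k."""
    return mean + coeffs @ modes

# Toy training set: unit-square landmarks scaled by different factors,
# so all variation lies in a single mode.
base = np.array([0., 0, 1, 0, 1, 1, 0, 1])          # 4 points, flattened (x, y)
shapes = np.stack([0.8 * base, 1.0 * base, 1.2 * base])
mean, modes, svals = build_pdm(shapes, n_modes=1)

# Projecting a training shape onto the model and back reproduces it.
b = (shapes[0] - mean) @ modes.T
recon = reconstruct(mean, modes, b)
```

With one mode the scaling variation is captured exactly, so the reconstruction matches the training shape.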
Affiliation(s)
- Maysam Shahedi
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Martin Halicek
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Guoyi Zhang
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, 30322, USA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, 30322, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, 30332, USA; Winship Cancer Institute of Emory University, Atlanta, GA, 30322, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, GA, 30322, USA
16. Tian P, Qi L, Shi Y, Zhou L, Gao Y, Shen D. A novel image-specific transfer approach for prostate segmentation in MR images. Proceedings of the ... IEEE International Conference on Acoustics, Speech, and Signal Processing. ICASSP (Conference) 2018; 2018:806-810. [PMID: 30636936 PMCID: PMC6328258 DOI: 10.1109/icassp.2018.8461716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/09/2023]
Abstract
Prostate segmentation in magnetic resonance (MR) images is a significant yet challenging task for prostate cancer treatment. Most existing works attempt to design a single global classifier for all MR images, which neglects the discrepancy of images across different patients. To this end, we propose a novel transfer approach for prostate segmentation in MR images. Firstly, an image-specific classifier is built for each training image. Secondly, a pair of dictionaries and a mapping matrix are jointly obtained by a novel Semi-Coupled Dictionary Transfer Learning (SCDTL) method. Finally, the classifiers on the source domain can be selectively transferred to the target domain (i.e., testing images) through the dictionaries and the mapping matrix. The evaluation demonstrates that our approach has competitive performance compared with state-of-the-art transfer learning methods. Moreover, the proposed transfer approach outperforms a conventional deep-neural-network-based method.
Affiliation(s)
- Pinzhuo Tian
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Qi
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Luping Zhou
- School of Computing and Information Technology, University of Wollongong, Australia
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, USA
17. Wang Y, Ma G, Wu X, Zhou J. Patch-Based Label Fusion with Structured Discriminant Embedding for Hippocampus Segmentation. Neuroinformatics 2018; 16:411-423. [PMID: 29512026 DOI: 10.1007/s12021-018-9364-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 10/17/2022]
Abstract
Automatic and accurate segmentation of hippocampal structures in medical images is of great importance in neuroscience studies. In multi-atlas based segmentation methods, patch-based methods have been widely studied to alleviate the misalignment that occurs when registering atlases to the target image and to improve the performance of label fusion. However, the weights assigned to the fused labels are usually computed from predefined features (e.g., image intensities) and are thus not necessarily optimal. Due to the lack of discriminating features, the original feature space defined by image intensities may limit the description accuracy. To solve this problem, we propose a patch-based label fusion with structured discriminant embedding method to automatically segment the hippocampal structure from the target image in a voxel-wise manner. Specifically, multi-scale intensity features and texture features are first extracted from the image patch for feature representation. Marginal Fisher analysis (MFA) is then applied to the neighboring samples in the atlases for the target voxel, in order to learn a subspace in which the distance between intra-class samples is minimized while the distance between inter-class samples is simultaneously maximized. Finally, the k-nearest neighbor (kNN) classifier is employed in the learned subspace to determine the final label for the target voxel. In the experiments, we evaluate our proposed method on hippocampus segmentation using the ADNI dataset. Both the qualitative and quantitative results show that our method outperforms conventional multi-atlas based segmentation methods.
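The learn-a-subspace-then-vote idea above can be sketched with a two-class Fisher discriminant standing in for marginal Fisher analysis (a deliberate simplification, not the authors' method), followed by a kNN vote in the projected space; all data and parameters below are illustrative toys.

```python
# Sketch: project patch features into a discriminative subspace, then kNN-vote.
import numpy as np

def fisher_direction(x0, x1):
    """Fisher direction w = Sw^{-1} (mu1 - mu0) separating two patch classes."""
    mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
    # Within-class scatter (bias=True gives covariance/N; multiply back by N).
    sw = np.cov(x0.T, bias=True) * len(x0) + np.cov(x1.T, bias=True) * len(x1)
    return np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), mu1 - mu0)

def knn_label(z, z_train, y_train, k=3):
    """Majority vote among the k nearest projected training samples."""
    order = np.argsort(np.abs(z_train - z))
    return int(round(y_train[order[:k]].mean()))

# Toy patch features: dimension 0 separates the classes, dimension 1 is noise.
rng = np.random.default_rng(2)
x0 = np.column_stack([rng.normal(0.0, 0.3, 20), rng.normal(0, 3.0, 20)])
x1 = np.column_stack([rng.normal(2.0, 0.3, 20), rng.normal(0, 3.0, 20)])
w = fisher_direction(x0, x1)
z_train = np.concatenate([x0 @ w, x1 @ w])
y_train = np.array([0] * 20 + [1] * 20)
label = knn_label(np.array([1.9, -2.5]) @ w, z_train, y_train)
```

The learned direction suppresses the noisy dimension, so the kNN vote labels the test sample by its structural feature alone.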
Affiliation(s)
- Yan Wang
- College of Computer Science, Sichuan University, Chengdu, China; Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University), Fuzhou, 350121, China
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, China
- Xi Wu
- Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, China; Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
18. Lei Y, Tang X, Higgins K, Wang T, Liu T, Dhabaan A, Shim H, Curran WJ, Yang X. Improving Image Quality of Cone-Beam CT Using Alternating Regression Forest. Proceedings of SPIE--the International Society for Optical Engineering 2018; 10573:1057345. [PMID: 31456600 PMCID: PMC6711599 DOI: 10.1117/12.2292886] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Indexed: 11/14/2022]
Abstract
We propose a CBCT image quality improvement method based on an anatomic signature and an auto-context alternating regression forest. Patient-specific anatomical features are extracted from the aligned training images and serve as signatures for each voxel. The most relevant and informative features are identified to train the regression forest, and the well-trained regression forest is used to correct the CBCT of a new patient. The proposed algorithm was evaluated using CBCT and CT images from 10 patients. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) indexes were used to quantify the correction accuracy of the proposed algorithm. The mean MAE, PSNR, and NCC between corrected CBCT and ground-truth CT were 16.66 HU, 37.28 dB, and 0.98, which demonstrates the CBCT correction accuracy of the proposed learning-based method. We have developed a learning-based method and demonstrated that it can significantly improve CBCT image quality. The proposed method has great potential for improving CBCT image quality to a level close to planning CT, thereby allowing its quantitative use in CBCT-guided adaptive radiotherapy.
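The three reported image-quality metrics can be computed as follows. This is a generic sketch; the PSNR peak value is left as a parameter since the dynamic range used in the paper is not specified here.

```python
# Sketch of the MAE / PSNR / NCC image-quality metrics on toy 2x2 "images".
import numpy as np

def mae(a, b):
    """Mean absolute error, e.g. in HU."""
    return np.mean(np.abs(a - b))

def psnr(a, b, peak):
    """Peak signal-to-noise ratio in dB for a given dynamic-range peak."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def ncc(a, b):
    """Normalized cross correlation of two images (1.0 = perfect linear match)."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

ct = np.array([[0., 100.], [200., 300.]])
cbct = ct + 10.0  # a uniform 10 HU offset
```

A constant offset gives MAE = 10 HU and NCC = 1.0, since NCC is invariant to additive shifts.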
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Anees Dhabaan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hyunsuk Shim
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322; Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
19. Cherukuri V, Ssenyonga P, Warf BC, Kulkarni AV, Monga V, Schiff SJ. Learning Based Segmentation of CT Brain Images: Application to Postoperative Hydrocephalic Scans. IEEE Trans Biomed Eng 2017; 65:1871-1884. [PMID: 29989926 DOI: 10.1109/tbme.2017.2783305] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Hydrocephalus is a medical condition in which there is an abnormal accumulation of cerebrospinal fluid (CSF) in the brain. Segmentation of brain imagery into brain tissue and CSF [before and after surgery, i.e., preoperative (pre-op) versus postoperative (post-op)] plays a crucial role in evaluating surgical treatment. Segmentation of pre-op images is often a relatively straightforward problem and has been well researched. However, segmenting post-op computed tomographic (CT) scans becomes more challenging due to distorted anatomy and subdural hematoma collections pressing on the brain. Most intensity- and feature-based segmentation methods fail to separate subdurals from brain and CSF, as subdural geometry varies greatly across different patients and its intensity varies with time. We combat this problem with a learning approach that treats segmentation as supervised classification at the pixel level, i.e., a training set of CT scans with labeled pixel identities is employed. METHODS Our contributions include: 1) a dictionary learning framework that learns class (segment) specific dictionaries that can efficiently represent test samples from the same class while poorly representing corresponding samples from other classes; 2) quantification of the associated computation and memory footprint; and 3) a customized training and test procedure for segmenting post-op hydrocephalic CT images. RESULTS Experiments performed on infant CT brain images acquired from the CURE Children's Hospital of Uganda reveal the success of our method against state-of-the-art alternatives. We also demonstrate that the proposed algorithm is computationally less burdensome and degrades gracefully as the number of training samples decreases, enhancing its deployment potential.
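Contribution 1), classification by class-specific dictionaries via reconstruction error, can be sketched as below. Plain least-squares coding stands in for the sparse coding used in the paper, and the two toy "class" dictionaries are illustrative, not learned from CT data.

```python
# Sketch: assign a sample to the class whose dictionary reconstructs it best.
import numpy as np

def residual(sample, dictionary):
    """Reconstruction error of `sample` using the columns of `dictionary`."""
    coef, *_ = np.linalg.lstsq(dictionary, sample, rcond=None)
    return np.linalg.norm(sample - dictionary @ coef)

def classify(sample, dictionaries):
    """Pick the class whose dictionary represents the sample with lowest error."""
    errors = [residual(sample, d) for d in dictionaries]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
# Two toy "classes": dictionary atoms living in disjoint coordinate subspaces,
# so each dictionary represents its own class well and the other poorly.
d_brain = np.vstack([rng.standard_normal((3, 2)), np.zeros((3, 2))])  # dims 0-2
d_csf   = np.vstack([np.zeros((3, 2)), rng.standard_normal((3, 2))])  # dims 3-5
sample = d_brain @ np.array([1.0, -0.5])  # built from the "brain" dictionary
label = classify(sample, [d_brain, d_csf])
```

The sample lies in the span of the brain dictionary, so its brain-class residual is near zero while the CSF-class residual is large.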
20. Ma L, Guo R, Zhang G, Schuster DM, Fei B. A combined learning algorithm for prostate segmentation on 3D CT images. Med Phys 2017; 44:5768-5781. [PMID: 28834585 DOI: 10.1002/mp.12528] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Received: 03/22/2017] [Revised: 07/17/2017] [Accepted: 07/28/2017] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. METHODS We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process. Because of inter-patient variations, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on data from a group of prostate patients, and we train a patient-specific model based on data of the individual patient, incorporating information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge and compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, thus completing the segmentation of the gland on CT images. RESULTS The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared to the manual segmentation.
CONCLUSIONS By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications, including volume measurement and treatment planning of the prostate.
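The final likelihood-to-mask conversion and Dice evaluation can be sketched as follows. A fixed 0.5 threshold stands in for the paper's adaptive threshold, and the arrays are toy data.

```python
# Sketch: threshold a likelihood map into a binary mask and score it with Dice.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

likelihood = np.array([[0.9, 0.8, 0.1],
                       [0.7, 0.6, 0.2],
                       [0.1, 0.3, 0.1]])
mask = likelihood > 0.5          # fixed threshold; the paper adapts this value
reference = np.array([[1, 1, 0],
                      [1, 0, 0],
                      [0, 0, 0]], dtype=bool)
score = dice(mask, reference)
```

Here the mask has 4 foreground pixels, the reference has 3, and they overlap on 3, giving Dice = 6/7 ≈ 0.857.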
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Guoyi Zhang
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Winship Cancer Institute of Emory University, Atlanta, GA, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, GA, USA
21. Wachinger C, Brennan M, Sharp GC, Golland P. Efficient Descriptor-Based Segmentation of Parotid Glands With Nonlocal Means. IEEE Trans Biomed Eng 2017; 64:1492-1502. [PMID: 28113224 PMCID: PMC5469701 DOI: 10.1109/tbme.2016.2603119] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Indexed: 11/07/2022]
Abstract
OBJECTIVE We introduce descriptor-based segmentation that extends existing patch-based methods by combining intensities, features, and location information. Since it is unclear which image features are best suited for patch selection, we perform a broad empirical study on a multitude of different features. METHODS We extend nonlocal means segmentation by including image features and location information. We search larger windows with an efficient nearest neighbor search based on kd-trees. We compare a large number of image features. RESULTS The best results were obtained for entropy image features, which have not yet been used for patch-based segmentation. We further show that searching larger image regions with an approximate nearest neighbor search and location information yields a significant improvement over the bounded nearest neighbor search traditionally employed in patch-based segmentation methods. CONCLUSION Features and location information significantly increase the segmentation accuracy. The best features highlight boundaries in the image. SIGNIFICANCE Our detailed analysis of several aspects of nonlocal means-based segmentation yields new insights about patch and neighborhood sizes together with the inclusion of location information. The presented approach advances the state-of-the-art in the segmentation of parotid glands for radiation therapy planning.
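The descriptor idea, patch intensities concatenated with weighted location, can be sketched in 1-D. A brute-force nearest-neighbor search stands in for the kd-tree used in the paper, and the location weight `alpha` is an illustrative parameter.

```python
# Sketch: descriptor = [patch intensities, weighted position]; 1-NN label transfer.
import numpy as np

def descriptors(img, r=1, alpha=0.5):
    """Stack [patch intensities, alpha * position] for every pixel of a 1-D image."""
    padded = np.pad(img, r, mode="edge")
    return np.array([np.concatenate([padded[i:i + 2 * r + 1], [alpha * i]])
                     for i in range(len(img))])

def nn_label(target_desc, atlas_desc, atlas_labels):
    """Transfer the label of the closest atlas descriptor (1-NN fusion)."""
    d = np.linalg.norm(atlas_desc - target_desc, axis=1)
    return atlas_labels[int(np.argmin(d))]

atlas  = np.array([0., 0, 0, 1, 1, 1])
labels = np.array([0, 0, 0, 1, 1, 1])
target = np.array([0., 0, 0.1, 0.9, 1, 1])
a_desc = descriptors(atlas)
seg = np.array([nn_label(d, a_desc, labels) for d in descriptors(target)])
```

The location term keeps matches spatially plausible while the patch term tolerates the intensity noise at indices 2 and 3.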
22. Bao S, Chung ACS. Feature Sensitive Label Fusion With Random Walker for Atlas-Based Image Segmentation. IEEE Transactions on Image Processing 2017; 26:2797-2810. [PMID: 28410107 DOI: 10.1109/tip.2017.2691799] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 06/07/2023]
Abstract
In this paper, a novel label fusion method is proposed for brain magnetic resonance image segmentation. The method is formulated on a graph, which embraces both label priors from the atlases and anatomical priors from the target image. To represent a pixel comprehensively, three kinds of feature vectors are generated: intensity, gradient, and structural signature. To select candidate atlas nodes for fusion, a randomized k-d tree with a spatial constraint is introduced as an efficient approximation for high-dimensional feature matching, rather than exact searching. A feature sensitive label prior (FSLP), which takes both the consistency and the variety of different features into consideration, is proposed to gather atlas priors. As FSLP is a non-convex problem, a heuristic approach is further designed to solve it efficiently. Moreover, based on anatomical knowledge, some of the target pixels are also employed as graph seeds to assist the label fusion process, and an iterative strategy is utilized to gradually update the label map. Comprehensive experiments carried out on two publicly available databases demonstrate that the proposed method obtains better segmentation quality.
23. Nishio M, Nagashima C. Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity. Acad Radiol 2017; 24:328-336. [PMID: 28110797 DOI: 10.1016/j.acra.2016.11.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Received: 05/03/2016] [Revised: 10/14/2016] [Accepted: 11/02/2016] [Indexed: 10/20/2022]
Abstract
RATIONALE AND OBJECTIVES To develop a computer-aided diagnosis system to differentiate between malignant and benign nodules. MATERIALS AND METHODS Seventy-three lung nodules revealed on 60 sets of computed tomography (CT) images were analyzed. Contrast-enhanced CT was performed in 46 CT examinations. The images were provided by the LUNGx Challenge, and the ground truth of the lung nodules was unavailable; a surrogate ground truth was, therefore, constructed by radiological evaluation. Our proposed method involved novel patch-based feature extraction using principal component analysis, image convolution, and pooling operations. This method was compared to three other systems for the extraction of nodule features: histogram of CT density, local binary pattern on three orthogonal planes, and three-dimensional random local binary pattern. The probabilistic outputs of the systems and surrogate ground truth were analyzed using receiver operating characteristic analysis and area under the curve. The LUNGx Challenge team also calculated the area under the curve of our proposed method based on the actual ground truth of their dataset. RESULTS Based on the surrogate ground truth, the areas under the curve were as follows: histogram of CT density, 0.640; local binary pattern on three orthogonal planes, 0.688; three-dimensional random local binary pattern, 0.725; and the proposed method, 0.837. Based on the actual ground truth, the area under the curve of the proposed method was 0.81. CONCLUSIONS The proposed method could capture discriminative characteristics of lung nodules and was useful for the differentiation between malignant and benign nodules.
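The ROC analysis used for evaluation can be sketched via the rank (Mann-Whitney) formulation of the AUC; the labels and scores below are toy data, not the study's outputs.

```python
# Sketch: AUC as the probability that a random positive outscores a random negative.
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic; tied scores count as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2]
auc = roc_auc(labels, scores)
```

One positive (0.6) is outscored by one negative (0.7), so 8 of the 9 positive-negative pairs are ordered correctly: AUC = 8/9.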
24. Niu XK, Li J, Das SK, Xiong Y, Yang CB, Peng T. Developing a nomogram based on multiparametric magnetic resonance imaging for forecasting high-grade prostate cancer to reduce unnecessary biopsies within the prostate-specific antigen gray zone. BMC Med Imaging 2017; 17:11. [PMID: 28143433 PMCID: PMC5286806 DOI: 10.1186/s12880-017-0184-x] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Received: 12/05/2016] [Accepted: 01/26/2017] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Since the 1980s, the application of prostate-specific antigen (PSA) testing has revolutionized prostate cancer diagnosis. However, PSA is not an ideal screening tool because of its low specificity, which can lead to biopsies in patients without high-grade prostate cancer (HGPCa). Therefore, the aim of this study was to establish a predictive nomogram for HGPCa in patients with PSA 4-10 ng/ml based on the Prostate Imaging Reporting and Data System version 2 (PI-RADS v2), MRI-based prostate volume (PV), MRI-based PV-adjusted prostate-specific antigen density (adjusted-PSAD), and other traditional parameters. METHODS Between January 2014 and September 2015, 151 men who were eligible for analysis formed the training cohort. A prediction model for HGPCa was built using backward logistic regression and was presented as a nomogram. The prediction model was evaluated on a validation cohort recruited between October 2015 and October 2016 (n = 74). The relationship of the nomogram-based risk score and the other parameters with the Gleason score (GS) was evaluated. All patients underwent 12-core systematic biopsy and at least one targeted biopsy core under transrectal ultrasonographic guidance. RESULTS The multivariate analysis revealed that patient age, PI-RADS v2 score, and adjusted-PSAD were independent predictors of HGPCa. The logistic regression (LR) model had a larger AUC than the other parameters alone. The most discriminative cutoff value for the LR model was 0.36, at which the sensitivity, specificity, positive predictive value, and negative predictive value were 87.3%, 78.4%, 76.3%, and 90.4%, respectively, and the diagnostic performance measures retained similar values in the validation cohort (AUC 0.82 [95% CI, 0.76-0.89]). For all patients with HGPCa (n = 50), adjusted-PSAD and the nomogram-based risk score were positively correlated with the GS of HGPCa in the PSA gray zone (r = 0.455, P = 0.002 and r = 0.509, P = 0.001, respectively). CONCLUSION The nomogram based on multiparametric magnetic resonance imaging (mp-MRI) for forecasting HGPCa is effective: it could reduce unnecessary prostate biopsies in patients with PSA 4-10 ng/ml, and the nomogram-based risk score provides a more robust parameter for assessing the aggressiveness of HGPCa in the PSA gray zone.
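The operating-point metrics quoted in the RESULTS (sensitivity, specificity, PPV, and NPV at a cutoff) can be computed as follows. The labels and risk scores are toy data; only the 0.36 cutoff is taken from the abstract.

```python
# Sketch: confusion-matrix summary of a risk score thresholded at a cutoff.
import numpy as np

def operating_point(y_true, risk, cutoff):
    """Sensitivity/specificity/PPV/NPV of `risk >= cutoff` vs. binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    pred = np.asarray(risk, dtype=float) >= cutoff
    tp = np.sum(pred & y_true)
    fp = np.sum(pred & ~y_true)
    fn = np.sum(~pred & y_true)
    tn = np.sum(~pred & ~y_true)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

y    = [1, 1, 1, 0, 0, 0, 0, 1]
risk = [0.8, 0.5, 0.2, 0.4, 0.1, 0.3, 0.7, 0.9]
m = operating_point(y, risk, cutoff=0.36)
```

With these toy values the cutoff yields 3 true positives, 2 false positives, 1 false negative, and 2 true negatives.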
Affiliation(s)
- Xiang-ke Niu
- Department of Radiology, Affiliated Hospital of Chengdu University, Chengdu, 610081 China
- Jun Li
- Department of General Surgery, Affiliated Hospital of Chengdu University, No. 82 2nd North Section of Second Ring Road, Chengdu, Sichuan 610081 China
- Susant Kumar Das
- Department of Intervention Radiology, Tenth People’s Hospital of Tongji University, Shanghai, 200072 China
- Yan Xiong
- Department of Radiology, Affiliated Hospital of Chengdu University, Chengdu, 610081 China
- Chao-bing Yang
- Department of Radiology, Affiliated Hospital of Chengdu University, Chengdu, 610081 China
- Tao Peng
- Department of Radiology, Affiliated Hospital of Chengdu University, Chengdu, 610081 China
25.
Abstract
Automatic and reliable segmentation of the hippocampus from MR brain images is of great importance in studies of neurological diseases, such as epilepsy and Alzheimer's disease. In this paper, we propose a novel metric learning method to fuse segmentation labels in multi-atlas based image segmentation. Different from current label fusion methods, which typically adopt a predefined distance metric model to compute a similarity measure between image patches of the atlas images and the image to be segmented, we learn a distance metric model from the atlases that keeps image patches of the same structure close to each other while separating those of different structures. The learned distance metric model is then used to compute the similarity measure between image patches in the label fusion. The proposed method has been validated for segmenting the hippocampus on the EADC-ADNI dataset, with manually labelled hippocampi of 100 subjects. The experimental results demonstrated that our method achieved statistically significant improvement in segmentation accuracy compared with state-of-the-art multi-atlas image segmentation methods.
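The effect of a learned distance metric on patch similarity can be sketched with a Mahalanobis distance. Here the metric matrix is hand-built to down-weight a nuisance feature dimension, whereas the paper learns the metric from the atlases; the features and weights are illustrative.

```python
# Sketch: Mahalanobis patch distance d(x, y) = sqrt((x - y)^T M (x - y)).
import numpy as np

def mahalanobis(x, y, m):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ m @ d))

# Feature 0 carries structure; feature 1 is noisy, so the metric shrinks it.
m_learned = np.diag([1.0, 0.01])
same_structure = mahalanobis([1.0, 0.0], [1.0, 2.0], m_learned)  # noise-only difference
diff_structure = mahalanobis([1.0, 0.0], [2.0, 0.0], m_learned)  # structural difference
```

Under the plain Euclidean metric the noise-only pair would look *farther apart* (distance 2 vs. 1); the learned metric reverses that, which is exactly what a good patch similarity needs.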
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
- Hewei Cheng
- Department of Biomedical Engineering, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Xuesong Yang
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
26. Yang X, Rossi PJ, Jani AB, Mao H, Zhou Z, Curran WJ, Liu T. Improved prostate delineation in prostate HDR brachytherapy with TRUS-CT deformable registration technology: A pilot study with MRI validation. J Appl Clin Med Phys 2017; 18:202-210. [PMID: 28291925 PMCID: PMC5689894 DOI: 10.1002/acm2.12040] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Received: 11/01/2016] [Revised: 11/01/2016] [Accepted: 11/28/2016] [Indexed: 11/30/2022] Open
Abstract
Accurate prostate delineation is essential to ensure proper target coverage and normal-tissue sparing in prostate HDR brachytherapy. We have developed a prostate HDR brachytherapy technology that integrates the intraoperative TRUS-based prostate contour into HDR treatment planning through TRUS-CT deformable registration (TCDR) to improve prostate contour accuracy. In a prospective study of 16 patients, we investigated the clinical feasibility and performance of this TCDR-based HDR approach. We compared the TCDR-based approach with conventional CT-based HDR in terms of prostate contour accuracy, using MRI as the gold standard. For all patients, the average Dice prostate volume overlap was 91.1 ± 2.3% between the TCDR-based and the MRI-defined prostate volumes. In a subset of eight patients, an inter- and intra-observer reliability study was conducted among three experienced physicians (two radiation oncologists and one radiologist) for the TCDR-based HDR approach. Overall, a 10% to 40% improvement in prostate volume accuracy was achieved with the TCDR-based approach compared with the conventional CT-based prostate volumes. The TCDR-based prostate volumes matched the MRI-defined prostate volumes closely for all three observers (mean volume difference: 0.5 ± 7.2%, 1.8 ± 7.2%, and 3.5 ± 5.1%), while CT-based contours overestimated prostate volumes by 10.9 ± 28.7%, 13.7 ± 20.1%, and 44.7 ± 32.1%. This study has shown that TCDR-based HDR brachytherapy is clinically feasible and can significantly improve prostate contour accuracy over the conventional CT-based prostate contour. We also demonstrated the reliability of the TCDR-based prostate delineation. This TCDR-based HDR approach can enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcomes.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Peter J. Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Zhengyang Zhou
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing, China
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
27
Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1532-43. [PMID: 26800531 PMCID: PMC4918760 DOI: 10.1109/tmi.2016.2519264] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy, and the efficacy of radiation treatment depends highly on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape priors can be easily incorporated to regularize the segmentation. Nonetheless, sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts the 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of the deformable model, thus overcoming the initialization problem of traditional deformable models. To learn a reliable displacement regressor, two strategies are proposed: (1) a multi-task random forest learns the displacement regressor jointly with the organ classifier; (2) an auto-context model iteratively enforces structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, as well as several other existing methods for CT pelvic organ segmentation.
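The displacement-regression idea above, predicting each voxel's offset to the organ boundary from local appearance, can be sketched in one dimension. This is a simplified stand-in, not the paper's multi-task forest: a stock scikit-learn `RandomForestRegressor` is trained on toy step-edge "images", and `make_image` and all parameters are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# 1D toy "images": a noisy step edge stands in for the organ boundary.
# Each voxel's feature is its 5-voxel intensity patch; the regression
# target is the signed displacement from the voxel to the edge.
rng = np.random.default_rng(0)

def make_image(edge, n=32):
    return (np.arange(n) >= edge).astype(float) + rng.normal(0, 0.05, n)

X, y = [], []
for edge in range(8, 24):            # training images with known boundaries
    img = make_image(edge)
    for v in range(2, 30):
        X.append(img[v - 2:v + 3])   # local appearance patch
        y.append(edge - v)           # displacement from voxel to boundary
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# On a new image, a voxel sitting on the boundary predicts a small offset;
# in the paper such predictions act as a non-local force on model vertices.
img = make_image(16)
pred = forest.predict([img[14:19]])[0]   # patch centred on voxel 16
```

Each voxel's predicted displacement votes for a boundary position, which is what frees the deformable model from a careful initialization.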
Affiliation(s)
- Yaozong Gao
- Department of Computer Science, and Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Yeqin Shao
- Nantong University, Jiangsu 226019, China; and Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Andrew Z. Wang
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Ronald C. Chen
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA; and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
28
Guo Y, Gao Y, Shen D. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1077-89. [PMID: 26685226 PMCID: PMC5002995 DOI: 10.1109/tmi.2015.2508280] [Citation(s) in RCA: 123] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications, such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we learn a latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation integrates a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and that our method outperforms other state-of-the-art segmentation methods.
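The label-transfer core of patch matching — estimating a voxel's prostate likelihood from the most similar atlas patches — can be approximated with a similarity-weighted k-nearest-patch vote. This sketch replaces the paper's sparse coding with a simple k-NN cut-off; `patch_likelihood` and its parameters are hypothetical, and the 3-element "patches" stand in for learned feature vectors.

```python
import numpy as np

def patch_likelihood(target_patch, atlas_patches, atlas_labels, k=2, sigma=0.5):
    """Likelihood that the target voxel is prostate, by transferring labels
    from the k best-matching atlas patches (the k-NN cut-off is a crude
    stand-in for the sparsity constraint in sparse patch matching)."""
    d = np.linalg.norm(atlas_patches - target_patch, axis=1)
    idx = np.argsort(d)[:k]                      # keep the k nearest patches
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))  # similarity weights
    return float(np.dot(w, atlas_labels[idx]) / w.sum())

atlas_patches = np.array([[0.9, 1.0, 1.1],   # prostate-like patch
                          [1.0, 1.0, 0.9],   # prostate-like patch
                          [0.0, 0.1, 0.0]])  # background patch
atlas_labels = np.array([1.0, 1.0, 0.0])
print(patch_likelihood(np.array([1.0, 1.0, 1.0]), atlas_patches, atlas_labels))
```

A bright target patch matches the two prostate-like atlas patches and receives a likelihood near 1; a dark one matches the background patch and stays near 0.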
Affiliation(s)
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
29
Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2016; 9784. [PMID: 31467459 DOI: 10.1117/12.2216396] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature-learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature-selection process to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (the gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we developed a new prostate segmentation approach based on an optimal feature-learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
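The voxel-classification core of such a pipeline, a kernel SVM labeling each voxel from its feature vector, can be sketched with scikit-learn. The feature-extraction and feature-selection stages are omitted here, and the Gaussian clusters below are synthetic stand-ins for voxel features, not the authors' data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy voxel classifier in the spirit of the KSVM step: each "voxel" has a
# 5-dimensional feature vector and a prostate (1) / background (0) label.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)),    # background voxel features
               rng.normal(3, 1, (100, 5))])   # prostate voxel features
y = np.array([0] * 100 + [1] * 100)
clf = SVC(kernel="rbf").fit(X, y)

# Classify two new voxels, one from each cluster centre.
pred = clf.predict([[3, 3, 3, 3, 3], [0, 0, 0, 0, 0]])
print(list(pred))  # [1, 0]
```

Thresholding such per-voxel predictions over a whole volume yields the binary prostate mask that the deformable post-processing then smooths.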
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute
- Peter J Rossi
- Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute
30
Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 2016; 173:317-331. [PMID: 26752809 PMCID: PMC4704800 DOI: 10.1016/j.neucom.2014.11.098] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In recent years, there has been great interest in prostate segmentation, an important and challenging task in CT image guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then proceeds in two steps. In the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map. In the multi-atlas-based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been substantially evaluated on a real prostate CT dataset of 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state of the art in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
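The lasso-style feature-selection idea, an L1 penalty zeroing out uninformative voxel features before likelihood regression, can be illustrated with a plain (non-transductive) lasso. This is a stand-in for tLasso under synthetic data: of 20 candidate features, only the first two carry signal.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic voxel features: 300 samples, 20 candidate features, of which
# only features 0 and 1 actually drive the prostate-likelihood target.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.05 * rng.normal(size=300)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print(selected)  # the L1 penalty drives the noise features' weights to zero
```

The transductive variants in the paper additionally use the unlabeled test voxels when fitting, which a plain lasso does not capture.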
Affiliation(s)
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yaozong Gao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Shu Liao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, USA
31
Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2015; 37:2286-2303. [PMID: 26440268 DOI: 10.1109/tpami.2015.2424869] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Conventional learning-based methods for segmenting the prostate in CT images ignore the relations among low-level features by assuming all these features are independent. Their feature selection steps also usually neglect image appearance changes in different local regions of CT images. To this end, we present a novel semi-automatic learning-based prostate segmentation method in this article. For segmenting the prostate in a certain treatment image, the radiation oncologist is first asked to take a few seconds to manually specify the first and last slices of the prostate. Then, the prostate is segmented in two steps: (i) estimation of a 3D prostate-likelihood map, predicting the likelihood of each voxel being prostate by employing the coupled feature representation and the proposed Spatial-COnstrained Transductive LassO (SCOTO); (ii) multi-atlas-based label fusion to generate the final segmentation result, using the prostate shape information obtained from both planning and previous treatment images. The major contributions of the proposed method are: (i) incorporating the radiation oncologist's manual specification to aid segmentation, (ii) adopting coupled features to relax the previous assumption of feature independence for voxel representation, and (iii) developing SCOTO for joint feature selection across different local regions. Experimental results show that the proposed method outperforms state-of-the-art methods on a real-world prostate CT dataset consisting of 24 patients with 330 images in total, all of which were manually delineated by the radiation oncologist for performance evaluation. Moreover, our method is clinically feasible, since segmentation performance can be improved by requiring the radiation oncologist to spend only a few seconds on manual specification of the ending slices in the current treatment CT image.
32
Derraz F, Forzy G, Delebarre A, Taleb-Ahmed A, Oussalah M, Peyrodie L, Verclytte S. Prostate contours delineation using interactive directional active contours model and parametric shape prior model. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2015; 31. [PMID: 26009857 DOI: 10.1002/cnm.2726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Revised: 05/17/2015] [Accepted: 05/17/2015] [Indexed: 06/04/2023]
Abstract
Prostate contour delineation on magnetic resonance (MR) images is a challenging and important task in medical imaging, with applications in guiding biopsy, surgery, and therapy. While a fully automated method is highly desirable for this application, it can be very difficult due to the structure and surrounding tissues of the prostate gland. Traditional active contour-based delineation algorithms are typically quite successful for piecewise-constant images. Nevertheless, when MR images have diffuse edges or multiple similar objects in close proximity (e.g., the bladder close to the prostate), such approaches have proven unsuccessful. To mitigate these problems, we propose a new bi-stage contour delineation framework based on directional active contours (DAC) incorporating prior knowledge of the prostate shape. We first explicitly address the prostate contour delineation problem with a fast global DAC that incorporates both a statistical and a parametric shape prior model. In doing so, we exploit the global aspects of the contour delineation problem by incorporating user feedback into the delineation process, where it is shown that only a small amount of user input can resolve ambiguous scenarios raised by the DAC. In addition, once the prostate contours have been delineated, a cost functional is designed to incorporate both the user feedback and the parametric shape prior model. Using data from publicly available prostate MR datasets, including several challenging clinical datasets, we highlight the effectiveness and capability of the proposed algorithm. The algorithm has also been compared with several state-of-the-art methods.
Affiliation(s)
- Foued Derraz
- Telecommunications Laboratory, Technology Faculty, Abou Bekr Belkaïd University, Tlemcen 13000, Algeria
- Université Nord de France, F-59000 Lille, France
- Unité de Traitement de Signaux Biomédicaux, Faculté de médecine et maïeutique, Lille, France
- LAMIH UMR CNRS 8201, Le Mont Houy, Université de Valenciennes et Cambresis, 59313 Valenciennes, France
- Gérard Forzy
- Unité de Traitement de Signaux Biomédicaux, Faculté de médecine et maïeutique, Lille, France
- Groupement des Hôpitaux de l'Institut Catholique de Lille, France
- Arnaud Delebarre
- Groupement des Hôpitaux de l'Institut Catholique de Lille, France
- Abdelmalik Taleb-Ahmed
- Université Nord de France, F-59000 Lille, France
- LAMIH UMR CNRS 8201, Le Mont Houy, Université de Valenciennes et Cambresis, 59313 Valenciennes, France
- Mourad Oussalah
- School of Electronics, Electrical and Computer Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Laurent Peyrodie
- Université Nord de France, F-59000 Lille, France
- Hautes Études d'Ingénieur, 13 rue de Toul, 59000 Lille, France
33
Park SH, Gao Y, Shen D. Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion. IEEE Trans Biomed Eng 2015; 63:1208-1219. [PMID: 26485353 DOI: 10.1109/tbme.2015.2491612] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We propose a novel multiatlas-based segmentation method to address the segmentation editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multiatlas-based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well-matched with both the interactions and the previous segmentation are identified. Then, the segmentation is updated through voxelwise label fusion of the selected atlas label patches, with their weights derived from the distances of each underlying voxel to the interactions. Since the atlas label patches well-matched with different local combinations are used in the fusion step, our method can consider various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method does not depend on either image appearance or sophisticated learning steps, it can be easily applied to general editing problems. To demonstrate the generality of our method, we apply it to editing segmentations of CT prostate, CT brainstem, and MR hippocampus, respectively. Experimental results show that our method outperforms existing editing methods in all three datasets.
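The interaction-guided weighting can be caricatured in a few lines: candidate atlas label patches that agree with a user click receive larger fusion weights. `interaction_weighted_fusion` and its exponential weighting rule are invented for illustration; the paper derives weights from voxel-to-interaction distances rather than this simple agreement bonus.

```python
import numpy as np

def interaction_weighted_fusion(label_patches, click_pos, click_label, beta=2.0):
    """Fuse candidate atlas label patches, up-weighting those that agree
    with a user click (a toy version of interaction-guided patch weighting)."""
    patches = np.stack(label_patches).astype(float)
    agree = (patches[:, click_pos] == click_label).astype(float)
    w = np.exp(beta * agree)          # bonus weight for agreeing patches
    w /= w.sum()
    fused = w @ patches               # voxel-wise weighted vote
    return (fused > 0.5).astype(np.uint8)

patches = [np.array([1, 1, 0, 0]),
           np.array([0, 0, 1, 1]),
           np.array([1, 0, 0, 0])]
# The user clicks voxel 0 and marks it as foreground (label 1).
fused = interaction_weighted_fusion(patches, click_pos=0, click_label=1)
print(fused)  # the two patches agreeing with the click dominate the vote
```

The clicked voxel is guaranteed to end up with the user's label, while the rest of the patch is decided by the reweighted vote.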
34
Shao Y, Gao Y, Wang Q, Yang X, Shen D. Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images. Med Image Anal 2015; 26:345-56. [PMID: 26439938 DOI: 10.1016/j.media.2015.06.007] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 04/17/2015] [Accepted: 06/17/2015] [Indexed: 11/24/2022]
Abstract
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable organ (relative) position, and the uncertain existence of bowel gas across different patients. Recently, the regression forest was adopted for deformable organ segmentation in 2D medical images by training one landmark detector for each point on the shape model. However, it is impractical for a regression forest to guide 3D deformable segmentation as a landmark detector, due to the large number of vertices in the 3D shape model as well as the difficulty in building accurate 3D vertex correspondence for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of the regression forest for prostate and rectum segmentation. The contributions of this paper are as follows: (1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, which avoids training a large number of landmark detectors and building accurate 3D vertex correspondences; (2) an auto-context model is integrated with the regression forest to improve the accuracy of the boundary regression; (3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT dataset with 70 images from 70 different patients. The experimental results show that our boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance.
Affiliation(s)
- Yeqin Shao
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China; Nantong University, Jiangsu 226019, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Computer Science, University of North Carolina at Chapel Hill, NC 27599, United States
- Qian Wang
- Med-X Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xin Yang
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
35
Guo Y, Wu G, Yap PT, Jewells V, Lin W, Shen D. Segmentation of Infant Hippocampus Using Common Feature Representations Learned for Multimodal Longitudinal Data. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2015; 9351:63-71. [PMID: 27019875 DOI: 10.1007/978-3-319-24574-4_8] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Aberrant development of the human brain during the first year after birth is known to cause critical implications in later stages of life. In particular, neuropsychiatric disorders, such as attention deficit hyperactivity disorder (ADHD), have been linked with abnormal early development of the hippocampus. Despite its known importance, studying the hippocampus in infant subjects is very challenging due to the significantly smaller brain size, dynamically varying image contrast, and large across-subject variation. In this paper, we present a novel method for effective hippocampus segmentation by using a multi-atlas approach that integrates the complementary multimodal information from longitudinal T1 and T2 MR images. In particular, considering the highly heterogeneous nature of the longitudinal data, we propose to learn their common feature representations by using hierarchical multi-set kernel canonical correlation analysis (CCA). Specifically, we will learn (1) within-time-point common features by projecting different modality features of each time point to its own modality-free common space, and (2) across-time-point common features by mapping all time-point-specific common features to a global common space for all time points. These final features are then employed in patch matching across different modalities and time points for hippocampus segmentation, via label propagation and fusion. Experimental results demonstrate the improved performance of our method over the state-of-the-art methods.
Affiliation(s)
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Valerie Jewells
- Department of Radiology, University of North Carolina at Chapel Hill, NC, USA
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
36
Iglesias JE, Sabuncu MR. Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. [PMID: 26201875 PMCID: PMC4532640 DOI: 10.1016/j.media.2015.06.012] [Citation(s) in RCA: 371] [Impact Index Per Article: 37.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 06/12/2015] [Accepted: 06/15/2015] [Indexed: 10/23/2022]
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
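The simplest label-fusion rule in the MAS family, per-voxel majority voting over registered atlas label maps, fits in a few lines. This is a generic illustration of the baseline rule, not tied to any one surveyed method:

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Per-voxel strict-majority vote over registered binary atlas label
    maps -- the baseline MAS label-fusion rule."""
    stack = np.stack([np.asarray(l, dtype=np.uint8) for l in atlas_labels])
    votes = stack.sum(axis=0)
    return (2 * votes > stack.shape[0]).astype(np.uint8)

# Three toy atlas label maps, already warped to the target space.
atlases = [np.array([1, 1, 0, 0]),
           np.array([1, 0, 1, 0]),
           np.array([1, 1, 0, 0])]
print(majority_vote_fusion(atlases))  # [1 1 0 0]
```

Most of the algorithmic sophistication the survey catalogues (weighted voting, STAPLE-style estimation, patch-based fusion) refines exactly this voting step.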
Affiliation(s)
- Mert R Sabuncu
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
37
Park SH, Gao Y, Shi Y, Shen D. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection. Med Phys 2015; 41:111715. [PMID: 25370629 DOI: 10.1118/1.4898200] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. METHODS The authors formulate the editing problem as a semisupervised learning problem that can utilize a priori knowledge from training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, are locally searched from a training set. By voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced in the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by the regularized semisupervised learning algorithm. RESULTS The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both efficiency and robustness. The automatic segmentation results, with an original average Dice similarity coefficient of 0.78, were improved to 0.865-0.872 after conducting 55-59 interactions using the proposed method, where each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results with respect to different user interactions, compared to other methods. CONCLUSIONS The proposed method obtains robust editing results with few interactions for various wrong segmentation cases, by selecting location-adaptive features and further imposing manifold regularization. The authors expect the proposed method to largely reduce the laborious burden of manual editing, as well as both the intra- and interobserver variability across clinicians.
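Filling in unconfident voxels from a few confident ones is a classic graph-based semi-supervised step. Scikit-learn's `LabelSpreading` (a generic stand-in for the paper's regularized algorithm) illustrates it on synthetic 2D "voxel features", where -1 marks unconfident voxels awaiting labels:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Two well-separated clusters of "voxel features": background near (0, 0)
# and prostate near (2, 2). Only a few voxels carry confident labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),   # background voxels
               rng.normal(2, 0.3, (30, 2))])  # prostate voxels
y = np.full(60, -1)        # -1 = unconfident voxel
y[:3] = 0                  # a few confident background voxels
y[30:33] = 1               # a few confident prostate voxels

model = LabelSpreading(kernel="knn", n_neighbors=5).fit(X, y)
# model.transduction_ holds the propagated label for every voxel.
```

Labels diffuse along the k-NN graph, so each unconfident voxel inherits the label of the cluster it sits in.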
Affiliation(s)
- Sang Hyun Park
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Republic of Korea
38
Dai X, Gao Y, Shen D. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 2015; 42:2594-606. [PMID: 25979051 PMCID: PMC4409630 DOI: 10.1118/1.4918755] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2014] [Revised: 02/22/2015] [Accepted: 03/20/2015] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In image guided radiation therapy, it is crucial to fast and accurately localize the prostate in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and can achieve improved performance in landmark detection and prostate segmentation. METHODS To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included into the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or even adjusted if needed) by clinicians before including it as a new shape example into the prostate shape dataset for helping localize the entire prostate in the next treatment image. RESULTS The experimental results on 330 images of 24 patients show the effectiveness of the authors' proposed online update scheme in improving the accuracies of both landmark detection and prostate segmentation. 
Moreover, compared with other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. CONCLUSIONS By appropriate use of valuable patient-specific information contained in the previous treatment images, the authors' proposed online update scheme can obtain satisfactory results for both landmark detection and prostate segmentation.
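The online update idea above can be sketched in a few lines. This is a toy stand-in I wrote, not the authors' code: a k-nearest-neighbour regressor replaces the two-layer regression forest, and the "training pool" simply accumulates clinician-reviewed (feature, landmark) pairs from each treatment day.

```python
import numpy as np

class OnlineLandmarkDetector:
    """Toy online-updated landmark detector (k-NN stands in for the forest)."""

    def __init__(self, k=3):
        self.k = k
        self.features = []   # appearance features around the landmark
        self.positions = []  # clinician-reviewed landmark coordinates

    def update(self, feature, position):
        # Add one reviewed treatment image to the patient-specific pool.
        self.features.append(np.asarray(feature, float))
        self.positions.append(np.asarray(position, float))

    def detect(self, feature):
        # Predict the landmark as the mean position of the k nearest pool entries.
        F = np.stack(self.features)
        d = np.linalg.norm(F - np.asarray(feature, float), axis=1)
        idx = np.argsort(d)[: self.k]
        return np.stack(self.positions)[idx].mean(axis=0)
```

As in the paper's workflow, each call to `update` corresponds to a reviewed treatment day, so the detector gradually "personalizes" to the current patient.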
Affiliation(s)
- Xiubin Dai: College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210015, China and IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Yaozong Gao: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Dinggang Shen: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510 and Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
39
Nouranian S, Mahdavi SS, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. A multi-atlas-based segmentation framework for prostate brachytherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:950-961. [PMID: 25474806 DOI: 10.1109/tmi.2014.2371823] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
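The pairwise atlas agreement factor can be illustrated with a small sketch (names and specifics are my own simplifications, not the authors' implementation): each atlas is scored by an image-similarity term (normalized cross-correlation here) combined with a contour-agreement term (mean Dice against the other atlases' contours), the top-scoring atlases are kept, and their labels are fused by majority vote.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two images (flattened arrays).
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def dice(a, b):
    # Dice overlap between two binary label maps.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum() + 1e-12)

def select_and_fuse(target, atlas_images, atlas_labels, keep=2):
    """Score atlases by image similarity x contour agreement, then fuse."""
    scores = []
    for i, (img, lab) in enumerate(zip(atlas_images, atlas_labels)):
        sim = ncc(target, img)
        # contour agreement with the remaining atlases
        agree = np.mean([dice(lab, atlas_labels[j])
                         for j in range(len(atlas_labels)) if j != i])
        scores.append(sim * agree)
    best = np.argsort(scores)[::-1][:keep]          # prune the atlas set
    votes = np.mean([atlas_labels[i] for i in best], axis=0)
    return (votes >= 0.5).astype(int)               # consensus segmentation
```

The pruning step is what distinguishes this scheme from plain majority voting: an atlas that looks similar to the target but whose contour disagrees with the rest is down-weighted before fusion.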
40
Song Y, Cai W, Huang H, Zhou Y, Wang Y, Feng DD. Locality-constrained Subcluster Representation Ensemble for lung image classification. Med Image Anal 2015; 22:102-13. [PMID: 25839422 DOI: 10.1016/j.media.2015.03.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2014] [Revised: 03/06/2015] [Accepted: 03/13/2015] [Indexed: 11/30/2022]
Abstract
In this paper, we propose a new Locality-constrained Subcluster Representation Ensemble (LSRE) model, to classify high-resolution computed tomography (HRCT) images of interstitial lung diseases (ILDs). Medical images normally exhibit large intra-class variation and inter-class ambiguity in the feature space. Modelling of feature space separation between different classes is thus problematic and this affects the classification performance. Our LSRE model tackles this issue in an ensemble classification construct. The image set is first partitioned into subclusters based on spectral clustering with approximation-based affinity matrix. Basis representations of the test image are then generated with sparse approximation from the subclusters. These basis representations are finally fused with approximation- and distribution-based weights to classify the test image. Our experimental results on a large HRCT database show good performance improvement over existing popular classifiers.
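A minimal sketch of the ensemble construct (a toy stand-in under my own simplifications — plain least squares replaces sparse approximation, and the subclusters are assumed given rather than produced by spectral clustering): the test sample is approximated within each subcluster, each subcluster casts a class vote, and votes are fused with approximation-residual weights.

```python
import numpy as np

def lsre_classify(x, subclusters):
    """subclusters: list of (features N x D, labels length-N) tuples."""
    votes, weights = [], []
    for feats, labels in subclusters:
        # Basis representation of x from this subcluster (least squares).
        coef, *_ = np.linalg.lstsq(feats.T, x, rcond=None)
        resid = np.linalg.norm(feats.T @ coef - x)
        w = np.exp(-resid)                    # approximation-based weight
        # Class score = summed coefficient magnitude per class.
        classes = np.unique(labels)
        score = {c: np.abs(coef[labels == c]).sum() for c in classes}
        votes.append(max(score, key=score.get))
        weights.append(w)
    # Weighted vote over the subcluster-level decisions.
    tally = {}
    for v, w in zip(votes, weights):
        tally[v] = tally.get(v, 0.0) + w
    return max(tally, key=tally.get)
```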
Affiliation(s)
- Yang Song: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Weidong Cai: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Heng Huang: Department of Computer Science and Engineering, University of Texas, Arlington, TX 76019, USA
- Yun Zhou: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yue Wang: Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA
- David Dagan Feng: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
41
Shi W, Lombaert H, Bai W, Ledig C, Zhuang X, Marvao A, Dawes T, O'Regan D. Multi-atlas spectral PatchMatch: application to cardiac image segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:348-55. [PMID: 25333137 DOI: 10.1007/978-3-319-10404-1_44] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
The automatic segmentation of cardiac magnetic resonance images poses many challenges arising from the large variation between different anatomies, scanners and acquisition protocols. In this paper, we address these challenges with a global graph search method and a novel spectral embedding of the images. Firstly, we propose the use of an approximate graph search approach to initialize patch correspondences between the image to be segmented and a database of labelled atlases. Then, we propose an innovative spectral embedding using a multi-layered graph of the images in order to capture global shape properties. Finally, we estimate the patch correspondences based on a joint spectral representation of the image and atlases. We evaluated the proposed approach using 155 images from the recent MICCAI SATA segmentation challenge and demonstrated that the proposed algorithm significantly outperforms current state-of-the-art methods on both training and test sets.
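The joint spectral matching step can be illustrated with a toy sketch (details are my own: raw feature vectors replace image patches, and only the leading nontrivial Laplacian eigenvector is used): target and atlas patches are embedded together via the eigenvectors of a joint graph Laplacian, and correspondences are taken as nearest neighbours in that shared spectral space rather than in raw intensity space.

```python
import numpy as np

def spectral_match(target_feats, atlas_feats, dims=1, sigma=1.0):
    """Match each target patch to an atlas patch in a joint spectral embedding."""
    X = np.vstack([np.asarray(target_feats, float),
                   np.asarray(atlas_feats, float)])
    # Gaussian affinity over the joint patch set.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    L = np.diag(W.sum(1)) - W                 # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    emb = vecs[:, 1:1 + dims]                 # skip the constant eigenvector
    n = len(target_feats)
    tgt, atl = emb[:n], emb[n:]
    # Nearest atlas patch in spectral space, for every target patch.
    return [int(np.argmin(((atl - t) ** 2).sum(1))) for t in tgt]
```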
42
Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014; 41:111915. [PMID: 25370648 PMCID: PMC4241831 DOI: 10.1118/1.4897615] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2014] [Revised: 09/22/2014] [Accepted: 09/24/2014] [Indexed: 11/07/2022] Open
Abstract
PURPOSE The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering a high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS The authors' approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of the authors' approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. RESULTS For the phantom study, the target registration error (TRE) of the gold markers was 0.41 ± 0.11 mm.
For the ten patients, the TRE of gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors' approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS The authors have developed a novel approach to improve prostate contour utilizing intraoperative TRUS-based prostate volume in the CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineations, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
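The landmark-driven fusion step can be sketched under a deliberate simplification: the paper applies a deformable registration, but the core idea — estimate a transform from paired catheter landmarks and map the TRUS contour into CT space — is shown here with a least-squares rigid (Kabsch/Procrustes) fit instead.

```python
import numpy as np

def rigid_fit(src, dst):
    """Return R, t minimising ||R @ src_i + t - dst_i|| over paired landmarks."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)             # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def map_contour(contour, R, t):
    # Apply the estimated transform to TRUS contour points.
    return (np.asarray(contour, float) @ R.T) + t
```

In the paper's setting, `src`/`dst` would be the catheter points reconstructed from the intraoperative TRUS and the planning CT, respectively.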
Affiliation(s)
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao: Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
43
Shao Y, Gao Y, Guo Y, Shi Y, Yang X, Shen D. Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1761-80. [PMID: 25181734 DOI: 10.1109/tmi.2014.2305691] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Lung field segmentation in the posterior-anterior (PA) chest radiograph is important for pulmonary disease diagnosis and hemodialysis treatment. Due to high shape variation and boundary ambiguity, accurate lung field segmentation from chest radiograph is still a challenging task. To tackle these challenges, we propose a joint shape and appearance sparse learning method for robust and accurate lung field segmentation. The main contributions of this paper are: 1) a robust shape initialization method is designed to achieve an initial shape that is close to the lung boundary under segmentation; 2) a set of local sparse shape composition models are built based on local lung shape segments to overcome the high shape variations; 3) a set of local appearance models are similarly adopted by using sparse representation to capture the appearance characteristics in local lung boundary segments, thus effectively dealing with the lung boundary ambiguity; 4) a hierarchical deformable segmentation framework is proposed to integrate the scale-dependent shape and appearance information together for robust and accurate segmentation. Our method is evaluated on 247 PA chest radiographs in a public dataset. The experimental results show that the proposed local shape and appearance models outperform the conventional shape and appearance models. Compared with most of the state-of-the-art lung field segmentation methods under comparison, our method also shows a higher accuracy, which is comparable to the inter-observer annotation variation.
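The sparse shape composition idea in contribution 2) can be sketched in miniature (my own toy variant, not the authors' model): a shape vector is approximated by a sparse linear combination of training shapes selected greedily, so an implausible detected shape is projected back onto the learned shape space.

```python
import numpy as np

def sparse_compose(shape, dictionary, n_atoms=2):
    """Greedy sparse approximation of `shape` from dictionary columns."""
    D = np.asarray(dictionary, float)         # D x K, columns = training shapes
    s = np.asarray(shape, float)
    residual = s.copy()
    support = []
    for _ in range(n_atoms):
        corr = np.abs(D.T @ residual)         # correlation with each atom
        corr[support] = -np.inf               # do not reselect an atom
        support.append(int(np.argmax(corr)))
        sub = D[:, support]
        # Refit coefficients on the selected atoms (orthogonal MP style).
        coef, *_ = np.linalg.lstsq(sub, s, rcond=None)
        residual = s - sub @ coef
    return sub @ coef, sorted(support)
```

The paper builds such models locally, per lung-boundary segment, which is what lets them absorb the high shape variation a single global model would miss.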
44
Wu Y, Liu G, Huang M, Guo J, Jiang J, Yang W, Chen W, Feng Q. Prostate segmentation based on variant scale patch and local independent projection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1290-1303. [PMID: 24893258 DOI: 10.1109/tmi.2014.2308901] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Accurate segmentation of the prostate in computed tomography (CT) images is important in image-guided radiotherapy; however, the task remains difficult. In this study, an automatic framework is designed for prostate segmentation in CT images. We propose a novel image feature extraction method, namely, the variant scale patch, which provides rich image information in a low-dimensional feature space. We assume that samples from different classes lie on different nonlinear submanifolds and design a new segmentation criterion called local independent projection (LIP). In our method, a dictionary containing training samples is constructed. To utilize the latest image information, we use an online update strategy to construct this dictionary. In the proposed LIP, locality is emphasized rather than sparsity; local anchor embedding is performed to determine the dictionary coefficients. Several morphological operations are performed to improve the achieved results. The proposed method has been evaluated based on 330 3-D images of 24 patients. Results show that the proposed method is robust and effective in segmenting the prostate in CT images.
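The locality-over-sparsity idea can be condensed into a small sketch (assumptions mine: local anchor embedding is replaced by a closed-form equality-constrained least squares, and class labels decide by reconstruction error): a test patch is projected onto its k nearest dictionary atoms of each class with coefficients constrained to sum to one, and the class with the smaller reconstruction error wins.

```python
import numpy as np

def lip_error(x, atoms, k=2):
    """Reconstruction error of x from its k nearest atoms, sum(c) = 1."""
    A = np.asarray(atoms, float)
    x = np.asarray(x, float)
    d = np.linalg.norm(A - x, axis=1)
    N = A[np.argsort(d)[:k]].T                # D x k local anchors
    # minimise ||N c - x||  s.t.  sum(c) = 1, via the KKT system
    G = N.T @ N + 1e-9 * np.eye(k)            # tiny jitter for stability
    ones = np.ones(k)
    lam = (ones @ np.linalg.solve(G, N.T @ x) - 1) / (ones @ np.linalg.solve(G, ones))
    c = np.linalg.solve(G, N.T @ x - lam * ones)
    return float(np.linalg.norm(N @ c - x))

def lip_classify(x, class_atoms, k=2):
    """class_atoms: dict mapping class name -> list of atom vectors."""
    errs = {c: lip_error(x, atoms, k) for c, atoms in class_atoms.items()}
    return min(errs, key=errs.get)
```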
45
Wang L, Shi F, Gao Y, Li G, Gilmore JH, Lin W, Shen D. Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation. Neuroimage 2014; 89:152-64. [PMID: 24291615 PMCID: PMC3944142 DOI: 10.1016/j.neuroimage.2013.11.040] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2013] [Revised: 10/21/2013] [Accepted: 11/18/2013] [Indexed: 01/18/2023] Open
Abstract
Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination processes. During the first year of life, the brain image contrast between white and gray matters undergoes dramatic changes. In particular, the image contrast inverts around 6-8 months of age, where the white and gray matter tissues are isointense in T1 and T2 weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporate the anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as on 10 other unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter.
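The multi-modality patch fusion step can be sketched as follows (a simplification I wrote, not the authors' pipeline): features from the T1, T2 and FA patches are concatenated, library patches are weighted by similarity to the target patch, and their tissue labels are averaged into a soft segmentation.

```python
import numpy as np

def fuse_labels(target_patches, library_patches, library_labels, h=1.0):
    """target_patches: per-modality arrays for one voxel; library_patches:
    list of per-modality tuples; library_labels: tissue label per entry."""
    t = np.concatenate(target_patches)                     # fuse modalities
    L = np.stack([np.concatenate(p) for p in library_patches])
    # Similarity-based weights over the library (non-local-means style).
    w = np.exp(-np.sum((L - t) ** 2, axis=1) / (h ** 2))
    w /= w.sum()
    return float(w @ np.asarray(library_labels, float))    # soft label in [0, 1]
```

In the paper this per-voxel estimate is only the initial segmentation; the anatomical constraint then refines it iteratively.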
Affiliation(s)
- Li Wang: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Feng Shi: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Yaozong Gao: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina at Chapel Hill, NC, USA
- Gang Li: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- John H Gilmore: Department of Psychiatry, University of North Carolina at Chapel Hill, NC, USA
- Weili Lin: MRI Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
46
Yang X, Rossi P, Ogunleye T, Jani AB, Curran WJ, Liu T. A New CT Prostate Segmentation for CT-Based HDR Brachytherapy. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9036:90362K. [PMID: 25821388 DOI: 10.1117/12.2043695] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland, and then delivering radiation dose to the cancerous regions through these catheters. These catheters are often inserted with transrectal ultrasound (TRUS) guidance and the HDR treatment plan is based on the CT images. The main challenge for CT-based HDR planning is to accurately segment prostate volume in CT images due to the poor soft tissue contrast and additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach to segment the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intra-operative TRUS and planning CT images, and then used as landmarks for the TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure was used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated a submillimeter accuracy of our method. A pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate that were used to evaluate the registration accuracy, as well as previous diagnostic MR images that were used as the gold standard to assess the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm; the prostate volume difference between our approach and the MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcome.
Affiliation(s)
- Xiaofeng Yang: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B Jani: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu: Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
47
Fulham MJ, Feng DD. Lesion detection and characterization with context driven approximation in thoracic FDG PET-CT images of NSCLC studies. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:408-421. [PMID: 24235248 DOI: 10.1109/tmi.2013.2285931] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We present a lesion detection and characterization method for (18)F-fluorodeoxyglucose positron emission tomography-computed tomography (FDG PET-CT) images of the thorax in the evaluation of patients with primary non-small cell lung cancer (NSCLC) with regional nodal disease. Lesion detection can be difficult due to low contrast between lesions and normal anatomical structures. Lesion characterization is also challenging due to similar spatial characteristics between the lung tumors and abnormal lymph nodes. To tackle these problems, we propose a context driven approximation (CDA) method. There are two main components of our method. First, a sparse representation technique with region-level contexts was designed for lesion detection. To discriminate low-contrast data with sparse representation, we propose a reference consistency constraint and a spatial consistency constraint. Second, a multi-atlas technique with image-level contexts was designed to represent the spatial characteristics for lesion characterization. To accommodate inter-subject variation in a multi-atlas model, we propose an appearance constraint and a similarity constraint. The CDA method is effective with a simple feature set, and does not require parametric modeling of feature space separation. The experiments on a clinical FDG PET-CT dataset show promising performance improvement over the state-of-the-art.
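A minimal stand-in for the reference-consistency and spatial-consistency ideas (my own abstraction, far simpler than the CDA method itself): each region is reconstructed from a dictionary of normal-anatomy features, regions the normal dictionary reconstructs poorly (high residual) are flagged as candidate lesions, and a neighbourhood majority vote enforces a crude spatial consistency on the detections.

```python
import numpy as np

def detect_lesions(feats, normal_dict, thresh=0.5):
    """Flag regions whose features a normal-anatomy dictionary cannot explain."""
    D = np.asarray(normal_dict, float).T          # columns are normal atoms
    flags = []
    for x in np.asarray(feats, float):
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        flags.append(np.linalg.norm(D @ coef - x) > thresh)
    # Spatial consistency: majority vote over each region and its neighbours
    # (regions are assumed to be ordered so that adjacency = list adjacency).
    out = []
    for i in range(len(flags)):
        nb = flags[max(0, i - 1): i + 2]
        out.append(sum(nb) * 2 > len(nb))
    return out
```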
48
Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:518-34. [PMID: 24495983 PMCID: PMC4379484 DOI: 10.1109/tmi.2013.2291495] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼ 0.89) and quickly (∼4 s), which satisfies the real-world clinical requirements of IGRT.
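The backward-pruning / forward-learning steps can be condensed into a hypothetical sketch (mine, not the authors' implementation — a 1-nearest-neighbour classifier stands in for the "any discriminative classifier" the framework allows): population samples that conflict with the patient's reviewed data are pruned, patient-specific samples are appended, and the classifier is refit on the updated pool.

```python
import numpy as np

def ilsm_update(pop_X, pop_y, pat_X, pat_y):
    """Backward pruning of the population pool, then forward learning."""
    pop_X, pop_y = np.asarray(pop_X, float), np.asarray(pop_y)
    pat_X, pat_y = np.asarray(pat_X, float), np.asarray(pat_y)
    keep = []
    for i, (x, y) in enumerate(zip(pop_X, pop_y)):
        # Backward pruning: drop population samples whose nearest
        # patient-specific sample carries a conflicting label.
        j = np.argmin(np.linalg.norm(pat_X - x, axis=1))
        if pat_y[j] == y:
            keep.append(i)
    X = np.vstack([pop_X[keep], pat_X])       # forward learning
    y = np.concatenate([pop_y[keep], pat_y])
    return X, y

def nn_predict(X, y, query):
    # The discriminative classifier: here, plain 1-nearest-neighbour.
    return y[np.argmin(np.linalg.norm(X - np.asarray(query, float), axis=1))]
```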
Affiliation(s)
- Yaozong Gao: Department of Computer Science and the Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Yiqiang Zhan: SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355 USA
- Dinggang Shen: Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 136-701, Korea
49
Guo Y, Wu G, Commander LA, Szary S, Jewells V, Lin W, Shen D. Segmenting hippocampus from infant brains by sparse patch matching with deep-learned features. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:308-15. [PMID: 25485393 DOI: 10.1007/978-3-319-10470-6_39] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Accurate segmentation of the hippocampus from infant MR brain images is a critical step for investigating early brain development. Unfortunately, the previous tools developed for adult hippocampus segmentation are not suitable for infant brain images acquired during the first year of life, which often have poor tissue contrast and variable structural patterns of early hippocampal development. From our point of view, the main problem is the lack of discriminative and robust feature representations for distinguishing the hippocampus from the surrounding brain structures. Thus, instead of directly using the predefined features as popularly used in the conventional methods, we propose to learn the latent feature representations of infant MR brain images by unsupervised deep learning. Since deep learning paradigms can learn low-level features and then successfully build up more comprehensive high-level features in a layer-by-layer manner, such hierarchical feature representations can be more competitive for distinguishing the hippocampus from entire brain images. To this end, we apply a Stacked Auto-Encoder (SAE) to learn the deep feature representations from both T1- and T2-weighted MR images combining their complementary information, which is important for characterizing different development stages of infant brains after birth. Then, we present a sparse patch matching method for transferring hippocampus labels from multiple atlases to the new infant brain image, by using deep-learned feature representations to measure the interpatch similarity. Experimental results on 2-week-old to 9-month-old infant brain images show the effectiveness of the proposed method, especially compared to the state-of-the-art counterpart methods.
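The sparse patch matching step can be sketched in isolation (the "deep" features are replaced here by precomputed feature vectors, since training an SAE is out of scope for a sketch): for a target patch, only its k most similar atlas patches receive nonzero weight, and the hippocampus labels of those patches are transferred as a similarity-weighted average.

```python
import numpy as np

def sparse_patch_label(target_feat, atlas_feats, atlas_labels, k=2):
    """Transfer labels from the k nearest atlas patches in feature space."""
    A = np.asarray(atlas_feats, float)
    t = np.asarray(target_feat, float)
    d = np.linalg.norm(A - t, axis=1)          # deep-feature distances
    nearest = np.argsort(d)[:k]                # sparsity: k atoms only
    w = np.exp(-d[nearest])
    w /= w.sum()
    return float(w @ np.asarray(atlas_labels, float)[nearest])
```

The paper's point is that this transfer is only as good as the feature distance `d`; learned features make it robust where raw infant-MR intensities are ambiguous.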