1. Wang S, Liu M, Lian J, Shen D. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE Trans Med Imaging 2021;40:310-320. [PMID: 32956051; PMCID: PMC8202780; DOI: 10.1109/tmi.2020.3025517]
Abstract
Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step in prostate cancer radiotherapy. Unfortunately, unclear organ boundaries and large shape variations make the segmentation task very challenging. Previous studies usually used representations defined directly on the unclear boundaries as context information to guide segmentation; such boundary representations may not be discriminative enough, resulting in limited performance improvement. To this end, we propose a novel boundary coding network (BCnet) to learn a discriminative representation of the organ boundary and use it as context information to guide segmentation. Specifically, we design a two-stage learning strategy in the proposed BCnet: 1) Boundary coding representation learning. Two sub-networks, supervised by dilation and erosion masks derived from the manually delineated organ mask, are first trained separately to learn the spatial-semantic context near the organ boundary. We then encode the organ boundary from the predictions of these two sub-networks and design a multi-atlas-based refinement strategy that transfers knowledge from the training data to inference. 2) Organ segmentation. The boundary coding representation, used as context information alongside the image patches, is used to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.
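The dilation and erosion supervision masks described in this abstract can be generated from a delineated organ mask with standard morphological operations. Below is a minimal sketch (not the authors' code; the numpy mask layout and the 3-voxel margin are assumptions):

```python
# Sketch of dilation/erosion supervision masks (illustrative only;
# mask layout and margin are assumptions, not the paper's settings).
import numpy as np
from scipy import ndimage

def boundary_supervision_masks(organ_mask: np.ndarray, margin: int = 3):
    """Derive dilation/erosion masks from a binary organ mask.

    The band between the two masks encloses the (unclear) boundary,
    which is where the two sub-networks learn spatial-semantic context.
    """
    struct = ndimage.generate_binary_structure(organ_mask.ndim, 1)
    dilated = ndimage.binary_dilation(organ_mask, struct, iterations=margin)
    eroded = ndimage.binary_erosion(organ_mask, struct, iterations=margin)
    return dilated.astype(np.uint8), eroded.astype(np.uint8)

mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:44, 20:44, 20:44] = 1          # toy "organ"
dil, ero = boundary_supervision_masks(mask)
boundary_band = dil - ero              # voxels encoding the boundary region
```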
2. Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020;24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153]
3. Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE Trans Med Imaging 2020;39:2151-2162. [PMID: 31940526; PMCID: PMC8195629; DOI: 10.1109/tmi.2020.2966389]
Abstract
Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially since the data present challenges such as low contrast and large shape variation. However, manual annotation is expensive in both cost and human effort, so completely annotated data is often insufficient in real applications. To this end, we propose a novel deep framework that segments male pelvic organs in CT images from incomplete annotations delineated in a very user-friendly manner. Specifically, we design a hybrid loss network, derived from both voxel classification and boundary regression, to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to infer labels for the many unannotated voxels and embed them into the training data to enhance model capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures and focus on the candidate organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods trained with complete annotation. Moreover, our method requires much less manual contouring effort from medical professionals, so an institution-specific model can be more easily established.
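A hybrid objective of this kind is typically a weighted sum of a voxel-wise classification term and a boundary-regression term. The following PyTorch sketch is a generic illustration, not the paper's exact formulation; the signed-distance target and the weight alpha are assumptions:

```python
# Illustrative hybrid loss: voxel classification + boundary regression
# (a generic sketch; the distance target and alpha are assumptions).
import torch
import torch.nn.functional as F

def hybrid_loss(class_logits, dist_pred, target_labels, target_dist, alpha=0.5):
    """class_logits:  (B, C, D, H, W) voxel-wise class scores
    dist_pred:     (B, 1, D, H, W) predicted signed distance to the boundary
    target_labels: (B, D, H, W) integer organ labels
    target_dist:   (B, 1, D, H, W) signed distance computed from the labels
    """
    cls_loss = F.cross_entropy(class_logits, target_labels)  # classification
    reg_loss = F.l1_loss(dist_pred, target_dist)             # boundary regression
    return cls_loss + alpha * reg_loss
```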
4. Lei Y, Dong X, Tian Z, Liu Y, Tian S, Wang T, Jiang X, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. CT prostate segmentation based on synthetic MRI-aided deep attention fully convolution network. Med Phys 2020;47:530-540. [PMID: 31745995; PMCID: PMC7764436; DOI: 10.1002/mp.13933]
Abstract
PURPOSE Accurate segmentation of the prostate on computed tomography (CT) for treatment planning is challenging due to CT's poor soft-tissue contrast. Magnetic resonance imaging (MRI) has been used to aid prostate delineation, but its final accuracy is limited by MRI-CT registration errors. We developed a deep attention-based segmentation strategy on CT-based synthetic MRI (sMRI) to address the CT prostate delineation challenge without MRI acquisition. METHODS AND MATERIALS We developed a prostate segmentation strategy that employs an sMRI-aided deep attention network to accurately segment the prostate on CT. Our method consists of three major steps. First, a cycle generative adversarial network was used to estimate an sMRI from CT images. Second, a deep attention fully convolutional network was trained based on sMRI and the prostate contours deformed from MRIs; attention models were introduced to focus on the prostate boundary. The prostate contour for a query patient was obtained by feeding the patient's CT images into the trained sMRI generation and segmentation models. RESULTS The segmentation technique was validated with a clinical study of 49 patients by leave-one-out experiments and with an additional 50 patients by hold-out test. The Dice similarity coefficient, Hausdorff distance, and mean surface distance between our segmentations and the deformed MRI-defined manual prostate contours were 0.92 ± 0.09, 4.38 ± 4.66 mm, and 0.62 ± 0.89 mm, respectively, in the leave-one-out experiments, and 0.91 ± 0.07, 4.57 ± 3.03 mm, and 0.62 ± 0.65 mm, respectively, in the hold-out test. CONCLUSIONS We have proposed a novel CT-only prostate segmentation strategy using CT-based sMRI and validated its accuracy against prostate contours that were manually drawn on MRI images and deformed to CT images. This technique could provide accurate prostate volumes for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
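The "deep attention" component can be illustrated with a standard additive attention gate of the kind used in attention U-Nets. This is a generic sketch, not the authors' architecture; the channel sizes and 2D setting are assumptions:

```python
# Generic additive attention gate (illustrative; channel sizes assumed).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights skip-connection features so the decoder attends to
    salient regions such as the prostate boundary."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        attn = torch.sigmoid(
            self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate))))
        return skip * attn  # attention-weighted skip features

x = torch.randn(1, 64, 32, 32)   # skip features (toy)
g = torch.randn(1, 128, 32, 32)  # gating features (same spatial size here)
out = AttentionGate(64, 128, 32)(x, g)
```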
5. Shahedi M, Halicek M, Dormer JD, Schuster DM, Fei B. Deep learning-based three-dimensional segmentation of the prostate on computed tomography images. J Med Imaging (Bellingham) 2019;6:025003. [PMID: 31065570; DOI: 10.1117/1.jmi.6.2.025003]
Abstract
Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remainder for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% Dice similarity coefficient (DSC), 2.3 ± 0.6 mm mean absolute distance (MAD), and 1.9 ± 4.0 cm³ signed volume difference (ΔV). The average interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images.
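The reported metrics (DSC, MAD, ΔV) are straightforward to compute. A sketch assuming binary numpy masks and isotropic 1 mm voxels:

```python
# Sketch of the reported metrics, assuming binary masks and
# isotropic 1 mm voxels for simplicity.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask: np.ndarray) -> np.ndarray:
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)  # one-voxel surface shell

def mean_absolute_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean distance (in voxels) from each surface to the other surface."""
    da = ndimage.distance_transform_edt(~surface(b))[surface(a)]
    db = ndimage.distance_transform_edt(~surface(a))[surface(b)]
    return float(np.concatenate([da, db]).mean())

def signed_volume_difference(a, b, voxel_volume_mm3=1.0):
    return (a.sum() - b.sum()) * voxel_volume_mm3 / 1000.0  # cm^3
```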
6. He K, Cao X, Shi Y, Nie D, Gao Y, Shen D. Pelvic Organ Segmentation Using Distinctive Curve Guided Fully Convolutional Networks. IEEE Trans Med Imaging 2019;38:585-595. [PMID: 30176583; PMCID: PMC6392049; DOI: 10.1109/tmi.2018.2867837]
Abstract
Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy, but it is challenging due to: 1) low soft-tissue contrast in CT images and 2) large shape and appearance variations of pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive-curve-guided fully convolutional network (FCN), to address these challenges. The first stage performs fast and robust organ detection in the raw CT images; it is designed as a coarse segmentation network that provides region proposals for the three pelvic organs. The second stage performs fine segmentation of each organ based on the region proposals. To better identify indistinguishable pelvic organ boundaries, a novel morphological representation, the distinctive curve, is introduced to guide precise segmentation. In this second stage, a multi-task FCN first learns the distinctive curve and the segmentation map separately and then combines the two tasks to produce an accurate segmentation map. The final segmentation of all three pelvic organs is generated by a weighted max-voting strategy. Extensive experiments on a large and diverse pelvic CT dataset demonstrate that our method is accurate and robust for this challenging segmentation task, outperforming state-of-the-art segmentation methods.
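The weighted max-voting fusion over per-organ predictions can be sketched as follows; the organ ordering, weights, and the 0.5 confidence cutoff are assumptions for illustration:

```python
# Illustrative weighted max-voting over per-organ probability maps
# (organ order, weights, and cutoff are assumptions).
import numpy as np

def weighted_max_voting(prob_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """prob_maps: (K, D, H, W) per-organ foreground probabilities.
    Returns a label map with 0 = background, 1..K = organs."""
    weighted = prob_maps * weights[:, None, None, None]
    labels = np.argmax(weighted, axis=0) + 1
    labels[weighted.max(axis=0) < 0.5] = 0  # low confidence -> background
    return labels

probs = np.random.rand(3, 8, 8, 8)  # prostate, bladder, rectum (toy)
seg = weighted_max_voting(probs, np.array([1.0, 0.8, 0.9]))
```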
7. Balagopal A, Kazemifar S, Nguyen D, Lin MH, Hannan R, Owrangi A, Jiang S. Fully automated organ segmentation in male pelvic CT images. Phys Med Biol 2018;63:245015. [PMID: 30523973; DOI: 10.1088/1361-6560/aaf11c]
Abstract
Accurate segmentation of the prostate and surrounding organs at risk is important for prostate cancer radiotherapy treatment planning. We present a fully automated workflow for male pelvic CT image segmentation using deep learning. The architecture consists of a 2D organ-volume localization network followed by a 3D segmentation network for volumetric segmentation of the prostate, bladder, rectum, and femoral heads. We used a multi-channel 2D U-Net followed by a 3D U-Net whose encoding arm is modified with aggregated residual networks (ResNeXt). The models were trained and tested on a pelvic CT dataset comprising 136 patients. Test results show that the proposed fully automated method achieves mean (±SD) Dice coefficients of 90 (±2.0)%, 96 (±3.0)%, 95 (±1.3)%, 95 (±1.5)%, and 84 (±3.7)% for the prostate, left femoral head, right femoral head, bladder, and rectum, respectively.
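The localization-then-segmentation cascade amounts to cropping a padded 3D bounding box around the coarse localization before running the fine 3D network. A minimal sketch with assumed inputs and padding:

```python
# Sketch of the localization -> crop step of a two-stage cascade
# (padding and input layout are assumptions).
import numpy as np

def crop_around_mask(volume: np.ndarray, coarse_mask: np.ndarray, pad: int = 8):
    """Crop a padded 3D bounding box around the coarse localization."""
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = [max(int(v.min()) - pad, 0) for v in (zs, ys, xs)]
    hi = [min(int(v.max()) + pad + 1, s)
          for v, s in zip((zs, ys, xs), volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

ct = np.random.randn(64, 128, 128)                      # toy CT volume
coarse = np.zeros_like(ct, dtype=bool)
coarse[30:40, 50:70, 60:80] = True                       # coarse localization
roi = crop_around_mask(ct, coarse)  # fed to the 3D segmentation network
```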
8. Shahedi M, Ma L, Halicek M, Guo R, Zhang G, Schuster DM, Nieh P, Master V, Fei B. A semiautomatic algorithm for three-dimensional segmentation of the prostate on CT images using shape and local texture characteristics. Proc SPIE Int Soc Opt Eng 2018;10576. [PMID: 30245541; DOI: 10.1117/12.2293195]
Abstract
Prostate segmentation in computed tomography (CT) images is useful for planning and guidance of diagnostic and therapeutic procedures. However, the low soft-tissue contrast of CT images makes manual prostate segmentation a time-consuming task with high inter-observer variation. We developed a semi-automatic, three-dimensional (3D) prostate segmentation algorithm using shape and texture analysis and evaluated it against manual reference segmentations. In a training dataset, we defined an inter-subject correspondence between surface points in the spherical coordinate system. We applied this correspondence to model the globular, smoothly curved shape of the prostate with 86 well-distributed surface points, using a point distribution model that captures prostate shape variation. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. For segmentation, we used the learned shape and texture characteristics of the prostate in CT images together with a set of user inputs for prostate localization. We trained our algorithm on 23 CT images and tested it on 10. We evaluated the results against two experts' manual reference segmentations using several error metrics. The average Dice similarity coefficient (DSC) and mean absolute distance (MAD) were 88 ± 2% and 1.9 ± 0.5 mm, respectively; the average inter-expert difference on the same dataset was 91 ± 4% (DSC) and 1.3 ± 0.6 mm (MAD). With no prior intra-patient information, the proposed algorithm showed fast, robust, and accurate performance for 3D CT segmentation.
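The point distribution model over corresponded surface points is essentially PCA on stacked point coordinates. A minimal numpy sketch; only the 86-point count comes from the abstract, everything else is illustrative:

```python
# Minimal point-distribution-model sketch (PCA over corresponded
# surface points); training data and mode count are toy assumptions.
import numpy as np

def fit_pdm(shapes: np.ndarray, n_modes: int = 5):
    """shapes: (N, 86*3) rows of corresponded surface points per subject."""
    mean = shapes.mean(axis=0)
    u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = vt[:n_modes]                       # principal shape variations
    var = (s[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, var

def synthesize(mean, modes, var, coeffs):
    """Reconstruct a plausible shape from mode coefficients (std units)."""
    return mean + (coeffs * np.sqrt(var)) @ modes

shapes = np.random.randn(23, 86 * 3)  # toy stand-in for the training set
mean, modes, var = fit_pdm(shapes)
new_shape = synthesize(mean, modes, var, np.array([1.0, -0.5, 0, 0, 0]))
```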
9. Shahedi M, Halicek M, Guo R, Zhang G, Schuster DM, Fei B. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling. Med Phys 2018;45:2527-2541. [PMID: 29611216; DOI: 10.1002/mp.12898]
Abstract
PURPOSE Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance, such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we propose a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluate it against manual reference segmentations. METHODS The prostate gland usually has a globular shape with a smoothly curved surface, so its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid as the origin of a coordinate system, we defined an intersubject correspondence between prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model of prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to different prostate surface subregions. We combined the learned shape and texture characteristics of the prostate in CT images with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested it on two sets: 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We evaluated the segmentation results against two experts' manual reference segmentations using a set of error metrics. RESULTS For both the nonbrachytherapy and post-brachytherapy image sets, the average Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). CONCLUSIONS The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, particularly when no prior intrapatient information (i.e., previously segmented images) was available. The accuracy of the algorithm is comparable to the best results reported in the literature and approaches the interexpert variability observed in manual segmentation.
10. Ma L, Guo R, Zhang G, Schuster DM, Fei B. A combined learning algorithm for prostate segmentation on 3D CT images. Med Phys 2017;44:5768-5781. [PMID: 28834585; DOI: 10.1002/mp.12528]
Abstract
PURPOSE Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer, but the low soft-tissue contrast of CT images makes it a challenging task. We propose a learning-based segmentation method for the prostate on three-dimensional (3D) CT images. METHODS We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process, while, because of inter-patient variation, patient-specific information is particularly useful for improving segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model on data from a group of prostate patients, and we train a patient-specific model on data from the individual patient, incorporating information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge and compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method converts the likelihood image into a binary image of the prostate, completing the segmentation of the gland on CT images. RESULTS The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and these manual segmentations served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared with manual segmentation. CONCLUSIONS By combining population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications, including volume measurement and treatment planning of the prostate.
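The combination of population and patient-specific likelihoods, followed by an adaptive threshold, can be sketched as below. The similarity-based weighting rule and the between-class-variance threshold criterion are assumptions, not the paper's exact formulas:

```python
# Illustrative combination of population and patient-specific
# likelihood maps via a model-similarity weight (the weighting rule
# and the Otsu-style threshold criterion are assumptions).
import numpy as np

def combine_likelihoods(pop_like, pat_like, similarity):
    """similarity in [0, 1]: high similarity -> trust population more."""
    w = float(np.clip(similarity, 0.0, 1.0))
    return w * pop_like + (1.0 - w) * pat_like

def adaptive_threshold(likelihood, candidates=np.linspace(0.3, 0.7, 9)):
    """Pick the threshold maximizing a between-class-variance criterion."""
    best_t, best_v = candidates[0], -1.0
    for t in candidates:
        fg, bg = likelihood[likelihood >= t], likelihood[likelihood < t]
        if fg.size == 0 or bg.size == 0:
            continue
        v = fg.size * bg.size * (fg.mean() - bg.mean()) ** 2
        if v > best_v:
            best_t, best_v = t, v
    return likelihood >= best_t
```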
11. Sun J, Shi Y, Gao Y, Shen D. A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling. Machine Learning in Medical Imaging (MLMI 2017), Lecture Notes in Computer Science 2017;10541:220-228. [PMID: 30345431; PMCID: PMC6193503; DOI: 10.1007/978-3-319-67389-9_26]
Abstract
In this paper, we investigate whether MR prostate segmentation performance can be improved by providing only one point of labeling information in the prostate region. To achieve this, we ask the physician to click one point inside the prostate region and present a novel segmentation method that integrates boundary detection results with patch-based prediction. Since the clicked point belongs to the prostate, we first generate location-prior maps under two basic assumptions: (1) a point closer to the clicked point is more likely to be a prostate voxel, and (2) a point separated from the clicked point by more boundaries is less likely to be a prostate voxel. We apply the Canny edge detector and obtain two location-prior maps, from the horizontal and vertical directions, respectively. The location-prior maps, together with the original MR images, are fed into a multi-channel fully convolutional network for patch-based prediction. With the resulting prostate-likelihood map, we employ a level-set method to obtain the final segmentation. We evaluate our method on 22 MR images collected from 22 different patients, with manual delineations provided as the ground truth for evaluation. The experimental results not only show the promising performance of our method but also demonstrate that one-point labeling can substantially improve results where pure patch-based prediction fails.
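The horizontal and vertical location-prior maps can be sketched by counting, for each pixel, the edge crossings between it and the clicked point along one image axis. The exponential decay rate and the toy edge map below are assumptions:

```python
# Sketch of directional location-prior maps: a pixel's prior decreases
# with the number of edges crossed between it and the clicked point
# (edge map and decay rate are assumptions).
import numpy as np

def directional_prior(edges: np.ndarray, click_rc, axis: int, decay: float = 0.5):
    """edges: binary (H, W) Canny edge map of one slice.
    Counts edge crossings along columns (axis=0) or rows (axis=1)
    between every pixel and the clicked pixel, then maps the count
    to a prior via decay**crossings."""
    cum = np.cumsum(edges, axis=axis)
    ref = cum[click_rc[0], :] if axis == 0 else cum[:, click_rc[1]]
    ref = ref[np.newaxis, :] if axis == 0 else ref[:, np.newaxis]
    crossings = np.abs(cum - ref)
    return decay ** crossings

edges = np.zeros((64, 64), dtype=np.uint8)
edges[20, :] = 1                                      # toy horizontal edge
prior_v = directional_prior(edges, (32, 32), axis=0)  # vertical direction
prior_h = directional_prior(edges, (32, 32), axis=1)  # horizontal direction
```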
12. Ma L, Guo R, Zhang G, Tade F, Schuster DM, Nieh P, Master V, Fei B. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion. Proc SPIE Int Soc Opt Eng 2017;10133. [PMID: 30220767; DOI: 10.1117/12.2255755]
Abstract
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy, but it is challenging because of the low soft-tissue contrast of CT images. In this paper, we propose an automatic segmentation method combining deep learning with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to discard irrelevant regions. Then, we use a convolutional neural network (CNN) to learn deep features that distinguish prostate pixels from non-prostate pixels and obtain preliminary segmentation results; unlike handcrafted features, these deep features are learned automatically from the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method was evaluated on a dataset of 92 prostate CT images and achieved a Dice similarity coefficient of 86.80% compared with manual segmentation. The deep learning-based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus has a variety of clinical applications.
13. Commandeur F, Simon A, Mathieu R, Nassef M, Arango JDO, Rolland Y, Haigron P, de Crevoisier R, Acosta O. MRI to CT Prostate Registration for Improved Targeting in Cancer External Beam Radiotherapy. IEEE J Biomed Health Inform 2017;21:1015-1026. [PMID: 27333613; DOI: 10.1109/jbhi.2016.2581881]
Abstract
External radiotherapy is a major clinical treatment for localized prostate cancer. Currently, computed tomography (CT) is used to delineate the prostate and plan the radiotherapy treatment, but CT images suffer from poor soft-tissue contrast and do not allow accurate organ delineation. Magnetic resonance imaging (MRI), by contrast, provides rich detail and high soft-tissue contrast, allowing tumor detection. The intraindividual propagation of MRI delineations to the planning CT may therefore improve tumor targeting. In this paper, we introduce a new method to propagate MRI prostate delineations to the planning CT. In the first step, a random forest classification coarsely detects the prostate in the CT images, yielding a prostate probability membership for each voxel and a prostate hard segmentation. The registration is then performed using a new similarity metric that maximizes the probability and the collinearity between the normals of the existing MR contour and the contour resulting from the CT classification. A first study on synthetic data analyzed the influence of the metric parameters under different levels of noise. The method was then evaluated on real MR-CT data using manual alignments and intraprostatic fiducial markers, and compared with a classically used mutual information (MI) approach. The proposed metric outperformed MI by 7% in Dice score coefficient, by 3.14 mm in Hausdorff distance, and by 2.13 mm in marker position error. Finally, the impact of registration uncertainties on treatment planning was evaluated, demonstrating the potential advantage of the proposed approach in a clinical setup for defining a precise target.
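A simplified 2D sketch of such a similarity score: sample the CT classification probability at the transformed MR contour points and add a normal-collinearity term, with CT normals approximated from the probability gradient. The weighting lambda and the gradient-based normals are assumptions, not the paper's exact metric:

```python
# Simplified 2D probability/normal-collinearity similarity score
# (lambda weight and gradient-based CT normals are assumptions).
import numpy as np
from scipy import ndimage

def similarity(prob_map, pts, normals, lam=0.5):
    """prob_map: (H, W) CT prostate probability from the classifier.
    pts:      (N, 2) transformed MR contour points (row, col).
    normals:  (N, 2) unit normals of the MR contour after transform."""
    coords = pts.T                                  # (2, N) for sampling
    p = ndimage.map_coordinates(prob_map, coords, order=1)
    gy, gx = np.gradient(prob_map)                  # CT "contour" normals
    g = np.stack([ndimage.map_coordinates(gy, coords, order=1),
                  ndimage.map_coordinates(gx, coords, order=1)], axis=1)
    g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-8
    collinearity = np.abs(np.sum(g * normals, axis=1))
    return p.mean() + lam * collinearity.mean()
```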
14. Ma L, Guo R, Tian Z, Venkataraman R, Sarkar S, Liu X, Tade F, Schuster DM, Fei B. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images. Proc SPIE Int Soc Opt Eng 2016;9784:978427. [PMID: 27660382; PMCID: PMC5029417; DOI: 10.1117/12.2216255]
Abstract
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for segmentation of the prostate on CT images. Because population learning does not consider inter-patient variation, and because patient-specific learning may not generalize across patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model on the population data and a patient-specific model on manual segmentations of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients, with a radiologist's manual segmentations serving as the gold standard. Experimental results show that our method achieved an average DSC of 85.1% compared with the gold standard, outperforming both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
15. Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE Trans Pattern Anal Mach Intell 2015;37:2286-2303. [PMID: 26440268; DOI: 10.1109/tpami.2015.2424869]
Abstract
Conventional learning-based methods for segmenting the prostate in CT images ignore relations among low-level features by assuming these features are independent, and their feature selection steps usually neglect image appearance changes in different local regions of CT images. To this end, we present a novel semi-automatic learning-based prostate segmentation method. To segment the prostate in a treatment image, the radiation oncologist is first asked to take a few seconds to manually specify the first and last slices of the prostate. The prostate is then segmented in two steps: (i) estimation of a 3D prostate-likelihood map, which predicts the likelihood of each voxel being prostate by employing a coupled feature representation and the proposed Spatial-COnstrained Transductive LassO (SCOTO); and (ii) multi-atlas-based label fusion, which generates the final segmentation using prostate shape information from both planning and previous treatment images. The major contributions of the proposed method are: (i) incorporating the radiation oncologist's manual specification to aid segmentation; (ii) adopting coupled features to relax the previous assumption of feature independence in voxel representation; and (iii) developing SCOTO for joint feature selection across different local regions. Experimental results show that the proposed method outperforms state-of-the-art methods on a real-world prostate CT dataset comprising 24 patients with 330 images in total, all manually delineated by the radiation oncologist for performance evaluation. Moreover, our method is clinically feasible: segmentation performance can be improved by requiring the radiation oncologist to spend only a few seconds manually specifying the end slices of the current treatment CT image.
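SCOTO is the authors' novel formulation; its base ingredient, sparse feature selection via Lasso, can be illustrated with scikit-learn. This generic sketch implements neither the spatial constraints nor the transduction:

```python
# Plain-Lasso feature selection, the base ingredient of SCOTO
# (the spatial constraints and transduction are not implemented here;
# data and alpha are toy assumptions).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 200))            # voxel samples x low-level features
y = (X[:, :5].sum(axis=1) > 0).astype(float)   # toy prostate-likelihood target

lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)         # indices of retained features
print(f"{selected.size} features selected")
```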
16. Yang W, Gao Y, Shi Y, Cao L. MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis. IEEE Trans Neural Netw Learn Syst 2015;26:2801-2815. [PMID: 25706891; DOI: 10.1109/tnnls.2015.2396937]
Abstract
Learning from multiview data arises in many applications, such as video understanding, image classification, and social media. However, as data dimensionality increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lassos at the view level, we focus on sample-level performance (sample significance) and introduce pattern-specific weights into MRM-Lasso; the weights measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix consisting of the pattern-specific weights. The alternating direction method of multipliers is applied to optimize MRM-Lasso. Experiments on four real-life datasets show that features selected by MRM-Lasso yield better multiview classification performance than the baselines. Moreover, pattern-specific weights are shown to be significant for learning from multiview data, compared with view-specific weights.
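The low-rank (rank-minimization) part of such objectives is usually handled through its nuclear-norm proximal step, singular value thresholding. A generic numpy sketch, not the paper's full ADMM solver; the matrix shape and threshold are toy values:

```python
# Singular value thresholding: the proximal step behind the low-rank
# (nuclear-norm) term in objectives like MRM-Lasso (generic sketch).
import numpy as np

def svt(W: np.ndarray, tau: float) -> np.ndarray:
    """Shrink the singular values of the pattern-weight matrix W by tau."""
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

W = np.random.randn(50, 4)    # samples x views, toy pattern weights
W_lowrank = svt(W, tau=1.0)   # encourages correlated weights across views
print(np.linalg.matrix_rank(W_lowrank))
```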
17. Park SH, Gao Y, Shen D. Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion. IEEE Trans Biomed Eng 2016;63:1208-1219. [PMID: 26485353; DOI: 10.1109/tbme.2015.2491612]
Abstract
We propose a novel multiatlas-based segmentation method for the segmentation-editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multiatlas-based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and to derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well matched with both the interactions and the previous segmentation are identified. The segmentation is then updated through voxelwise label fusion of the selected atlas label patches, with weights derived from the distance of each underlying voxel to the interactions. Since atlas label patches matched with different local combinations are used in the fusion step, our method can accommodate various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method depends on neither image appearance nor sophisticated learning steps, it can easily be applied to general editing problems. To demonstrate its generality, we apply it to editing segmentations of the CT prostate, CT brainstem, and MR hippocampus; experimental results show that our method outperforms existing editing methods on all three datasets.
18. Park SH, Gao Y, Shi Y, Shen D. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection. Med Phys 2014;41:111715. [PMID: 25370629; DOI: 10.1118/1.4898200]
Abstract
PURPOSE Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy for prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variation across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required because of their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate editing results from a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct segmentations produced by any automatic or interactive segmentation method. METHODS The authors formulate the editing problem as a semisupervised learning problem that utilizes both a priori knowledge from training data and the valuable information in user interactions. Specifically, within a region of interest near the given user interactions, appropriate training labels that match the user interactions are locally searched from a training set. With voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, are estimated. To reflect informative relationships between voxels, location-adaptive features are selected from the confident voxels using regression forests and the Fisher separation criterion. The manifold configuration computed in the derived feature space is then enforced in the semisupervised learning algorithm, and the labels of unconfident voxels are predicted by the regularized semisupervised learner. RESULTS The proposed interactive segmentation method was applied to correct automatic segmentation results for 30 challenging CT images. The correction was conducted three times with different user interactions performed at different times, in order to evaluate both efficiency and robustness. The automatic segmentation results, with an original average Dice similarity coefficient of 0.78, were improved to 0.865-0.872 after 55-59 interactions with the proposed method, and each editing step took less than 3 s. In addition, the proposed method produced the most consistent editing results with respect to different user interactions, compared with other methods. CONCLUSIONS The proposed method obtains robust editing results with few interactions for various incorrect segmentations, by selecting location-adaptive features and imposing manifold regularization. The authors expect the proposed method to largely reduce the laborious burden of manual editing, as well as both intra- and interobserver variability across clinicians.
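The regularized semisupervised step, propagating labels from confident to unconfident voxels over a data manifold, can be illustrated with scikit-learn's LabelSpreading as a generic stand-in for the paper's learner; the features and class sizes below are toy values:

```python
# Generic manifold-based semisupervised step: propagate labels of
# confident voxels to unconfident ones (sklearn stand-in; toy data).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))   # location-adaptive features (toy)
y = np.full(300, -1)                 # -1 marks unconfident voxels
y[:40] = 1                           # confident prostate voxels
y[40:80] = 0                         # confident background voxels

model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y)
pred = model.transduction_           # propagated labels for all voxels
```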
19. Dai X, Gao Y, Shen D. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 2015;42:2594-2606. [PMID: 25979051; PMCID: PMC4409630; DOI: 10.1118/1.4918755]
Abstract
PURPOSE In image-guided radiation therapy, it is crucial to localize the prostate quickly and accurately in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which fully exploits the valuable patient-specific information contained in previous treatment images and achieves improved performance in landmark detection and prostate segmentation. METHODS To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary using a context-aware landmark detection method. Specifically, a two-layer regression forest is trained as a detector for each target landmark. Once the landmarks newly detected in a treatment image are reviewed, or adjusted if necessary, by clinicians, they are included in the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more treatment images of the current patient are acquired, the two-layer regression forests are continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method segments the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. The segmented prostate of the current treatment image is again reviewed (and adjusted if needed) by clinicians before being included as a new shape example in the prostate shape dataset, to help localize the prostate in the next treatment image. RESULTS Experimental results on 330 images of 24 patients show the effectiveness of the proposed online update scheme in improving the accuracy of both landmark detection and prostate segmentation. Moreover, compared with other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. CONCLUSIONS By appropriately using the valuable patient-specific information contained in previous treatment images, the proposed online update scheme obtains satisfactory results for both landmark detection and prostate segmentation.
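Online updating of a regression forest as new treatment days arrive can be emulated in scikit-learn with warm_start, which grows additional trees on the pooled population plus patient-specific data. This is only a stand-in for the paper's two-layer forest update; all data below are toy values:

```python
# Sketch of updating a landmark-offset regression forest as new
# treatment days arrive (warm_start stand-in; toy data throughout).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_pool = rng.standard_normal((1000, 32))   # population patch features
y_pool = rng.standard_normal((1000, 3))    # landmark offsets (x, y, z)
forest = RandomForestRegressor(n_estimators=50, warm_start=True)
forest.fit(X_pool, y_pool)

for day in range(1, 4):                     # successive treatment images
    X_new = rng.standard_normal((100, 32))  # patient-specific features
    y_new = rng.standard_normal((100, 3))   # clinician-reviewed offsets
    X_pool = np.vstack([X_pool, X_new])
    y_pool = np.vstack([y_pool, y_new])
    forest.n_estimators += 10               # grow trees on pooled data
    forest.fit(X_pool, y_pool)
```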
20. Yang W, Gao Y, Cao L, Yang M, Shi Y. mPadal: a joint local-and-global multi-view feature selection method for activity recognition. Appl Intell 2014. [DOI: 10.1007/s10489-014-0566-5]
21. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE Trans Med Imaging 2014;33:518-534. [PMID: 24495983; PMCID: PMC4379484; DOI: 10.1109/tmi.2013.2291495]
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment images, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment; these images contain valuable patient-specific information yet are often neglected by previous work. In this paper, we propose a novel learning framework, incremental learning with selective memory (ILSM), to effectively learn patient-specific appearance characteristics from these images. Starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics in two steps: backward pruning, which discards obsolete population-based knowledge, and forward learning, which incorporates patient-specific characteristics. By effectively combining patient-specific characteristics with general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work makes three contributions: 1) the proposed incremental learning framework captures patient-specific characteristics more effectively than traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) the framework makes no parametric model assumption and hence allows the adoption of any discriminative classifier; and 3) using ILSM, the prostate can be localized in treatment CTs accurately (DSC ~0.89) and fast (~4 s), satisfying the real-world clinical requirements of IGRT.