1. Zhang J, Shi Y, Sun J, Wang L, Zhou L, Gao Y, Shen D. Interactive medical image segmentation via a point-based interaction. Artif Intell Med 2020;111:101998. [PMID: 33461691] [DOI: 10.1016/j.artmed.2020.101998]
Abstract
Due to low tissue contrast, irregular shape, and large location variance, segmenting objects in different medical imaging modalities (e.g., CT, MR) is an important yet challenging task. In this paper, a novel method is presented for interactive medical image segmentation with the following merits. (1) Its design is fundamentally different from previous purely patch-based and image-based segmentation methods. It is observed that, during delineation, the physician repeatedly checks the intensity from inside the object to outside it to determine the boundary, which indicates that comparison in an inside-out manner is extremely important. Thus, the method models the segmentation task as learning the representation of bi-directional sequential patches, starting from (or ending in) the given central point of the object. This is realized by the proposed ConvRNN network embedded with a gated memory propagation unit. (2) Unlike previous interactive methods that require a bounding box or seed points, the proposed method only asks the physician to click on the rough central point of the object before segmentation, which simultaneously enhances performance and reduces segmentation time. (3) The method is applied in a multi-level framework for better performance. It has been systematically evaluated on three different segmentation tasks, including CT kidney tumor, MR prostate, and the PROMISE12 challenge, showing promising results compared with state-of-the-art methods.
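As an illustration of the point-based interaction described above, the sketch below builds a bi-directional sequence of patches radiating out from a clicked central point. It is a minimal, hypothetical reading of the sampling step (patch size, step, and ray direction are assumptions, not the authors' settings) and does not reproduce the ConvRNN model itself.

```python
import numpy as np

def patch_sequence(image, center, direction=(0.0, 1.0), patch=9, step=4, n=6):
    """Sample a bi-directional sequence of 2D patches along one ray through
    a clicked center point. Hypothetical sketch of the inside-out sampling;
    the actual method learns over such sequences with a ConvRNN."""
    cy, cx = center
    dy, dx = direction
    half = patch // 2
    seq = []
    for k in range(-n, n + 1):               # k < 0 and k > 0 give the two directions through the click
        y = int(round(cy + k * step * dy))
        x = int(round(cx + k * step * dx))
        y = int(np.clip(y, half, image.shape[0] - half - 1))
        x = int(np.clip(x, half, image.shape[1] - half - 1))
        seq.append(image[y - half:y + half + 1, x - half:x + half + 1])
    return np.stack(seq)                     # shape: (2n+1, patch, patch)

slice_2d = np.random.rand(128, 128)                  # stand-in CT/MR slice
patches = patch_sequence(slice_2d, center=(64, 64))  # one click -> ordered patch sequence
print(patches.shape)                                 # (13, 9, 9)
```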
Affiliation(s)
- Jian Zhang: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Jinquan Sun: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Wang: School of Computing and Information Technology, University of Wollongong, Australia
- Luping Zhou: School of Electrical and Information Engineering, University of Sydney, Australia
- Yang Gao: State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, China; Shanghai United Imaging Intelligence Co., Ltd., China; Department of Artificial Intelligence, Korea University, Republic of Korea
2. Das PK, Meher S, Panda R, Abraham A. A Review of Automated Methods for the Detection of Sickle Cell Disease. IEEE Rev Biomed Eng 2019;13:309-324. [PMID: 31107662] [DOI: 10.1109/rbme.2019.2917780]
Abstract
Detection of sickle cell disease is a crucial task in medical image analysis. Accurate detection, followed by classification of irregularities, plays a vital role in sickle cell disease diagnosis, treatment planning, and treatment outcome evaluation. Proper segmentation of complex cell clusters makes sickle cell detection more accurate and robust. Cell morphology plays a key role in detecting sickle cells because the shapes of normal blood cells and sickle cells differ significantly. This review covers state-of-the-art methods and recent advances in the detection, segmentation, and classification of sickle cell disease. We discuss key challenges encountered during the segmentation of overlapping blood cells. Moreover, standard validation measures used to analyze the performance of various methods are also discussed. The methodologies and experiments in this review should be useful for further research and work in this area.
3. Ma L, Guo R, Zhang G, Schuster DM, Fei B. A combined learning algorithm for prostate segmentation on 3D CT images. Med Phys 2017;44:5768-5781. [PMID: 28834585] [DOI: 10.1002/mp.12528]
Abstract
PURPOSE: Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast of CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. METHODS: We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process. Because of inter-patient variations, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge and to compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, thus completing the segmentation of the gland on CT images. RESULTS: The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared with the manual segmentation. CONCLUSIONS: By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications, including volume measurement and treatment planning of the prostate.
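The last two steps lend themselves to a compact sketch: blend a population likelihood with a patient-specific likelihood, threshold the result, and score against a reference with the Dice similarity coefficient. The blending weight and the fixed threshold below are illustrative assumptions, not the authors' similarity-derived weighting or adaptive threshold.

```python
import numpy as np

def fuse_likelihoods(pop_prob, pat_prob, alpha=0.5):
    """Blend population and patient-specific prostate likelihood maps.
    alpha is a hypothetical weight; the paper derives it from model similarity."""
    return alpha * pop_prob + (1.0 - alpha) * pat_prob

def binarize(prob, threshold=0.5):
    """Convert a likelihood map to a binary mask with a fixed threshold
    (a stand-in for the paper's adaptive threshold)."""
    return prob >= threshold

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
pop = rng.random((64, 64, 32))          # toy population likelihood volume
pat = rng.random((64, 64, 32))          # toy patient-specific likelihood volume
gold = rng.random((64, 64, 32)) > 0.5   # toy "manual" reference mask
mask = binarize(fuse_likelihoods(pop, pat))
print(f"DSC = {dice(mask, gold):.3f}")
```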
Affiliation(s)
- Ling Ma: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Rongrong Guo: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Guoyi Zhang: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- David M Schuster: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Winship Cancer Institute of Emory University, Atlanta, GA, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, GA, USA
4. Dora L, Agrawal S, Panda R, Abraham A. State-of-the-Art Methods for Brain Tissue Segmentation: A Review. IEEE Rev Biomed Eng 2017. [PMID: 28622675] [DOI: 10.1109/rbme.2017.2715350]
Abstract
Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities, and plays an essential role in discriminating healthy tissue from lesion tissue. Accurate disease diagnosis and treatment planning therefore depend largely on the performance of the segmentation method used. In this review, we study recent advances in brain tissue segmentation methods and their state of the art in neuroscience research. The review also highlights the major challenges faced during tissue segmentation of the brain and compares state-of-the-art brain tissue segmentation methods. Moreover, some of the validation measures used to evaluate different segmentation methods are also discussed. The methodologies and experiments presented in this review should encourage further research in this field.
5. Alvarez C, Martínez F, Romero E. A multiresolution prostate representation for automatic segmentation in magnetic resonance images. Med Phys 2017;44:1312-1323. [PMID: 28134979] [DOI: 10.1002/mp.12141]
Abstract
PURPOSE: Accurate prostate delineation is necessary in radiotherapy for concentrating the dose on the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed on magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, because human intervention is a time-consuming task with high intra- and interobserver variability, (semi-)automatic organ delineation tools have emerged to cope with these challenges and reduce the time spent on such tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. METHODS: The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing the main relevant features of the organ boundary to be captured. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed using morphological operations. RESULTS: The proposed approach was evaluated against expert manual segmentation under a leave-one-out scheme on two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06 and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. CONCLUSIONS: The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule on the spatial configuration among these points to identify the most similar prostates. This strategy suggests an easy adaptation to the clinical routine as a supporting tool for annotation.
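A stripped-down sketch of the atlas-selection-plus-fusion idea: rank training prostates by the distance between their descriptors and the new case, then combine the corresponding (already registered) expert masks with similarity weights. The descriptors, inverse-distance weights, and top-K choice are assumptions for illustration; the paper's salient-point representation and patch-based weighting are more elaborate.

```python
import numpy as np

def select_and_fuse(query_desc, atlas_descs, atlas_masks, k=3):
    """Pick the k atlases whose descriptors are closest to the query and
    fuse their (registered) binary masks with inverse-distance weights."""
    dists = np.linalg.norm(atlas_descs - query_desc, axis=1)   # one distance per atlas
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)
    weights /= weights.sum()
    fused = np.tensordot(weights, atlas_masks[nearest].astype(float), axes=1)
    return fused >= 0.5                                        # weighted majority vote

rng = np.random.default_rng(1)
descs = rng.random((10, 32))            # toy descriptors for 10 training prostates
masks = rng.random((10, 64, 64)) > 0.5  # their registered expert masks (toy)
query = rng.random(32)
segmentation = select_and_fuse(query, descs, masks)
print(segmentation.shape)               # (64, 64)
```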
Affiliation(s)
- Charlens Alvarez: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia
- Fabio Martínez: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia; Escuela de Ingeniería de Sistemas e Informática, Universidad Industrial de Santander UIS, Bucaramanga, Colombia
- Eduardo Romero: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia
6. Ma L, Guo R, Zhang G, Tade F, Schuster DM, Nieh P, Master V, Fei B. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion. Proc SPIE Int Soc Opt Eng 2017;10133. [PMID: 30220767] [DOI: 10.1117/12.2255755]
Abstract
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines deep learning with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to remove irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. Unlike handcrafted features, the CNN automatically learns deep features adapted to the data. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% compared with manual segmentation. The deep-learning-based method provides a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
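For a concrete picture of the pixel-classification step, a toy CNN of the kind described here is sketched below in PyTorch. The layer sizes, patch size, and two-class head are assumptions made for illustration; the paper does not specify this architecture.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Toy CNN that classifies a 32x32 CT patch (inside the ROI) as
    prostate vs. non-prostate. Hypothetical architecture, not the authors'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # logits: [non-prostate, prostate]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PatchCNN()
patches = torch.randn(8, 1, 32, 32)               # 8 grayscale patches from the ROI
probs = torch.softmax(model(patches), dim=1)      # per-patch prostate probability
print(probs.shape)                                # torch.Size([8, 2])
```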
Affiliation(s)
- Ling Ma: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; School of Computer Science, Beijing Institute of Technology
- Rongrong Guo: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guoyi Zhang: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Funmilayo Tade: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter Nieh: Department of Urology, Emory University, Atlanta, GA
- Viraj Master: Department of Urology, Emory University, Atlanta, GA
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
7. Zhu X, Suk HI, Lee SW, Shen D. Subspace Regularized Sparse Multitask Learning for Multiclass Neurodegenerative Disease Identification. IEEE Trans Biomed Eng 2016;63:607-618. [PMID: 26276982] [PMCID: PMC4751062] [DOI: 10.1109/tbme.2015.2466616]
Abstract
The high-feature-dimension, low-sample-size problem is one of the major challenges in the study of computer-aided Alzheimer's disease (AD) diagnosis. To circumvent this problem, feature selection and subspace learning have been playing core roles in the literature. Generally, feature selection methods are preferable in clinical applications due to their ease of interpretation, but subspace learning methods can usually achieve more promising results. In this paper, we combine two different methodological approaches to discriminative feature selection in a unified framework. Specifically, we utilize two subspace learning methods, namely linear discriminant analysis and locality preserving projection, which have proven their effectiveness in a variety of fields, to select class-discriminative and noise-resistant features. Unlike previous methods in neuroimaging studies that mostly focused on binary classification, the proposed feature selection method is further applicable to multiclass classification in AD diagnosis. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative dataset showed the effectiveness of the proposed method over other state-of-the-art methods.
Affiliation(s)
- Xiaofeng Zhu: Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Heung-Il Suk: Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Seong-Whan Lee: Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Dinggang Shen: Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
8. Ma L, Guo R, Tian Z, Venkataraman R, Sarkar S, Liu X, Tade F, Schuster DM, Fei B. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images. Proc SPIE Int Soc Opt Eng 2016;9784:978427. [PMID: 27660382] [PMCID: PMC5029417] [DOI: 10.1117/12.2216255]
Abstract
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to assess the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist served as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% compared with the manual segmentation gold standard. This method outperformed both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
Affiliation(s)
- Ling Ma: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; School of Computer Science, Beijing Institute of Technology, Beijing
- Rongrong Guo: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Zhiqiang Tian: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Xiabi Liu: School of Computer Science, Beijing Institute of Technology, Beijing
- Funmilayo Tade: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M. Schuster: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
9. Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 2016;173:317-331. [PMID: 26752809] [PMCID: PMC4704800] [DOI: 10.1016/j.neucom.2014.11.098]
Abstract
In recent years, there has been great interest in prostate segmentation, which is an important and challenging task for CT image guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician is first asked to manually assign labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been extensively evaluated on a real prostate CT dataset including 24 patients with 330 CT images and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state-of-the-art methods in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve the segmentation performance, which is clinically feasible in real practice.
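The transductive flavor of this pipeline can be approximated in a few lines: fit a sparse (L1-regularized) model on the handful of physician-labeled voxels and score every remaining voxel of the same image with it. Plain scikit-learn Lasso is used as a stand-in; tLasso and wLapRLS are the authors' own transductive formulations and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_voxels, n_features = 5000, 100
X = rng.normal(size=(n_voxels, n_features))              # appearance features, one row per voxel
labeled = rng.choice(n_voxels, size=60, replace=False)   # few physician-labeled voxels
y = (X[labeled, :3].sum(axis=1) > 0).astype(float)       # toy prostate/background labels

# Sparse regression on the labeled voxels only; the L1 penalty selects features.
model = Lasso(alpha=0.05)
model.fit(X[labeled], y)
print("features kept:", int(np.count_nonzero(model.coef_)))

# Score all voxels of the treatment image -> prostate-likelihood map (flattened here).
likelihood = np.clip(model.predict(X), 0.0, 1.0)
print(likelihood.shape)                                   # (5000,)
```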
Affiliation(s)
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, U.S.
- Yaozong Gao: Department of Radiology and BRIC, UNC Chapel Hill, U.S.
- Shu Liao: Department of Radiology and BRIC, UNC Chapel Hill, U.S.
- Yang Gao: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen: Department of Radiology and BRIC, UNC Chapel Hill, U.S.
10. Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE Trans Pattern Anal Mach Intell 2015;37:2286-2303. [PMID: 26440268] [DOI: 10.1109/tpami.2015.2424869]
Abstract
Conventional learning-based methods for segmenting the prostate in CT images ignore the relations among low-level features by assuming all these features are independent. Also, their feature selection steps usually neglect the image appearance changes in different local regions of CT images. To this end, we present a novel semi-automatic learning-based prostate segmentation method in this article. For segmenting the prostate in a certain treatment image, the radiation oncologist is first asked to take a few seconds to manually specify the first and last slices of the prostate. Then, the prostate is segmented in two steps: (i) estimation of a 3D prostate-likelihood map to predict the likelihood of each voxel being prostate, by employing the coupled feature representation and the proposed Spatial-COnstrained Transductive LassO (SCOTO); (ii) multi-atlas label fusion to generate the final segmentation result, using the prostate shape information obtained from both planning and previous treatment images. The major contributions of the proposed method include: (i) incorporating the radiation oncologist's manual specification to aid segmentation, (ii) adopting coupled features to relax the previous assumption of feature independence for voxel representation, and (iii) developing SCOTO for joint feature selection across different local regions. The experimental results show that the proposed method outperforms the state-of-the-art methods on a real-world prostate CT dataset consisting of 24 patients with 330 images in total, all of which were manually delineated by the radiation oncologist for performance evaluation. Moreover, our method is also clinically feasible, since the segmentation performance can be improved by requiring the radiation oncologist to spend only a few seconds on manual specification of the ending slices in the current treatment CT image.
11. Park SH, Gao Y, Shi Y, Shen D. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection. Med Phys 2015;41:111715. [PMID: 25370629] [DOI: 10.1118/1.4898200]
Abstract
PURPOSE: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy for prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. METHODS: The authors formulate the editing problem as a semisupervised learning problem that can utilize a priori knowledge from training data as well as the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, appropriate training labels that are well matched with the user interactions can be locally searched from a training set. With voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationship between voxels, location-adaptive features are selected from the confident voxels using regression forest and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced in the semisupervised learning algorithm, and the labels of unconfident voxels are predicted by the regularized semisupervised learning algorithm. RESULTS: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency and the robustness. The automatic segmentation results, with an original average Dice similarity coefficient of 0.78, were improved to 0.865-0.872 after conducting 55-59 interactions using the proposed method, where each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results with respect to different user interactions, compared to other methods. CONCLUSIONS: The proposed method obtains robust editing results with few interactions for various erroneous segmentation cases, by selecting location-adaptive features and further imposing manifold regularization. The authors expect the proposed method to greatly reduce the laborious burden of manual editing, as well as the intra- and interobserver variability across clinicians.
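The editing step is essentially graph-based semi-supervised label propagation from a few confident voxels to the unconfident ones. A minimal stand-in using scikit-learn's LabelSpreading is shown below; the feature vectors, graph parameters, and atlas-guided confidence estimation are assumptions, not the authors' regression-forest/Fisher-criterion pipeline.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
n_voxels = 2000
features = rng.normal(size=(n_voxels, 10))           # location-adaptive voxel features (toy)

# Labels: 1 = prostate, 0 = background, -1 = unconfident (to be predicted).
labels = np.full(n_voxels, -1)
confident = rng.choice(n_voxels, size=200, replace=False)
labels[confident] = (features[confident, 0] > 0).astype(int)   # toy confident labels

# Graph-based propagation regularized by the manifold of the feature space.
model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(features, labels)
predicted = model.transduction_                       # labels for every voxel, including unconfident ones
print(predicted.shape, np.bincount(predicted))
```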
Affiliation(s)
- Sang Hyun Park: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao: Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Republic of Korea
12. Dai X, Gao Y, Shen D. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 2015;42:2594-2606. [PMID: 25979051] [PMCID: PMC4409630] [DOI: 10.1118/1.4918755]
Abstract
PURPOSE: In image guided radiation therapy, it is crucial to localize the prostate quickly and accurately in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and achieve improved performance in landmark detection and prostate segmentation. METHODS: To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included in the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multi-atlas random sample consensus (multi-atlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or adjusted if needed) by clinicians before being included as a new shape example in the prostate shape dataset, to help localize the entire prostate in the next treatment image. RESULTS: The experimental results on 330 images of 24 patients show the effectiveness of the proposed online update scheme in improving the accuracy of both landmark detection and prostate segmentation. In addition, compared to other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. CONCLUSIONS: By appropriate use of valuable patient-specific information contained in the previous treatment images, the proposed online update scheme can obtain satisfactory results for both landmark detection and prostate segmentation.
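A regression-forest landmark detector of the kind used here can be mocked up with scikit-learn: each training voxel contributes its appearance features and its 3D offset to the landmark, and at test time the predicted offsets are aggregated into a landmark estimate. The single-layer forest, feature construction, and simple mean aggregation below are simplifying assumptions; the paper uses a two-layer, context-aware forest with online updates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_train = 3000
positions = rng.uniform(0, 100, size=(n_train, 3))         # toy voxel coordinates (mm)
landmark = np.array([50.0, 45.0, 60.0])                    # true landmark in the training image
features = positions + rng.normal(scale=2.0, size=positions.shape)  # toy appearance features
offsets = landmark - positions                              # regression target: offset to landmark

forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(features, offsets)

# New treatment image: every candidate voxel votes "position + predicted offset".
test_positions = rng.uniform(0, 100, size=(500, 3))
test_features = test_positions + rng.normal(scale=2.0, size=test_positions.shape)
votes = test_positions + forest.predict(test_features)
print("estimated landmark:", votes.mean(axis=0).round(1))   # aggregate votes (simple mean)
```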
Affiliation(s)
- Xiubin Dai: College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210015, China; IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Yaozong Gao: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Dinggang Shen: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
13. Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014;41:111915. [PMID: 25370648] [PMCID: PMC4241831] [DOI: 10.1118/1.4897615]
Abstract
PURPOSE: Technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering a high dose of radiation through these catheters. The main challenge of this HDR procedure is to accurately segment the prostate volume in the CT images for radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS: The approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of the approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of the patients. RESULTS: For the phantom study, the target registration error (TRE) of the gold markers was 0.41 ± 0.11 mm. For the ten patients, the TRE of the gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the proposed approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS: The authors have developed a novel approach to improve prostate contouring by utilizing the intraoperative TRUS-based prostate volume in CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy against MRI. The proposed segmentation method improves prostate delineation, enables accurate dose planning and treatment delivery, and can potentially enhance the treatment outcome of prostate HDR brachytherapy.
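The accuracy numbers quoted above are target registration errors (TRE) over implanted fiducial markers and Dice overlap of volumes. As a worked illustration, the snippet below computes TRE as the mean Euclidean distance between corresponding marker positions after registration; the marker coordinates are made up for the example.

```python
import numpy as np

def target_registration_error(markers_fixed, markers_moved):
    """Mean Euclidean distance (mm) between corresponding fiducial markers
    in the fixed image and the registered (moved) image."""
    d = np.linalg.norm(np.asarray(markers_fixed) - np.asarray(markers_moved), axis=1)
    return d.mean(), d.std()

# Hypothetical gold-marker coordinates (mm) in CT and after TRUS->CT fusion.
ct_markers   = np.array([[12.1, 40.3, 55.0], [18.7, 45.9, 60.2], [15.4, 38.8, 63.1]])
trus_markers = np.array([[12.9, 40.0, 55.6], [19.5, 46.4, 59.4], [16.2, 39.9, 62.5]])

mean_tre, std_tre = target_registration_error(ct_markers, trus_markers)
print(f"TRE = {mean_tre:.2f} ± {std_tre:.2f} mm")
```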
Affiliation(s)
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao: Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
14. Shao Y, Gao Y, Guo Y, Shi Y, Yang X, Shen D. Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE Trans Med Imaging 2014;33:1761-1780. [PMID: 25181734] [DOI: 10.1109/tmi.2014.2305691]
Abstract
Lung field segmentation in the posterior-anterior (PA) chest radiograph is important for pulmonary disease diagnosis and hemodialysis treatment. Due to high shape variation and boundary ambiguity, accurate lung field segmentation from chest radiographs is still a challenging task. To tackle these challenges, we propose a joint shape and appearance sparse learning method for robust and accurate lung field segmentation. The main contributions of this paper are: 1) a robust shape initialization method is designed to achieve an initial shape that is close to the lung boundary under segmentation; 2) a set of local sparse shape composition models is built on local lung shape segments to overcome the high shape variation; 3) a set of local appearance models is similarly adopted, using sparse representation to capture the appearance characteristics of local lung boundary segments and thus effectively deal with lung boundary ambiguity; 4) a hierarchical deformable segmentation framework is proposed to integrate the scale-dependent shape and appearance information for robust and accurate segmentation. Our method is evaluated on 247 PA chest radiographs from a public dataset. The experimental results show that the proposed local shape and appearance models outperform conventional shape and appearance models. Compared with most state-of-the-art lung field segmentation methods, our method also shows higher accuracy, which is comparable to the inter-observer annotation variation.
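Sparse shape composition, the core of the local shape models mentioned above, can be sketched compactly: an (aligned) input shape vector is approximated as a sparse linear combination of training shape vectors by solving an L1-regularized least-squares problem. Plain scikit-learn Lasso is used as a stand-in; the paper's local, segment-wise models and alignment steps are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_points, n_shapes = 40, 25
# Training lung shapes: each column stacks the (x, y) boundary points of one example.
D = rng.normal(size=(2 * n_points, n_shapes))

# A noisy input shape that is truly a mix of two training shapes.
y = 0.6 * D[:, 3] + 0.4 * D[:, 17] + rng.normal(scale=0.05, size=2 * n_points)

# Sparse composition: y ≈ D @ w with only a few nonzero weights.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(D, y)
w = lasso.coef_
print("active shapes:", np.flatnonzero(w))           # expect roughly {3, 17}
reconstructed = D @ w                                 # regularized shape estimate
print("residual:", np.linalg.norm(reconstructed - y).round(3))
```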
15. Wu Y, Liu G, Huang M, Guo J, Jiang J, Yang W, Chen W, Feng Q. Prostate segmentation based on variant scale patch and local independent projection. IEEE Trans Med Imaging 2014;33:1290-1303. [PMID: 24893258] [DOI: 10.1109/tmi.2014.2308901]
Abstract
Accurate segmentation of the prostate in computed tomography (CT) images is important in image-guided radiotherapy; however, difficulties remain associated with this task. In this study, an automatic framework is designed for prostate segmentation in CT images. We propose a novel image feature extraction method, namely the variant scale patch, which can provide rich image information in a low-dimensional feature space. We assume that samples from different classes lie on different nonlinear submanifolds and design a new segmentation criterion called local independent projection (LIP). In our method, a dictionary containing training samples is constructed. To utilize the latest image information, we use an online update strategy to construct this dictionary. In the proposed LIP, locality is emphasized rather than sparsity; local anchor embedding is performed to determine the dictionary coefficients. Several morphological operations are performed to refine the results. The proposed method has been evaluated on 330 3D images of 24 patients. Results show that the proposed method is robust and effective in segmenting the prostate in CT images.
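The local-anchor-embedding idea, projecting a test patch onto its nearest dictionary patches with nonnegative weights that sum to one, can be approximated with a small nonnegative least-squares solve followed by normalization. This is a simplified illustration under those assumptions, not the exact optimization used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def local_projection(x, dictionary, k=5):
    """Approximate a patch x as a convex-like combination of its k nearest
    dictionary patches (rows of `dictionary`). Simplified local anchor embedding."""
    dists = np.linalg.norm(dictionary - x, axis=1)
    anchors = np.argsort(dists)[:k]                  # restrict to a local neighborhood
    w, _ = nnls(dictionary[anchors].T, x)            # nonnegative least squares
    if w.sum() > 0:
        w /= w.sum()                                 # normalize weights to sum to 1
    return anchors, w

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(200, 64))              # 200 training patches, 64-dim features
x = dictionary[10] * 0.7 + dictionary[42] * 0.3      # toy test patch
anchors, weights = local_projection(x, dictionary)
print(anchors, weights.round(2))
```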
16. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE Trans Med Imaging 2014;33:518-534. [PMID: 24495983] [PMCID: PMC4379484] [DOI: 10.1109/tmi.2013.2291495]
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment CT images, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. The model is personalized in two steps: backward pruning, which discards obsolete population-based knowledge, and forward learning, which incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively than traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) the learning framework does not make any parametric model assumption and hence allows the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ~0.89) and quickly (~4 s), which satisfies the real-world clinical requirements of IGRT.
Affiliation(s)
- Yaozong Gao: Department of Computer Science and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yiqiang Zhan: SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Dinggang Shen: Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-701, Korea
17. Shi Y, Liao S, Gao Y, Zhang D, Gao Y, Shen D. Prostate Segmentation in CT Images via Spatial-Constrained Transductive Lasso. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2013. [PMID: 24336321] [DOI: 10.1109/cvpr.2013.289]
Abstract
Accurate prostate segmentation in CT images is a significant yet challenging task for image guided radiotherapy. In this paper, a novel semi-automated prostate segmentation method is presented. Specifically, to segment the prostate in the current treatment image, the physician first takes a few seconds to manually specify the first and last slices of the prostate in the image space. Then, the prostate is segmented automatically in two steps: (i) prostate-likelihood estimation, which predicts the prostate likelihood for each voxel in the current treatment image and generates the 3-D prostate-likelihood map via the proposed Spatial-COnstrained Transductive LassO (SCOTO); (ii) multi-atlas label fusion, which generates the final segmentation result using the prostate shape information obtained from the planning and previous treatment images. The experimental results show that the proposed method outperforms several state-of-the-art methods on prostate segmentation in a real prostate CT dataset consisting of 24 patients with 330 images. Moreover, it is also clinically feasible, since our method only requires the physician to spend a few seconds on manual specification of the first and last slices of the prostate.
Affiliation(s)
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, U.S.
18. Zhang Y, Yap PT, Wu G, Feng Q, Lian J, Chen W, Shen D. Resolution enhancement of lung 4D-CT data using multiscale interphase iterative nonlocal means. Med Phys 2013;40:051916. [DOI: 10.1118/1.4802747]
Affiliation(s)
- Yu Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
19. Geraghty JP, Grogan G, Ebert MA. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial. Radiat Oncol 2013;8:106. [PMID: 23631832] [PMCID: PMC3653737] [DOI: 10.1186/1748-717x-8-106]
Abstract
Background: This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. Methods: CT scans of two prostate cancer patients (‘benchmarking cases’), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 “RADAR” trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets was also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. Results: There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for the bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of the rectum, human-observer contours were typically within a mean 0.6 cm of automatically defined contours. Prostate structures were more consistent for the HR case than the IR case, with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and the superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. Conclusions: This study has demonstrated quantitative analysis for the comparison of multi-observer segmentation studies. For automatic segmentation algorithms based on image registration, as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes, and their use in the context of a multicentre trial should be carefully considered.
Affiliation(s)
- John P Geraghty: Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
20. Liao S, Gao Y, Lian J, Shen D. Sparse patch-based label propagation for accurate prostate localization in CT images. IEEE Trans Med Imaging 2013;32:419-434. [PMID: 23204280] [PMCID: PMC3845245] [DOI: 10.1109/tmi.2012.2230018]
Abstract
In this paper, we propose a new prostate computed tomography (CT) segmentation method for image guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, a patch-based representation in the discriminative feature space with logistic sparse LASSO is used as the anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under the sparse representation framework is designed to segment the prostate in new treatment images, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal means principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful context information in the same image for aiding the labeling of the remaining voxels. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database consisting of 24 patients, each with more than 10 treatment images, and further compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than the other methods under comparison.
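The nonlocal-means flavor of the label fusion can be illustrated in a few lines: for a target voxel, compare its patch with candidate patches from previously segmented images, turn patch distances into weights, and average the candidates' labels. Patch size, search window, and the Gaussian weighting below are illustrative assumptions; the paper additionally imposes a sparsity constraint and a hierarchical labeling order.

```python
import numpy as np

def nonlocal_label_vote(target_patch, atlas_patches, atlas_labels, h=2.0):
    """Estimate the prostate likelihood of one voxel from candidate patches of
    previously segmented images (nonlocal-means weighting, no sparsity term)."""
    # Squared patch distances between the target and every candidate patch.
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=(1, 2))
    w = np.exp(-d2 / (h ** 2))                 # similar patches get large weights
    w /= w.sum()
    return float(np.dot(w, atlas_labels))      # weighted vote in [0, 1]

rng = np.random.default_rng(0)
target = rng.random((5, 5))                    # patch around the voxel to label
candidates = rng.random((50, 5, 5))            # candidate patches from earlier segmentations
labels = rng.integers(0, 2, size=50)           # their center-voxel labels (0/1)
print(round(nonlocal_label_vote(target, candidates, labels), 3))
```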
Affiliation(s)
- Shu Liao: Department of Radiology and Biomedical Research Imaging Center (BRIC), Chapel Hill, NC 27599, USA
21. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. Med Image Comput Comput Assist Interv 2013;16:378-386. [PMID: 24579163] [PMCID: PMC3939625] [DOI: 10.1007/978-3-642-40763-5_47]
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in treatment CTs, which is challenging due to low tissue contrast and large anatomical variations across patients. On the other hand, in the IGRT workflow, a series of CT images is acquired from the same patient under treatment, which contains valuable patient-specific information yet is often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. In particular, the model is personalized in two steps: backward pruning, which discards obsolete population-based knowledge, and forward learning, which incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of the specific patient much more accurately. Validated on a large dataset (349 CT scans), our method achieved high localization accuracy (DSC approximately 0.87) in 4 seconds.
Affiliation(s)
- Yaozong Gao: Department of Radiology and BRIC, University of North Carolina at Chapel Hill
- Yiqiang Zhan: Siemens Medical Solutions USA, Inc., Malvern, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill