1
Li Y, Lin C, Zhang Y, Feng S, Huang M, Bai Z. Automatic segmentation of prostate MRI based on 3D pyramid pooling Unet. Med Phys 2023; 50:906-921. [PMID: 35923153] [DOI: 10.1002/mp.15895]
Abstract
PURPOSE Automatic segmentation of prostate magnetic resonance (MR) images is crucial for the diagnosis, evaluation, and prognosis of prostate diseases, including prostate cancer. In recent years, the mainstream approach to prostate segmentation has shifted to convolutional neural networks. However, owing to the complexity of tissue structure in MR images and the limitations of existing methods in spatial context modeling, segmentation performance still leaves room for improvement. METHODS In this study, we proposed a novel 3D pyramid pooling Unet that benefits from a pyramid pooling structure embedded in the skip connection (SC) and from deep supervision (DS) in the up-sampling path of the 3D Unet. The parallel SC of the conventional 3D Unet repeatedly sends low-resolution information to the feature map, resulting in blurred image features. To overcome this shortcoming, we merge each decoder layer with the encoder feature map of the same scale and with the smaller-scale feature map of the pyramid pooling encoder. This SC combines low-level details and high-level semantics at two different levels of feature maps. In addition, pyramid pooling performs multifaceted feature extraction on each image behind the convolutional layer, and DS learns hierarchical representations from comprehensively aggregated feature maps, which improves task accuracy. RESULTS Experiments on 3D prostate MR images of 78 patients demonstrated that our results were highly correlated with expert manual segmentation. The average relative volume difference and Dice similarity coefficient for the prostate volume were 2.32% and 91.03%, respectively. CONCLUSION Quantitative experiments demonstrate that, compared with other methods, the results of our method are highly consistent with expert manual segmentation.
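Both reported metrics, the Dice similarity coefficient and the relative volume difference, can be computed directly from binary segmentation masks. A minimal NumPy sketch (the voxel grid and toy shapes below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def relative_volume_difference(pred: np.ndarray, truth: np.ndarray) -> float:
    """Absolute volume difference as a fraction of the reference volume."""
    v_pred = pred.astype(bool).sum()
    v_truth = truth.astype(bool).sum()
    return abs(v_pred - v_truth) / v_truth

# Toy 3D volumes: two equal-sized cubes, one shifted by a voxel, in an 8x8x8 grid.
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True          # 64 voxels
pred = np.zeros((8, 8, 8), dtype=bool)
pred[3:7, 2:6, 2:6] = True           # 64 voxels, 48 of them overlapping
print(dice_coefficient(pred, truth))            # 2*48/(64+64) = 0.75
print(relative_volume_difference(pred, truth))  # equal volumes -> 0.0
```

Note that equal volumes can still disagree spatially, which is why both metrics are reported together.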
Affiliation(s)
- Yuchun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Cong Lin
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Yu Zhang
- College of Computer Science and Technology, Hainan University, Haikou, China
- Siling Feng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Mengxing Huang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhiming Bai
- Haikou Municipal People's Hospital and Central South University Xiangya Medical College Affiliated Hospital, Haikou, China
2
Zavala-Romero O, Breto AL, Xu IR, Chang YCC, Gautney N, Dal Pra A, Abramowitz MC, Pollack A, Stoyanova R. Segmentation of prostate and prostate zones using deep learning: a multi-MRI vendor analysis. Strahlenther Onkol 2020; 196:932-942. [PMID: 32221622] [PMCID: PMC8418872] [DOI: 10.1007/s00066-020-01607-x]
Abstract
PURPOSE To develop a deep-learning-based segmentation algorithm for the prostate and its peripheral zone (PZ) that is reliable across multiple MRI vendors. METHODS This is a retrospective study. The dataset consisted of 550 MRIs (Siemens: 330, General Electric [GE]: 220). A multistream 3D convolutional neural network was used for automatic segmentation of the prostate and its PZ on T2-weighted (T2-w) MRI. The prostate and PZ were manually contoured on axial T2-w images. The network uses axial, coronal, and sagittal T2-w series as input. Preprocessing of the input data includes bias correction, resampling, and image normalization. Data from the two MRI vendors (Siemens and GE) were used to test the proposed network. Six models were trained, three for the prostate and three for the PZ. Of each set of three, two were trained on data from each vendor separately, and a third (Combined) on the aggregate of both datasets. The Dice coefficient (DSC) was used to compare manual and predicted segmentations. RESULTS For prostate segmentation, the Combined model obtained DSCs of 0.893 ± 0.036 and 0.825 ± 0.112 (mean ± standard deviation) on Siemens and GE, respectively. For the PZ, the best DSCs were from the Combined model: 0.811 ± 0.079 and 0.788 ± 0.093. While the Siemens model underperformed on the GE dataset and vice versa, the Combined model achieved robust performance on both datasets. CONCLUSION The proposed network performs comparably to inter-expert variability in segmenting the prostate and its PZ. Combining images from different MRI vendors during training is of paramount importance for building a universal model for prostate and PZ segmentation.
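The preprocessing chain above ends with image normalization, which is what makes intensities from different vendors comparable. A common choice is per-volume z-scoring; the sketch below is an illustrative assumption, not the authors' exact pipeline (which also includes bias correction and resampling):

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Shift and scale a volume to zero mean and unit variance.

    If a boolean mask is given, statistics are computed only over the
    masked voxels (e.g., a body mask), but the whole volume is rescaled.
    """
    voxels = volume[mask] if mask is not None else volume
    mu = voxels.mean()
    sigma = voxels.std()
    return (volume - mu) / sigma

# Two synthetic volumes with very different intensity scales,
# standing in for scans from two different vendors.
rng = np.random.default_rng(0)
vendor_a = rng.normal(300.0, 60.0, size=(4, 64, 64))
vendor_b = rng.normal(1200.0, 250.0, size=(4, 64, 64))
for vol in (vendor_a, vendor_b):
    norm = zscore_normalize(vol)
    print(float(norm.mean()), float(norm.std()))  # approximately 0 and 1
```

After this step, both volumes live on the same intensity scale, so a single network can train on their aggregate.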
Affiliation(s)
- Olmo Zavala-Romero
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Adrian L Breto
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Isaac R Xu
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Nicole Gautney
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Alan Dal Pra
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Matthew C Abramowitz
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Alan Pollack
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Radka Stoyanova
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
3
da Silva GLF, Diniz PS, Ferreira JL, França JVF, Silva AC, de Paiva AC, de Cavalcanti EAA. Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans. Med Biol Eng Comput 2020; 58:1947-1964. [DOI: 10.1007/s11517-020-02199-5]
4
Comelli A, Stefano A, Coronnello C, Russo G, Vernuccio F, Cannella R, Salvaggio G, Lagalla R, Barone S. Radiomics: A New Biomedical Workflow to Create a Predictive Model. Communications in Computer and Information Science 2020. [DOI: 10.1007/978-3-030-52791-4_22]
5
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019; 46:1707-1718. [PMID: 30702759] [DOI: 10.1002/mp.13416]
Abstract
PURPOSE Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the task is challenging owing to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation is time-consuming and subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. METHODS We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). A deeply supervised mechanism was introduced into the 3D FCN to alleviate the exploding- and vanishing-gradient problems common in training deep models, forcing the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the network and improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function, with cosine and cross-entropy terms that measure similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians served as the gold standard against which segmentation accuracy was measured. RESULTS The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95%HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. On a public dataset (PROMISE12) of 50 T2-weighted prostate MR volumes, our method yielded a DSC of 0.88 ± 0.05, MSD of 1.02 ± 0.35 mm, 95%HD of 9.50 ± 5.11 mm, and aRVD of 8.93 ± 7.56. CONCLUSION We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
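The combined loss pairs a similarity measure (cosine) with a dissimilarity measure (cross entropy) between the predicted probability map and the manual contour mask. A minimal NumPy sketch of one plausible formulation follows; the equal weighting and exact normalization are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def cosine_similarity_loss(p: np.ndarray, g: np.ndarray) -> float:
    """1 minus the cosine similarity of the flattened prediction and ground truth."""
    p, g = p.ravel(), g.ravel()
    return 1.0 - float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g)))

def binary_cross_entropy(p: np.ndarray, g: np.ndarray, eps: float = 1e-7) -> float:
    """Mean voxel-wise cross entropy, with clipping for numerical safety."""
    p = np.clip(p.ravel(), eps, 1.0 - eps)
    g = g.ravel()
    return float(-(g * np.log(p) + (1 - g) * np.log(1 - p)).mean())

def combined_loss(p, g, w: float = 0.5) -> float:
    """Weighted sum of the similarity and dissimilarity terms (weight assumed)."""
    return w * cosine_similarity_loss(p, g) + (1 - w) * binary_cross_entropy(p, g)

# A confident correct prediction should score lower than a confident wrong one.
g = np.array([[1.0, 0.0], [1.0, 0.0]])
good = np.array([[0.9, 0.1], [0.8, 0.2]])
bad = np.array([[0.2, 0.8], [0.3, 0.7]])
assert combined_loss(good, g) < combined_loss(bad, g)
```

Combining both terms penalizes predictions that are wrong in shape (cosine) as well as wrong voxel-by-voxel (cross entropy).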
Affiliation(s)
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia, 750021, P.R. China
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
6
Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. Proc SPIE Int Soc Opt Eng 2019; 10951:109512I. [PMID: 32528212] [PMCID: PMC7289512] [DOI: 10.1117/12.2512282]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland is usually globular with a smoothly curved surface that can be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used inter-subject correspondence between prostate surface points to model prostate shape variation with statistical point distribution modeling. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned shape and texture characteristics to search for the prostate border near an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing the algorithm. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC), respectively. The average differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation. Its accuracy is within the inter-expert variability observed in manual segmentation and comparable to the best results reported in the literature.
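The surface metrics used here and throughout these studies (mean absolute distance and Hausdorff distance) reduce to nearest-neighbor distances between two sets of boundary points. A brute-force sketch, assuming small point clouds rather than full surface meshes:

```python
import numpy as np

def _nearest_dists(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """For each point in a, Euclidean distance to its nearest point in b."""
    diffs = a[:, None, :] - b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def mean_absolute_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetrized mean of nearest-neighbor distances (one MAD convention)."""
    return 0.5 * (_nearest_dists(a, b).mean() + _nearest_dists(b, a).mean())

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Worst-case boundary disagreement: max of the two directed maxima."""
    return max(_nearest_dists(a, b).max(), _nearest_dists(b, a).max())

# Two unit squares (corner points), one shifted by 1 mm along x.
sq = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
shifted = sq + np.array([1.0, 0.0])
print(hausdorff_distance(sq, shifted))       # 1.0
print(mean_absolute_distance(sq, shifted))   # 0.5
```

The O(n*m) pairwise computation is fine at this scale; real surface meshes would use a k-d tree instead.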
Affiliation(s)
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu
- State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma
- Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
7
Towards a universal MRI atlas of the prostate and prostate zones: comparison of MRI vendor and image acquisition parameters. Strahlenther Onkol 2018; 195:121-130. [PMID: 30140944] [DOI: 10.1007/s00066-018-1348-5]
Abstract
BACKGROUND AND PURPOSE The aim of this study was to evaluate an automatic multi-atlas-based segmentation method for generating prostate, peripheral zone (PZ), and transition zone (TZ) contours on MRIs with and without fat saturation (±FS), and to compare MRIs from different vendor MRI systems. METHODS T2-weighted (T2) and fat-saturated (T2FS) MRIs were acquired on 3T GE (GE, Waukesha, WI, USA) and Siemens (Erlangen, Germany) systems. Manual prostate and PZ contours were used to create atlas libraries. When a test MRI is entered, the atlas segmentation procedure automatically identifies the atlas subjects that best match the test subject, followed by a normalized intensity-based free-form deformable registration. The contours are transformed to the test subject, and Dice similarity coefficients (DSC) and Hausdorff distances between atlas-generated and manual contours were used to assess performance. RESULTS Three atlases were generated based on GE_T2 (n = 30), GE_T2FS (n = 30), and Siem_T2FS (n = 31). When test images matched the contrast and vendor of the atlas, DSCs of 0.81 and 0.83 for T2 ± FS were obtained (baseline performance). Atlases performed with higher accuracy when segmenting (i) T2FS vs. T2 images, likely owing to superior contrast between the prostate and surrounding tissue; (ii) the whole prostate vs. zonal anatomy; and (iii) the mid-gland vs. the base and apex. Atlas performance declined when tested on images of differing contrast or MRI vendor. Conversely, combined atlases showed performance similar to baseline. CONCLUSION The MRI atlas-based segmentation method achieved good results for the prostate, PZ, and TZ compared with expert-contoured volumes. Combined atlases performed similarly to a matching atlas and scan type. The technique is fast, fully automatic, and implemented on a commercially available clinical platform.
8
Iyaniwura JE, Elfarnawany M, Ladak HM, Agrawal SK. An automated A-value measurement tool for accurate cochlear duct length estimation. J Otolaryngol Head Neck Surg 2018; 47:5. [PMID: 29357924] [PMCID: PMC5778705] [DOI: 10.1186/s40463-018-0253-3]
Abstract
BACKGROUND There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. METHOD Clinical and micro-CT images of 20 cadaveric cochlea specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High-resolution micro-CT images of the same 19 specimens were used to measure the gold-standard A-values for comparison against the manual and automated methods. RESULTS The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, which improved to 2.7 ± 2.1% with the automated algorithm. Both the automated and manual methods correlated significantly with the gold-standard micro-CT A-values (r = 0.70, p < 0.01 and r = 0.69, p < 0.01, respectively). CONCLUSION An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated observer variability and improved accuracy compared to manual measurements by experts. This open-source tool has the potential to benefit cochlear implant recipients in the future.
Affiliation(s)
- John E Iyaniwura
- Biomedical Engineering Graduate Program, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Mai Elfarnawany
- Department of Otolaryngology-Head and Neck Surgery, Western University, London, ON, Canada
- Hanif M Ladak
- Biomedical Engineering Graduate Program, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Department of Otolaryngology-Head and Neck Surgery, Western University, London, ON, Canada
- Department of Medical Biophysics, Western University, London, ON, Canada
- Department of Electrical and Computer Engineering, Western University, London, ON, Canada
- Sumit K Agrawal
- Biomedical Engineering Graduate Program, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Department of Otolaryngology-Head and Neck Surgery, Western University, London, ON, Canada
- Department of Electrical and Computer Engineering, Western University, London, ON, Canada
- London Health Science Centre, Room B1-333, University Hospital, 339 Windermere Rd., London, ON, Canada
9
Abstract
Multi-parametric magnetic resonance imaging (mp-MRI) plays an increasingly important role in the diagnosis of prostate cancer. Owing to the large amount of data and the variability of mp-MRI, tumor detection can be affected by multiple factors, such as the observer's clinical experience, image quality, and the appearance of lesions. To improve quantitative assessment of the disease and reduce reporting time, various computer-aided diagnosis (CAD) systems have been designed to help radiologists identify lesions. This manuscript presents an overview of the literature on prostate CAD using mp-MRI, focusing on studies from the most recent five years. Current prostate CAD technologies and their utilization are discussed in this review.
10
Tian Z, Liu L, Zhang Z, Xue J, Fei B. A supervoxel-based segmentation method for prostate MR images. Med Phys 2017; 44:558-569. [PMID: 27991675] [DOI: 10.1002/mp.12048]
Abstract
PURPOSE Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. METHODS A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is framed as assigning a binary label, prostate or background, to each supervoxel. A supervoxel-based energy function with data and smoothness terms is used to model the labeling. The data term estimates the likelihood of a supervoxel belonging to the prostate using a supervoxel-based shape feature, and the geometric relationship between neighboring supervoxels is used to build the smoothness term. A 3D graph cut is used to minimize the energy function, yielding the supervoxel labels and hence the prostate segmentation. A 3D active contour model, initialized with the graph-cut output, is then used to obtain a smooth surface. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset. RESULTS The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for the 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields satisfactory results for prostate MR images. CONCLUSION The proposed supervoxel-based method can accurately segment prostate MR images and has a variety of applications in prostate cancer diagnosis and therapy.
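The energy being minimized combines a per-supervoxel data term with a pairwise smoothness penalty on neighboring supervoxels that receive different labels. The toy evaluation below illustrates the form of such an energy only; the paper's actual data term uses shape features and the minimization uses a 3D graph cut, neither of which this sketch reproduces:

```python
import numpy as np

def labeling_energy(labels, data_cost, edges, smoothness=1.0):
    """E(L) = sum_i data_cost[i, L_i] + smoothness * #{(i,j) in edges : L_i != L_j}."""
    data = sum(data_cost[i, label] for i, label in enumerate(labels))
    smooth = sum(1 for i, j in edges if labels[i] != labels[j])
    return float(data) + smoothness * smooth

# Three supervoxels in a chain; columns give the cost of labeling each
# one as background (0) or prostate (1). Values are made up for illustration.
data_cost = np.array([[0.2, 1.5],   # clearly background
                      [0.9, 0.8],   # ambiguous
                      [1.6, 0.1]])  # clearly prostate
edges = [(0, 1), (1, 2)]

# The ambiguous middle supervoxel is pulled toward its neighbor's label:
# (0, 1, 1) incurs one label change, (0, 0, 1) also one, so the data term decides.
print(labeling_energy((0, 0, 1), data_cost, edges))  # 1.2 + 1.0 = 2.2
print(labeling_energy((0, 1, 1), data_cost, edges))  # 1.1 + 1.0 = 2.1
```

A graph cut searches all labelings for the global minimum of exactly this kind of energy; here the lower-energy labeling assigns the ambiguous supervoxel to the prostate.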
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
- Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou, 510060, P.R. China
- Zhenfeng Zhang
- Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou, 510060, P.R. China
- Jianru Xue
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, No.28 Xianning West Road, Xi'an, 710049, P.R. China
- Baowei Fei
- Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
11
Krebs J, Mansi T, Delingette H, Zhang L, Ghesu FC, Miao S, Maier AK, Ayache N, Liao R, Kamen A. Robust Non-rigid Registration Through Agent-Based Action Learning. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017. [DOI: 10.1007/978-3-319-66182-7_40]