1. Wilms M, Ehrhardt J, Forkert ND. Localized Statistical Shape Models for Large-scale Problems With Few Training Data. IEEE Trans Biomed Eng 2022;69:2947-2957. PMID: 35271438. DOI: 10.1109/TBME.2022.3158278.
Abstract
OBJECTIVE Statistical shape models have been successfully used in numerous biomedical image analysis applications where prior shape information is helpful, such as organ segmentation or data augmentation for training deep learning models. However, training such models requires large data sets, which are often unavailable; as a result, shape models frequently fail to represent local details of unseen shapes. This work introduces a kernel-based method to alleviate this problem via so-called model localization. It is specifically designed for large-scale shape modeling scenarios such as deep learning data augmentation and fits seamlessly into the classical shape modeling framework. METHOD Building on recent advances in multi-level shape model localization via distance-based covariance matrix manipulations and Grassmannian-based level fusion, this work proposes a novel and computationally efficient kernel-based localization technique. Moreover, a novel way to improve the specificity of such models via normalizing flow-based density estimation is presented. RESULTS The method is evaluated on the publicly available JSRT/SCR chest X-ray and IXI brain data sets. The results confirm the effectiveness of the kernelized formulation and also highlight the models' improved specificity when the proposed density estimation method is used. CONCLUSION This work shows that flexible and specific shape models can be generated from few training samples in a computationally efficient way by combining ideas from kernel theory and normalizing flows. SIGNIFICANCE The proposed method, together with its publicly available implementation, makes it possible to build shape models from few training samples that are directly usable for applications such as data augmentation.
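The core idea behind distance-based covariance localization can be sketched in a few lines: damping correlations between distant landmarks with a Gaussian taper raises the rank of a few-sample shape covariance, giving the model flexibility beyond its handful of PCA modes. The taper form, `sigma`, and the toy data below are illustrative assumptions, not the paper's exact multi-level formulation.

```python
import numpy as np

def localized_covariance(shapes, coords, sigma):
    """Distance-based covariance localization (sketch): damp correlations
    between distant landmarks with a Gaussian taper so a few-sample model
    gains flexibility."""
    n_points, d = coords.shape
    centered = shapes - shapes.mean(axis=0)
    cov = centered.T @ centered / (shapes.shape[0] - 1)
    # Landmark-to-landmark distances -> Gaussian taper, expanded so each
    # d-by-d coordinate block of the covariance gets the same weight.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    taper = np.exp(-dist ** 2 / (2 * sigma ** 2))
    taper = np.kron(taper, np.ones((d, d)))
    # The elementwise (Schur) product of two PSD matrices stays PSD, while
    # its rank can grow well beyond the n_samples - 1 modes of plain PCA.
    return cov * taper

rng = np.random.default_rng(0)
coords = rng.normal(size=(20, 2))                         # 20 landmarks in 2D
shapes = coords.ravel() + 0.1 * rng.normal(size=(5, 40))  # only 5 training shapes
cov_loc = localized_covariance(shapes, coords, sigma=0.5)
```

Sampling from a Gaussian with this localized covariance then yields shapes with more local variation than the 4 modes that 5 training samples would otherwise allow.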
2. Sun Y, Li Y, Yang Y, Yue H. Differential evolution algorithm with population knowledge fusion strategy for image registration. Complex Intell Syst 2021. DOI: 10.1007/s40747-021-00380-3.
Abstract
Image registration is a challenging NP-hard problem in computer vision. The differential evolution algorithm is a simple and efficient method for finding the best match among all possible common parts of images. To improve the efficiency and accuracy of registration, a knowledge-fusion-based differential evolution algorithm is proposed that combines segmentation, the gradient descent method, and a hybrid selection strategy to enhance exploration in the early stage and exploitation in the later stage. The proposed algorithm has been implemented and tested on the CEC2013 benchmark and real image data. The experimental results show that it is superior to existing algorithms in terms of solution quality, convergence speed, and success rate.
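For reference, the plain DE/rand/1/bin baseline that such variants build on fits in a few lines; the objective below is a toy translation-recovery stand-in for an image similarity measure, and all parameter values are illustrative, not the authors' knowledge-fusion variant.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, f=0.8, cr=0.9, iters=200, seed=0):
    """Baseline DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    costs = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + f * (b - c), lo, hi)     # differential mutation
            mask = rng.random(len(lo)) < cr               # binomial crossover
            mask[rng.integers(len(lo))] = True            # keep >= 1 mutant gene
            trial = np.where(mask, mutant, pop[i])
            tc = cost(trial)
            if tc < costs[i]:                             # greedy selection
                pop[i], costs[i] = trial, tc
    return pop[np.argmin(costs)], costs.min()

# Toy registration objective: recover a 2D translation by minimizing SSD.
target = np.array([1.5, -0.7])
cost = lambda t: np.sum((t - target) ** 2)
best, best_cost = differential_evolution(cost, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

The paper's contribution then replaces parts of this loop (selection, local refinement) with knowledge-fusion strategies to balance exploration and exploitation.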
3. A statistical weighted sparse-based local lung motion modelling approach for model-driven lung biopsy. Int J Comput Assist Radiol Surg 2020;15:1279-1290. PMID: 32347465. DOI: 10.1007/s11548-020-02154-7.
Abstract
PURPOSE Lung biopsy is currently the most effective procedure for cancer diagnosis. However, respiration-induced location uncertainty presents a challenge in precise lung biopsy. To reduce the medical image requirements for motion modelling, in this study, local lung motion information in the region of interest (ROI) is extracted from whole chest computed tomography (CT) and CT-fluoroscopy scans to predict the motion of potentially cancerous tissue and important vessels during the model-driven lung biopsy process. METHODS The motion prior of the ROI was generated via a sparse linear combination of a subset of motion information from a respiratory motion repository, and a weighted sparse-based statistical model was used to preserve the local respiratory motion details. We also employed a motion prior-based registration method to improve the motion estimation accuracy in the ROI and designed adaptive variable coefficients to interactively weigh the relative influence of the prior knowledge and image intensity information during the registration process. RESULTS The proposed method was applied to ten test subjects for the estimation of the respiratory motion field. The quantitative analysis resulted in a mean target registration error of 1.5 (0.8) mm and an average symmetric surface distance of 1.4 (0.6) mm. CONCLUSIONS The proposed method shows remarkable advantages over traditional methods in preserving local motion details and reducing the estimation error in the ROI. These results also provide a benchmark for lung respiratory motion modelling in the literature.
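The "sparse linear combination of a subset of motion information" idea can be sketched as sparse coding over a repository of motion fields. The ISTA solver, `lam`, and the toy dictionary below are illustrative choices, not the authors' exact weighted formulation.

```python
import numpy as np

def sparse_motion_weights(D, y, lam=0.1, iters=500):
    """Express an observed motion y as a sparse combination of repository
    motion fields (columns of D) via ISTA on the lasso objective
    0.5*||D w - y||^2 + lam*||w||_1."""
    w = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    for _ in range(iters):
        z = w - step * (D.T @ (D @ w - y))   # gradient step on the data term
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 20))
D /= np.linalg.norm(D, axis=0)               # 20 normalized repository "fields"
y = 1.0 * D[:, 3] + 0.5 * D[:, 7]            # unseen motion = 2 repository atoms
w = sparse_motion_weights(D, y)
```

The recovered weights concentrate on the few repository subjects whose motion resembles the new subject, which is what lets the prior preserve local respiratory details.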
4. Cui Z, Mahmoodi S, Guy M, Lewis E, Havelock T, Bennett M, Conway J. A general framework in single and multi-modality registration for lung imaging analysis using statistical prior shapes. Comput Methods Programs Biomed 2020;187:105232. PMID: 31809995. DOI: 10.1016/j.cmpb.2019.105232.
Abstract
BACKGROUND AND OBJECTIVE A fusion of multi-slice computed tomography (MSCT) and single photon emission computed tomography (SPECT) represents a powerful tool for chronic obstructive pulmonary disease (COPD) analysis. In this paper, a novel, high-performance MSCT/SPECT non-rigid registration algorithm is proposed to accurately map lung lobe information onto the functional imaging. Such a fusion can then be used to guide lung volume reduction surgery. METHODS The multi-modality fusion method proposed here uses a multi-channel technique that registers the MSCT scan to the ventilation and perfusion SPECT scans simultaneously. Furthermore, a novel cost function with fewer parameters is proposed, which avoids tuning a weighting parameter and achieves better performance than existing methods in the literature. RESULTS A lung imaging dataset from a hospital and a synthetic, software-generated dataset are employed to validate single- and multi-modality registration results. Our method is demonstrated to improve registration accuracy and stability by up to 23% and 54%, respectively. The proposed multi-channel technique is also shown to achieve improved registration accuracy compared with the single-channel method. CONCLUSIONS The fusion of lung lobes onto SPECT imaging is achievable through accurate MSCT/SPECT alignment. It can also be used to perform lobar lung activity analysis for COPD diagnosis and treatment.
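The multi-channel principle, registering one moving image against several fixed channels under a single cost, can be illustrated with a deliberately tiny stand-in: integer translations only, sum-of-squared-differences similarity, and brute-force search. None of this is the paper's actual non-rigid algorithm; it only shows why summing over channels removes the need for a per-channel weighting parameter in the symmetric case.

```python
import numpy as np

def multichannel_cost(moving, fixed_channels, shift):
    """SSD against every fixed channel after an integer shift of the
    moving image (toy translation-only model)."""
    warped = np.roll(moving, shift, axis=(0, 1))
    return sum(np.sum((warped - f) ** 2) for f in fixed_channels)

def register_multichannel(moving, fixed_channels, max_shift=4):
    """Brute-force over integer shifts, fitting all channels at once."""
    shifts = [(dy, dx)
              for dy in range(-max_shift, max_shift + 1)
              for dx in range(-max_shift, max_shift + 1)]
    return min(shifts, key=lambda s: multichannel_cost(moving, fixed_channels, s))

rng = np.random.default_rng(2)
moving = rng.normal(size=(32, 32))
true_shift = (2, -3)
# Two "SPECT channels": same underlying motion, independent mild noise.
fixed = [np.roll(moving, true_shift, axis=(0, 1)) + 0.05 * rng.normal(size=(32, 32))
         for _ in range(2)]
est = register_multichannel(moving, fixed)
```

Because both channels vote on the same transform, noise in one channel is compensated by the other, which is the intuition behind the reported stability gains.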
Affiliation(s)
- Zheng Cui: School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom
- Sasan Mahmoodi: School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom
- Matthew Guy: Department of Imaging Physics, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Emma Lewis: Scientific Computing Section, Royal Surrey County Hospital NHS Foundation Trust, Guildford GU2 7XX, United Kingdom
- Tom Havelock: Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Michael Bennett: Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Joy Conway: Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
5. Novosad P, Fonov V, Collins DL. Accurate and robust segmentation of neuroanatomy in T1-weighted MRI by combining spatial priors with deep convolutional neural networks. Hum Brain Mapp 2019;41:309-327. PMID: 31633863. PMCID: PMC7267949. DOI: 10.1002/hbm.24803.
Abstract
Neuroanatomical segmentation in magnetic resonance imaging (MRI) of the brain is a prerequisite for quantitative volume, thickness, and shape measurements, as well as an important intermediate step in many preprocessing pipelines. This work introduces a new highly accurate and versatile method based on 3D convolutional neural networks for the automatic segmentation of neuroanatomy in T1-weighted MRI. In combination with a deep 3D fully convolutional architecture, efficient linear registration-derived spatial priors are used to incorporate additional spatial context into the network. An aggressive data augmentation scheme using random elastic deformations is also used to regularize the networks, allowing for excellent performance even in cases where only limited labeled training data are available. Applied to hippocampus segmentation in an elderly population (mean Dice coefficient = 92.1%) and subcortical segmentation in a healthy adult population (mean Dice coefficient = 89.5%), we demonstrate new state-of-the-art accuracies and a high robustness to outliers. Further validation on a multistructure segmentation task in a scan-rescan dataset demonstrates accuracy (mean Dice coefficient = 86.6%) similar to the scan-rescan reliability of expert manual segmentations (mean Dice coefficient = 86.9%), and improved reliability compared to both expert manual segmentations and automated segmentations using FIRST. Furthermore, our method maintains a highly competitive runtime performance (e.g., requiring only 10 s for left/right hippocampal segmentation in 1 × 1 × 1 mm3 MNI stereotaxic space), orders of magnitude faster than conventional multiatlas segmentation methods.
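Random elastic deformation augmentation, as used here to regularize the networks, is commonly implemented by smoothing a random displacement field and resampling (in 2D for brevity; the paper works in 3D). The `alpha`/`sigma` values below are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=30.0, sigma=4.0, seed=None):
    """Simard-style random elastic deformation of a 2D image:
    sigma controls how spatially coherent the warp is, alpha its
    magnitude in pixels."""
    rng = np.random.default_rng(seed)
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    coords = np.array([ys + dy, xs + dx])  # where each output pixel samples from
    return map_coordinates(image, coords, order=1, mode="reflect")

img = np.arange(64.0 * 64.0).reshape(64, 64)
warped = elastic_deform(img, seed=0)
```

Applying the same sampled field to both image and label map produces anatomically plausible new training pairs from a small labeled set.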
Affiliation(s)
- Philip Novosad: McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
- Vladimir Fonov: McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
- D Louis Collins: McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
6. Ahmad S, Fan J, Dong P, Cao X, Yap PT, Shen D. Deep Learning Deformation Initialization for Rapid Groupwise Registration of Inhomogeneous Image Populations. Front Neuroinform 2019;13:34. PMID: 32760265. PMCID: PMC7373822. DOI: 10.3389/fninf.2019.00034.
Abstract
Groupwise image registration tackles biases that can arise from inappropriate template selection. It typically involves simultaneous registration of a cohort of images to a common space that is not specified a priori. Existing groupwise registration methods are computationally complex and are only effective for image populations without large anatomical variations. In this paper, we propose a deep learning framework to rapidly estimate large deformations between images and thereby significantly reduce structural variability. Specifically, we employ a multi-level graph coarsening method to agglomerate similar images into clusters, each represented by an exemplar image. We then use a deep learning framework to predict the initial deformations between images. Warping with the estimated deformations brings the images closer in the image manifold, and their alignment can be further refined using conventional groupwise registration algorithms. We evaluated the effectiveness of our method in groupwise registration of MR brain images and compared it against state-of-the-art groupwise registration methods. Experimental results indicate that deformation initialization enables groupwise registration to converge significantly faster with competitive accuracy, thereby facilitating large-scale imaging studies.
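The exemplar idea, clustering similar images and representing each cluster by one member, can be sketched with a simple medoid-plus-farthest-point scheme over per-image feature vectors. This is a generic stand-in, not the paper's multi-level graph coarsening algorithm.

```python
import numpy as np

def pairwise_dists(X):
    """All pairwise Euclidean distances between feature vectors (rows of X)."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def select_exemplars(X, k):
    """Pick k exemplar images: start at the global medoid, then repeatedly
    add the image most dissimilar to the current exemplar set; finally
    assign every image to its nearest exemplar."""
    D = pairwise_dists(X)
    exemplars = [int(np.argmin(D.sum(axis=1)))]      # global medoid
    while len(exemplars) < k:
        d_to_set = D[:, exemplars].min(axis=1)
        exemplars.append(int(np.argmax(d_to_set)))   # farthest from the set
    labels = np.argmin(D[:, exemplars], axis=1)      # cluster assignment
    return np.array(exemplars), labels

rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X = np.vstack([c + 0.1 * rng.normal(size=(5, 2)) for c in centers])  # 3 "populations"
exemplars, labels = select_exemplars(X, k=3)
```

Registering within clusters and then between exemplars keeps every individual deformation small, which is what makes the subsequent learning-based initialization tractable.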
Affiliation(s)
- Sahar Ahmad: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States
- Jingfan Fan: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States
- Pei Dong: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States
- Xiaohuan Cao: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States; School of Automation, Northwestern Polytechnical University, Xi'an, China
- Pew-Thian Yap: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
7. Tang S, Cong W, Yang J, Fu T, Song H, Ai D, Wang Y. Local statistical deformation models for deformable image registration. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.03.039.
8. Wilms M, Handels H, Ehrhardt J. Multi-resolution multi-object statistical shape models based on the locality assumption. Med Image Anal 2017;38:17-29. DOI: 10.1016/j.media.2017.02.003.
9. Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017;39:29-43. PMID: 28431275. DOI: 10.1016/j.media.2017.04.001.
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying volume-of-interest overlap under the PI-RADS parcellation standard, together with tests on clinical landmark data, demonstrate that our SDM-based registration, with a median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
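The mechanism behind the robustness claim, constraining the estimated deformation to a low-dimensional PCA subspace learned from training deformations, can be sketched directly. The synthetic data and mode count below are illustrative assumptions, not the authors' clinical SDM.

```python
import numpy as np

def build_sdm(train_fields, n_modes):
    """PCA statistical deformation model from flattened training
    displacement fields (one per row): mean field + principal modes."""
    mean = train_fields.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_fields - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def constrain(field, mean, modes):
    """Project an observed deformation onto the low-dimensional model
    space; off-model components (e.g. from segmentation errors) are
    discarded."""
    coeffs = modes @ (field - mean)
    return mean + modes.T @ coeffs

rng = np.random.default_rng(4)
basis = np.linalg.qr(rng.normal(size=(200, 2)))[0].T   # 2 true modes of variation
train = rng.normal(size=(20, 2)) @ basis               # 20 training deformations
mean, modes = build_sdm(train, n_modes=2)

true_field = np.array([1.0, -2.0]) @ basis             # unseen deformation
observed = true_field + 0.5 * rng.normal(size=200)     # corrupted estimate
constrained = constrain(observed, mean, modes)
```

Because only the noise component lying inside the 2-mode subspace survives the projection, the constrained field is much closer to the true deformation than the raw observation.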
Affiliation(s)
- Lawrence H Staib: Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA
- Cayce B Nawaf: Department of Urology, Yale University, New Haven, Connecticut, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA
11. Onofrey JA, Staib LH, Papademetris X. Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients. Neuroimage Clin 2015;10:291-301. PMID: 26900569. PMCID: PMC4724039. DOI: 10.1016/j.nicl.2015.12.001.
Abstract
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods.
Affiliation(s)
- John A. Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Lawrence H. Staib: Department of Radiology & Biomedical Imaging; Department of Electrical Engineering; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging; Department of Biomedical Engineering, Yale University, New Haven, CT, USA