1
Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A, Prince JL, Du Y. A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond. Med Image Anal 2025;100:103385. PMID: 39612808; PMCID: PMC11730935; DOI: 10.1016/j.media.2024.103385.
Abstract
Deep learning technologies have dramatically reshaped the field of medical image registration over the past decade. The initial developments, such as regression-based and U-Net-based networks, established the foundation for deep learning in image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, network architectures, and uncertainty estimation. These advancements have not only enriched the field of image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
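As background for the core concept this survey opens with, deep-learning registration is typically trained with a similarity term on the warped moving image plus a regularization term on the deformation. A minimal 1-D pure-Python sketch of that objective (illustrative only; the function names and the MSE/first-difference choices are assumptions, not anything specific to the survey):

```python
# Sketch of the canonical unsupervised registration objective:
#   loss = similarity(fixed, warp(moving, u)) + lambda * smoothness(u)
# 1-D toy with linear interpolation; illustrative, not any paper's code.

def warp_1d(moving, u):
    """Warp a 1-D signal by a displacement field u via linear interpolation."""
    n = len(moving)
    out = []
    for i in range(n):
        x = i + u[i]                    # sampling location in the moving image
        x = min(max(x, 0.0), n - 1.0)   # clamp to the signal support
        lo = int(x)
        hi = min(lo + 1, n - 1)
        w = x - lo
        out.append((1 - w) * moving[lo] + w * moving[hi])
    return out

def registration_loss(fixed, moving, u, lam=0.1):
    """Mean-squared-error similarity plus first-difference smoothness."""
    warped = warp_1d(moving, u)
    mse = sum((f - w) ** 2 for f, w in zip(fixed, warped)) / len(fixed)
    smooth = sum((u[i + 1] - u[i]) ** 2 for i in range(len(u) - 1))
    return mse + lam * smooth
```

With a perfect displacement field the loss vanishes: warping `[0, 1, 0, 0]` by a constant displacement of -1 reproduces `[0, 0, 1, 0]` exactly.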
Affiliation(s)
- Junyu Chen: Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Yihao Liu: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shuwen Wei: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Zhangxing Bian: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shalini Subramanian: Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Aaron Carass: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Yong Du: Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
2
Liu H, McKenzie E, Xu D, Xu Q, Chin RK, Ruan D, Sheng K. MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Med Image Anal 2025;99:103351. PMID: 39388843; PMCID: PMC11817760; DOI: 10.1016/j.media.2024.103351.
Abstract
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR remains challenging in heterogeneous tissue regions with large deformation. In fact, several state-of-the-art DL-DIR methods fail to capture large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results suggest that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft-tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework to anatomically guide DL-DIR by leveraging an explicit multiresolution strategy and inhomogeneous deformation constraints between the bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and residual fine deformation, and it accommodates both inter- and intra-subject registration. Our results show that the MUSA framework consistently improves registration accuracy and, more importantly, the plausibility of deformation for various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
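The framework above decomposes a complex deformation into a bulk posture change plus a residual fine deformation. A 1-D toy sketch of that decompose-then-compose idea (the linear bulk model, the nearest-neighbour sampling, and all names are illustrative assumptions, not the MUSA implementation):

```python
# Toy sketch: total deformation = residual fine field composed with a bulk
# (affine, posture-level) displacement. 1-D; illustrative assumptions only.

def bulk_displacement(n, scale, shift):
    """Affine (scale + shift) motion expressed as a displacement field."""
    return [scale * x + shift - x for x in range(n)]

def compose(u_bulk, u_res):
    """Total displacement of phi_res o phi_bulk:
    u(x) = u_bulk(x) + u_res(x + u_bulk(x)), residual sampled (here,
    nearest-neighbour) at the bulk-moved point."""
    n = len(u_bulk)
    total = []
    for x in range(n):
        y = x + u_bulk[x]
        j = min(max(int(round(y)), 0), n - 1)
        total.append(u_bulk[x] + u_res[j])
    return total
```

With a zero residual the composition reduces to the bulk motion alone, which is the sanity check below.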
Affiliation(s)
- Hengjie Liu: Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Elizabeth McKenzie: Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Di Xu: UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Qifan Xu: UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Robert K Chin: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Dan Ruan: Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ke Sheng: UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
3
Zhu Z, Li Q, Wei Y, Song R. Hierarchical multi-level dynamic hyperparameter deformable image registration with convolutional neural network. Phys Med Biol 2024;69:175007. PMID: 39053510; DOI: 10.1088/1361-6560/ad67a6.
Abstract
Objective. To enable the registration network to be trained only once, allowing fast regularization-hyperparameter selection during the inference phase, and to improve registration accuracy and deformation field regularity. Approach. Hyperparameter tuning is an essential process for deep learning deformable image registration (DLDIR). Most DLDIR methods perform a large number of independent experiments to select appropriate regularization hyperparameters, which is time- and resource-consuming. To address this issue, we propose a novel dynamic hyperparameter block, which comprises a distributed mapping network, dynamic convolution, an attention feature extraction layer, and an instance normalization layer. The dynamic hyperparameter block encodes the input feature vectors and regularization hyperparameters into learnable feature variables and dynamic convolution parameters, which change the feature statistics of the high-dimensional layer feature variables. In addition, the proposed method replaces the single-level residual blocks in LapIRN with a hierarchical multi-level architecture for the dynamic hyperparameter block in order to improve registration performance. Main results. On the OASIS dataset, the proposed method reduced the percentage of voxels with |Jϕ| ⩽ 0 by 28.01% and 9.78% and improved the Dice similarity coefficient by 1.17% and 1.17%, compared with LapIRN and CIR, respectively. On the DIR-Lab dataset, the proposed method reduced the percentage of voxels with |Jϕ| ⩽ 0 by 10.00% and 5.70% and reduced the target registration error by 10.84% and 10.05%, compared with LapIRN and CIR, respectively. Significance. The proposed method can quickly produce the registration deformation field corresponding to an arbitrary hyperparameter value during the inference phase. Extensive experiments demonstrate that the proposed method reduces training time compared to DLDIR with fixed regularization hyperparameters, while outperforming state-of-the-art registration methods in registration accuracy and deformation smoothness on the brain dataset OASIS and the lung dataset DIR-Lab.
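The |Jϕ| ⩽ 0 metric reported above counts locations where the Jacobian determinant of the deformation ϕ(x) = x + u(x) is non-positive, i.e., where the mapping folds. A small pure-Python sketch for a 2-D displacement field using forward finite differences (illustrative; not the paper's code, which operates on 3-D fields):

```python
# Fraction of pixels with non-positive Jacobian determinant (folding) for a
# 2-D displacement field given as two grids (uy = row displacement,
# ux = column displacement). Forward differences; illustrative sketch.

def folding_fraction(uy, ux):
    h, w = len(uy), len(uy[0])
    bad = 0
    total = 0
    for i in range(h - 1):
        for j in range(w - 1):
            # Jacobian of phi = identity + u, by forward differences
            dyy = 1.0 + (uy[i + 1][j] - uy[i][j])
            dyx = uy[i][j + 1] - uy[i][j]
            dxy = ux[i + 1][j] - ux[i][j]
            dxx = 1.0 + (ux[i][j + 1] - ux[i][j])
            det = dyy * dxx - dyx * dxy
            bad += det <= 0
            total += 1
    return bad / total
```

A zero displacement field gives det = 1 everywhere (no folding), while a field that reverses row order folds every pixel.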
Affiliation(s)
- Zhenyu Zhu: School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China
- Qianqian Li: School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, People's Republic of China
- Ying Wei: School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China; Shandong Research Institute of Industrial Technology, Jinan, People's Republic of China
- Rui Song: School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China; Shandong Research Institute of Industrial Technology, Jinan, People's Republic of China
4
Lorenzo Polo A, Nix M, Thompson C, O'Hara C, Entwisle J, Murray L, Appelt A, Weistrand O, Svensson S. Improving hybrid image and structure-based deformable image registration for large internal deformations. Phys Med Biol 2024;69:095011. PMID: 38518382; DOI: 10.1088/1361-6560/ad3723.
Abstract
Objective. Deformable image registration (DIR) is a widely used technique in radiotherapy. Complex deformations resulting from large anatomical changes are a regular challenge. DIR algorithms generally seek a balance between capturing large deformations and preserving a smooth deformation vector field (DVF). We propose a novel structure-based term that can enhance registration efficacy while ensuring a smooth DVF. Approach. The proposed novel similarity metric for controlling structures was introduced as a new term into a commercially available algorithm. Its performance was compared to the original algorithm using a dataset of 46 patients who received pelvic re-irradiation, many of whom exhibited complex deformations. Main results. The mean Dice similarity coefficient (DSC) under the improved algorithm was 0.96, 0.94, 0.76, and 0.91 for bladder, rectum, colon, and bone, respectively, compared to 0.69, 0.89, 0.62, and 0.88 for the original algorithm. The improvement was more pronounced for complex deformations. Significance. With this work, we have demonstrated that the proposed term is able to improve registration accuracy for complex cases while maintaining realistic deformations.
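The Dice similarity coefficient (DSC) used for evaluation above can be sketched in a few lines of pure Python; real evaluation pipelines operate on 3-D volumes, but the formula is the same:

```python
# Dice similarity coefficient between two binary masks, given as flat
# lists of 0/1 labels: DSC = 2|A ∩ B| / (|A| + |B|). Illustrative sketch.

def dice(a, b):
    inter = sum(x * y for x, y in zip(a, b))  # overlap count
    size = sum(a) + sum(b)                    # total foreground in both masks
    return 2.0 * inter / size if size else 1.0
```

Identical masks score 1.0, disjoint masks 0.0, and partial overlap falls in between (e.g., masks of sizes 4 and 2 with overlap 2 give 2·2/6 = 2/3).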
Affiliation(s)
- M Nix: Leeds Cancer Centre, Department of Medical Physics, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- C Thompson: Leeds Cancer Centre, Department of Medical Physics, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- C O'Hara: Leeds Cancer Centre, Department of Medical Physics, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- J Entwisle: Leeds Cancer Centre, Department of Medical Physics, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
- L Murray: Leeds Cancer Centre, Department of Clinical Oncology, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom; Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, United Kingdom
- A Appelt: Leeds Cancer Centre, Department of Medical Physics, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom; Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, United Kingdom
- O Weistrand: RaySearch Laboratories, SE-104 30 Stockholm, Sweden
- S Svensson: RaySearch Laboratories, SE-104 30 Stockholm, Sweden
5
Wu Y, Wang Z, Chu Y, Peng R, Peng H, Yang H, Guo K, Zhang J. Current Research Status of Respiratory Motion for Thorax and Abdominal Treatment: A Systematic Review. Biomimetics (Basel) 2024;9:170. PMID: 38534855; DOI: 10.3390/biomimetics9030170.
Abstract
Malignant tumors have become a serious public health problem, and thoracic and abdominal cancers account for the largest proportion of cases. Early diagnosis and treatment can effectively improve patient survival. However, respiratory motion in the chest and abdomen introduces uncertainty in the shape, volume, and location of a tumor, complicating treatment. Compensation for respiratory motion is therefore very important in clinical treatment. The purpose of this review was to discuss the research and development of respiratory motion monitoring and prediction in thoracic and abdominal treatment and to introduce the current research status. The integration of modern respiratory motion compensation technology with advanced sensor detection, medical-image-guided therapy, and artificial intelligence is discussed and analyzed. Future research on intraoperative thoracic and abdominal respiratory motion compensation should aim to be non-invasive, non-contact, low-dose, and intelligent. The complexity of the surgical environment, the accuracy limits of existing image guidance devices, and the latency of data transmission remain technical challenges.
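As a toy illustration of the latency problem the review raises, a predictor can estimate the respiratory signal a few samples ahead to compensate for data-transmission delay. The linear extrapolation below is only a sketch and is far simpler than the models the review surveys; all names are illustrative:

```python
# Toy latency compensation: predict the breathing-trace value `lead` samples
# ahead by linear extrapolation of the last two samples. Illustrative only.

def predict_ahead(trace, lead):
    """x[t + lead] ~= x[t] + lead * (x[t] - x[t-1])."""
    if len(trace) < 2:
        return trace[-1]          # not enough history: hold the last value
    slope = trace[-1] - trace[-2]
    return trace[-1] + lead * slope
```

On a locally linear trace the prediction is exact; on real, noisy, quasi-periodic breathing signals this naive extrapolator overshoots at the turning points, which is exactly why the surveyed methods use richer models.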
Affiliation(s)
- Yuwen Wu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Zhisen Wang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yuyi Chu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Renyuan Peng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Haoran Peng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Hongbo Yang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Kai Guo: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Juzhong Zhang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
6
Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023;165:107434. PMID: 37696177; DOI: 10.1016/j.compbiomed.2023.107434.
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Because the lungs are soft and fairly passive organs, they are influenced by respiration and heartbeat, resulting in discontinuous lung motion and large deformation of anatomic features. This poses great challenges for accurate registration of lung images and its applications. The recent application of deep learning (DL) methods to medical image registration has brought promising results. However, a versatile registration framework has not yet emerged due to the diverse challenges of registering different regions of interest (ROIs). DL-based image registration methods developed for other ROIs cannot achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods is presented. The DL-based methods are classified according to supervision type: fully supervised, weakly supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as are the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. In addition, publicly available datasets for lung image registration are summarized. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
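One evaluation metric widely reported in the lung-registration literature this review analyzes is target registration error (TRE), the mean distance between corresponding anatomical landmarks after registration. A minimal sketch:

```python
# Target registration error: mean Euclidean distance between corresponding
# landmark pairs (fixed-image landmarks vs. warped moving-image landmarks).

import math

def tre(landmarks_fixed, landmarks_warped):
    dists = [math.dist(p, q) for p, q in zip(landmarks_fixed, landmarks_warped)]
    return sum(dists) / len(dists)
```

Perfectly aligned landmarks give a TRE of 0; a single landmark displaced by the 3-4-5 triangle gives exactly 5.0.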
Affiliation(s)
- Hanguang Xiao: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Xin Jiang: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Qingling Xia: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Kai Chen: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Huanqi Li: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Li Long: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Ke Peng: College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
7
Ruthven M, Miquel ME, King AP. A segmentation-informed deep learning framework to register dynamic two-dimensional magnetic resonance images of the vocal tract during speech. Biomed Signal Process Control 2023;80:104290. PMID: 36743699; PMCID: PMC9746295; DOI: 10.1016/j.bspc.2022.104290.
Abstract
Objective. Dynamic magnetic resonance (MR) imaging enables visualisation of articulators during speech. There is growing interest in quantifying articulator motion in two-dimensional MR images of the vocal tract to better understand speech production and potentially inform patient management decisions. Image registration is an established way to achieve this quantification. Recently, segmentation-informed deformable registration frameworks have been developed and have achieved state-of-the-art accuracy. This work aims to adapt such a framework and optimise it for estimating displacement fields between dynamic two-dimensional MR images of the vocal tract during speech. Methods. A deep-learning-based registration framework was developed and compared with current state-of-the-art registration methods and frameworks (two traditional methods and three deep-learning-based frameworks, two of which are segmentation informed). Accuracy was evaluated using the Dice coefficient (DSC), average surface distance (ASD), and a metric based on velopharyngeal closure, which evaluated whether the fields captured a clinically relevant and quantifiable aspect of articulator motion. Results. The segmentation-informed frameworks achieved higher DSCs, lower ASDs, and captured more velopharyngeal closures than the traditional methods and the framework that was not segmentation informed. All segmentation-informed frameworks achieved similar DSCs and ASDs; however, the proposed framework captured the most velopharyngeal closures. Conclusions. A framework was successfully developed and found to estimate articulator motion more accurately than five current state-of-the-art methods and frameworks. Significance. This is the first deep-learning-based framework specifically for registering dynamic two-dimensional MR images of the vocal tract during speech.
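The average surface distance (ASD) used for evaluation above can be sketched for small 2-D binary masks as the symmetrised mean nearest-boundary distance (illustrative pure Python; clinical evaluation uses 3-D surfaces and faster distance transforms):

```python
# Average surface distance between two 2-D binary masks: for each boundary
# pixel of one mask, the distance to the nearest boundary pixel of the other,
# averaged and symmetrised. Brute-force illustrative sketch.

import math

def boundary(mask):
    """Foreground pixels with at least one background (or out-of-image)
    4-neighbour."""
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and any(
                i2 < 0 or i2 >= h or j2 < 0 or j2 >= w or not mask[i2][j2]
                for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            ):
                pts.append((i, j))
    return pts

def asd(mask_a, mask_b):
    pa, pb = boundary(mask_a), boundary(mask_b)
    def mean_min(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (mean_min(pa, pb) + mean_min(pb, pa))
```

Identical masks give an ASD of 0; two single-pixel masks one column apart give exactly 1.0.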
Affiliation(s)
- Matthieu Ruthven: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom (corresponding author)
- Marc E. Miquel: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Digital Environment Research Institute (DERI), Empire House, 67-75 New Road, Queen Mary University of London, London E1 1HH, United Kingdom; Advanced Cardiovascular Imaging, Barts NIHR BRC, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P. King: School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
8
Chen J, Frey EC, He Y, Segars WP, Li Y, Du Y. TransMorph: Transformer for unsupervised medical image registration. Med Image Anal 2022;82:102615. PMID: 36156420; PMCID: PMC9999483; DOI: 10.1016/j.media.2022.102615.
Abstract
In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performance of ConvNets may be limited by a lack of explicit consideration of the long-range spatial relationships in an image. Recently, Vision Transformer architectures have been proposed to address the shortcomings of ConvNets and have produced state-of-the-art results in many medical imaging applications. Transformers may be a strong candidate for image registration because their substantially larger receptive field enables a more precise comprehension of the spatial correspondence between moving and fixed images. Here, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. This paper also presents diffeomorphic and Bayesian variants of TransMorph: the diffeomorphic variants ensure topology-preserving deformations, and the Bayesian variant produces a well-calibrated registration uncertainty estimate. We extensively validated the proposed models using 3D medical images from three applications: inter-patient brain MRI registration, atlas-to-patient brain MRI registration, and phantom-to-CT registration. The proposed models were evaluated against a variety of existing registration methods and Transformer architectures. Qualitative and quantitative results demonstrate that the proposed Transformer-based model leads to a substantial performance improvement over the baseline methods, confirming the effectiveness of Transformers for medical image registration.
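The diffeomorphic variants mentioned above typically obtain the deformation by exponentiating a stationary velocity field with scaling and squaring. A 1-D sketch of that integration (illustrative; the actual variants operate on 3-D fields on the GPU, and the interpolation details are assumptions):

```python
# Scaling and squaring: phi = exp(v) for a stationary velocity field v,
# computed by scaling v down by 2**steps and self-composing `steps` times.
# 1-D displacement fields with linear interpolation; illustrative sketch.

def compose_disp(u, v):
    """Displacement of phi_u o phi_v: v(x) + u(x + v(x)), with u sampled by
    linear interpolation and clamped to the domain."""
    n = len(u)
    out = []
    for x in range(n):
        y = min(max(x + v[x], 0.0), n - 1.0)
        lo = int(y)
        hi = min(lo + 1, n - 1)
        w = y - lo
        out.append(v[x] + (1 - w) * u[lo] + w * u[hi])
    return out

def exp_velocity(vel, steps=4):
    u = [v / (2 ** steps) for v in vel]  # scaling
    for _ in range(steps):
        u = compose_disp(u, u)           # squaring
    return u
```

A zero velocity field integrates to the identity map (zero displacement), which is the check below; small smooth velocities integrate to fold-free deformations, which is the point of the construction.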
Affiliation(s)
- Junyu Chen: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Eric C Frey: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yufan He: NVIDIA Corporation, Bethesda, MD, USA
- William P Segars: Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Ye Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yong Du: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
9
He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022;148:105876. PMID: 35863247; DOI: 10.1016/j.compbiomed.2022.105876.
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and the different motion patterns of multiple organs and tissues during breathing. To combat this, we devise a hierarchical anatomical structure-aware registration framework. It affords a coordination scheme for constraining a general free-form deformation (FFD) during thoracic CT registration. The key is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation-ability-aware dissimilarity metric is proposed for complex joint deformations comprising large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the remaining regions. Furthermore, a motion-pattern-aware regularization is devised to handle different motion patterns, which include sliding motion along the lung surface, almost no displacement of the spine, and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via elaborate Gaussian pyramids. Extensive experiments and comprehensive evaluations were executed on the 4D-CT DIR and 3D DIR COPD datasets. They confirm that the newly proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations while guaranteeing overall accuracy. Additionally, in contrast to currently popular learning-based methods, which typically require dozens of hours or more of pre-training with powerful graphics cards, our method takes an average of only 63 s to register a case with an ordinary RTX 2080 SUPER graphics card, making it attractive in practice. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
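The coarse-to-fine Gaussian-pyramid strategy described above can be sketched in 1-D. Only the pyramid construction is shown, and the [1, 2, 1]/4 kernel is a common small Gaussian approximation rather than this paper's exact filter:

```python
# Coarse-to-fine multiresolution sketch: smooth with a [1, 2, 1]/4 kernel
# (edge-replicated) and decimate by 2 at each level; registration would then
# proceed from the coarsest level up. Illustrative assumptions throughout.

def smooth_and_halve(signal):
    padded = [signal[0]] + list(signal) + [signal[-1]]
    blurred = [
        0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
        for i in range(1, len(padded) - 1)
    ]
    return blurred[::2]

def pyramid(signal, levels):
    """Finest level first; each following level is smoothed and half length."""
    out = [list(signal)]
    for _ in range(levels - 1):
        out.append(smooth_and_halve(out[-1]))
    return out
```

A constant signal stays constant at every level, and each level has roughly half the samples of the previous one.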
Affiliation(s)
- Yuanbo He: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Aoyu Wang: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Shuai Li: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Aimin Hao: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
10
Yang J, Wu Y, Zhang D, Cui W, Yue X, Du S, Zhang H. LDVoxelMorph: A precise loss function and cascaded architecture for unsupervised diffeomorphic large displacement registration. Med Phys 2022;49:2427-2441. PMID: 35106787; DOI: 10.1002/mp.15515.
Abstract
PURPOSE Traditional learning-based non-rigid registration methods for medical images are trained with an invariant smoothness regularization parameter, which cannot satisfy registration accuracy and the diffeomorphic property simultaneously. The diffeomorphic property reflects the credibility of the registration results. METHOD To improve the diffeomorphic property in 3D medical image registration, we propose a diffeomorphic cascaded network based on a compressed loss, named LDVoxelMorph. The proposed network has several constituent U-Nets and is trained with deep supervision, using a different spatial smoothness regularization parameter in each constituent U-Net. This cascade-variant smoothness regularization maintains the diffeomorphic property in top cascades with large displacement and achieves precise registration in bottom cascades. In addition, we develop the compressed loss (CL) as a penalty on the velocity field, which accurately limits velocity fields that would cause the deformation field to overlap after integration. RESULTS In our registration experiments, the Dice scores of our method were 0.892 ± 0.040 on the liver CT dataset SLIVER, 0.848 ± 0.044 on the liver CT dataset LiTS, and 0.689 ± 0.014 on the brain MRI dataset LPBA, and the numbers of overlapping voxels in the deformation field were 325, 159, and 0, respectively. An ablation study shows that the compressed loss improves the diffeomorphic property more effectively. CONCLUSION Experimental results show that our method can achieve higher registration accuracy, assessed by Dice scores and overlapping voxels, while maintaining the diffeomorphic property for large deformations.
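The cascade-variant regularization described above trains each constituent U-Net with its own smoothness weight under deep supervision. A schematic of the resulting total loss (names and weights are illustrative assumptions, not the paper's values):

```python
# Deep-supervised cascade loss: each stage k contributes its own similarity
# term plus its own smoothness term weighted by lambda_k, typically larger
# for top (coarse) cascades than bottom (fine) ones. Illustrative sketch.

def cascade_loss(stage_sims, stage_smooths, lambdas):
    """sum_k (sim_k + lambda_k * smooth_k) over all cascades."""
    assert len(stage_sims) == len(stage_smooths) == len(lambdas)
    return sum(s + l * r for s, r, l in zip(stage_sims, stage_smooths, lambdas))
```

For example, two stages with similarity terms 1 and 1, smoothness terms 2 and 4, and per-stage weights 0.5 and 0.25 give a total of (1 + 1) + (1 + 1) = 4.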
Collapse
Affiliation(s)
- Jing Yang
- School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
- Institute of Artificial Intelligence and Robotics, College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, 710049, China
- Yinghao Wu
- School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
- Dong Zhang
- School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
- Institute of Artificial Intelligence and Robotics, College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, 710049, China
- Wenting Cui
- Institute of Artificial Intelligence and Robotics, College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiaoli Yue
- School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
- Shaoyi Du
- Institute of Artificial Intelligence and Robotics, College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, 710049, China
11
Chen J, Li Y, Du Y, Frey EC. Generating anthropomorphic phantoms using fully unsupervised deformable image registration with convolutional neural networks. Med Phys 2020; 47:6366-6380. [PMID: 33078422 PMCID: PMC10026844 DOI: 10.1002/mp.14545]
Abstract
PURPOSE Computerized phantoms have been widely used in nuclear medicine imaging for imaging system optimization and validation. Although existing computerized phantoms can model anatomical variations through organ and phantom scaling, they do not provide a way to fully reproduce the anatomical variations and details seen in humans. In this work, we present a novel registration-based method for creating highly anatomically detailed computerized phantoms, and we experimentally show substantially improved image similarity of the generated phantom to a patient image. METHODS We propose a deep-learning-based unsupervised registration method that generates a highly anatomically detailed computerized phantom by warping an XCAT phantom to a patient computed tomography (CT) scan. We implemented and evaluated the proposed method using the NURBS-based XCAT phantom and a publicly available low-dose CT dataset from TCIA. A rigorous trade-off analysis between image similarity and deformation regularization was conducted to select the loss function and regularization term, and a novel SSIM-based unsupervised objective function was proposed. Finally, ablation studies compared the proposed method (using the optimal regularization and loss function) against current state-of-the-art unsupervised registration methods. RESULTS The proposed method outperformed state-of-the-art registration methods, such as SyN and VoxelMorph, by more than 8% as measured by SSIM and by less than 30% as measured by MSE. The phantom generated by the proposed method was highly detailed and almost identical in appearance to the patient image. CONCLUSIONS A deep-learning-based unsupervised registration method was developed to create anthropomorphic phantoms with anatomical labels that can serve as the basis for modeling organ properties. The resulting anthropomorphic phantom is highly realistic; combined with realistic simulations of the image formation process, the generated phantoms could serve in many applications of medical imaging research.
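The SSIM-based objective mentioned above can be illustrated with a simplified SSIM over global image statistics. This sketch is not the paper's exact loss (practical SSIM losses use local Gaussian windows), only the standard SSIM formula applied once to the whole image:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed from global image statistics (one big window).
    Practical SSIM losses use local windows; this is only a sketch."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(ssim_global(img, img))  # identical images give SSIM = 1.0 (up to floating point)
```

Maximizing SSIM (or minimizing 1 - SSIM) rewards agreement in local luminance, contrast, and structure rather than raw intensity differences, which is why it can behave differently from MSE as a registration objective.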
Affiliation(s)
- Junyu Chen
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 21287, USA
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, 21287, USA
- Ye Li
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 21287, USA
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, 21287, USA
- Yong Du
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, 21287, USA
- Eric C Frey
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, 21287, USA
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, 21287, USA
12
Zachiu C, Denis de Senneville B, Willigenburg T, Voort van Zyp JRN, de Boer JCJ, Raaymakers BW, Ries M. Anatomically-adaptive multi-modal image registration for image-guided external-beam radiotherapy. Phys Med Biol 2020; 65:215028. [DOI: 10.1088/1361-6560/abad7d]
13
Fu Y, Ippolito JE, Ludwig DR, Nizamuddin R, Li HH, Yang D. Technical Note: Automatic segmentation of CT images for ventral body composition analysis. Med Phys 2020; 47:5723-5730. [PMID: 32969050 DOI: 10.1002/mp.14465]
Abstract
PURPOSE Body composition is known to be associated with many diseases, including diabetes, cancers, and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments related to body composition analysis: subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments (the ventral cavity, lung, and bones) were also segmented during the process to assist segmentation of the major compartments. METHODS A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, and then to further partition the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. Segmenting the ventral cavity first is important because it allows accurate separation of compartments with similar Hounsfield units (HU) inside and outside the cavity. RESULTS The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. The Dice score (mean ± standard deviation) for ventral cavity segmentation was 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets. CONCLUSION A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
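The role of the cavity mask in separating tissues of similar HU can be sketched as below. The HU ranges are approximate literature values (not the paper's exact thresholds, which use hysteresis and morphology), and the cavity mask, produced by the CNN in the original method, is assumed to be given:

```python
import numpy as np

# Illustrative HU ranges (approximate, not the paper's thresholds):
# adipose tissue ~[-190, -30] HU, muscle ~(-30, 150] HU, bone > 150 HU.
def label_compartments(ct, cavity_mask):
    """Coarse tissue labeling by HU range; the ventral cavity mask separates
    VAT (fat inside the cavity) from SAT (fat outside it)."""
    fat = (ct >= -190) & (ct <= -30)
    labels = np.zeros(ct.shape, dtype=np.uint8)
    labels[fat & cavity_mask] = 1          # VAT
    labels[fat & ~cavity_mask] = 2         # SAT
    labels[(ct > -30) & (ct <= 150)] = 3   # muscle / soft tissue
    labels[ct > 150] = 4                   # bone
    return labels

ct = np.array([[-100, -100, 40], [-100, 300, 40]])             # toy HU image
cavity = np.array([[False, True, True], [False, True, True]])  # toy cavity mask
print(label_compartments(ct, cavity))
# [[2 1 3]
#  [2 4 3]]
```

Without the cavity mask, the two fat voxels at -100 HU would be indistinguishable, which is exactly the ambiguity the cavity-first ordering resolves.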
Affiliation(s)
- Yabo Fu
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
- Joseph E Ippolito
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
- Daniel R Ludwig
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
- Rehan Nizamuddin
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
- Harold H Li
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
- Deshan Yang
- Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8131, St Louis, MO, 63110, USA
14
Niethammer M, Kwitt R, Vialard FX. Metric Learning for Image Registration. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019:8455-8464. [PMID: 32523327 DOI: 10.1109/cvpr.2019.00866]
Abstract
Image registration is a key technique in medical image analysis for estimating deformations between image pairs. A good deformation model is important for high-quality estimates. However, most existing approaches use ad hoc deformation models chosen for mathematical convenience rather than to capture observed data variation. Recent deep learning approaches learn deformation models directly from data, but they provide limited control over the spatial regularity of transformations. Instead of learning the entire registration approach, we learn a spatially-adaptive regularizer within a registration model. This allows controlling the desired level of regularity and preserving the structural properties of a registration model; for example, diffeomorphic transformations can be attained. Our approach is a radical departure from existing deep learning approaches to image registration: we embed a deep learning model in an optimization-based registration algorithm to parameterize and data-adapt the registration model itself. Source code is publicly available at https://github.com/uncbiag/registration.
15
Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys 2020; 47:1763-1774. [PMID: 32017141 PMCID: PMC7165051 DOI: 10.1002/mp.14065]
Abstract
PURPOSE To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to predict the deformation vector field (DVF) in a few quick forward predictions. We have developed an unsupervised deep learning method for 4D-CT lung DIR with excellent registration accuracy, robustness, and computational speed. METHODS A fast and accurate 4D-CT lung DIR method, named LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image, while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator: the generator was trained to directly predict the DVF used to deform the moving image, and the discriminator was trained to distinguish the deformed images from the original images. CoarseNet was trained first to deform the moving images; the deformed images were then used to train FineNet. To increase the registration accuracy of LungRegNet, we generated vessel-enhanced images by computing pulmonary vasculature probability maps prior to the network prediction. RESULTS We performed fivefold cross-validation on ten 4D-CT datasets from our department. For comparison with other methods, we also tested our method on the ten separate DIR-Lab datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieves better registration accuracy in terms of TRE than other deep learning-based methods reported in the literature on the DIR-Lab datasets, and registration accuracy comparable to conventional DIR methods, with TRE smaller than 2 mm. The integration of both the discriminator and the pulmonary vessel enhancement into the network was crucial for obtaining high registration accuracy in 4D-CT lung DIR. The mean ± standard deviation of TRE was 1.00 ± 0.53 mm on our datasets and 1.59 ± 1.58 mm on the DIR-Lab datasets. CONCLUSIONS An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet outperformed its deep learning-based peers and achieved excellent registration accuracy in terms of TRE.
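The TRE figures quoted above are mean Euclidean distances over corresponding landmark pairs. A minimal sketch (illustrative, not the authors' evaluation code; voxel spacing is a hypothetical parameter here):

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean and std of the Euclidean distance (in mm, given voxel spacing)
    between fixed landmarks and the corresponding warped moving landmarks."""
    diff = (np.asarray(fixed_pts, float) - np.asarray(warped_pts, float)) * np.asarray(spacing)
    d = np.linalg.norm(diff, axis=1)
    return d.mean(), d.std()

# Toy example with two landmark pairs, errors of 1 mm and 2 mm
fixed = np.array([[10, 10, 10], [20, 20, 20]], float)
warped = np.array([[11, 10, 10], [20, 22, 20]], float)
mean_tre, std_tre = target_registration_error(fixed, warped)
print(mean_tre)  # (1.0 + 2.0) / 2 = 1.5
```

On DIR-Lab this computation is repeated over the 300 expert landmark pairs per case and the per-case values are pooled into the reported mean ± standard deviation.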
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
16
Gong L, Duan L, Dai Y, He Q, Zuo S, Fu T, Yang X, Zheng J. Locally Adaptive Total p-Variation Regularization for Non-Rigid Image Registration With Sliding Motion. IEEE Trans Biomed Eng 2020; 67:2560-2571. [PMID: 31940514 DOI: 10.1109/tbme.2020.2964695]
Abstract
Because complicated thoracic movements contain both sliding motion at the lung surfaces and smooth motion within individual organs, respiratory motion estimation remains an intrinsically challenging task. In this paper, we propose a novel regularization term called locally adaptive total p-variation (LaTpV) and embed it into a parametric registration framework to accurately recover lung motion. LaTpV originates from a modified Lp-norm constraint (1 < p < 2), where a prior distribution of p, modeled by a Dirac-shaped function, assigns different values to individual voxels. LaTpV adaptively balances the smoothness and discontinuity of the displacement field to encourage an expected sliding interface. Additionally, we analytically derive the gradient of the cost function with respect to the transformation parameters. To validate the performance of LaTpV, we test it not only on two mono-modal databases, comprising synthetic images and pulmonary computed tomography (CT) images, but also, for the first time, on a more difficult thoracic CT and positron emission tomography (PET) dataset. For all experiments, both the quantitative and qualitative results indicate that LaTpV significantly surpasses existing regularizers such as bending energy and parametric total variation. The proposed LaTpV-based registration scheme may therefore be better suited to sliding motion correction and holds promise for clinical applications such as the diagnosis of pleural mesothelioma and the adjustment of radiotherapy plans.
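The total p-variation idea can be illustrated for a single displacement component: as p decreases from 2 toward 1, a step-like (sliding) displacement field incurs a smaller penalty. A numpy sketch assuming a scalar p (LaTpV itself uses a voxel-wise p map drawn from a Dirac-shaped prior):

```python
import numpy as np

def total_p_variation(u, p):
    """Sum over pixels of |grad u|^p for one displacement component u.
    p = 2 gives a diffusion-like smooth penalty; p -> 1 approaches total
    variation, which tolerates the discontinuities seen at sliding interfaces."""
    grads = np.gradient(u)
    mag = np.sqrt(sum(g ** 2 for g in grads) + 1e-12)  # eps for numerical safety
    return np.sum(mag ** p)

# A step-like (sliding) displacement is penalized less as p decreases toward 1
u = np.zeros((32, 32))
u[16:, :] = 5.0
print(total_p_variation(u, 1.1) < total_p_variation(u, 2.0))  # True
```

This is the mechanism behind LaTpV's spatially varying p: voxels near an expected sliding interface get p near 1 (discontinuity tolerated), while interior voxels get p near 2 (smoothness enforced).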
17
Takemura A, Kaido R, Idesako M, Kadman B, Kojima H, Ueda S. [Database of Radiotherapy Plan Image for Deformable Image Registration]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:500-504. [PMID: 32435034 DOI: 10.6009/jjrt.2020_jsrt_76.5.500]
Abstract
Guidelines require commissioning of deformable image registration (DIR) software before clinical use, and DIR accuracy depends on the data used. If common datasets for DIR commissioning are available, DIR results obtained on those datasets can serve as an accuracy benchmark. We therefore developed an openly accessible DIR database (DIR-DB) of radiotherapy plan data for checking DIR accuracy. This study was approved by an Institutional Review Board (IRB). The DIR-DB records radiotherapy plans completed by June 2017 in which at least two plans were built for a case within one treatment course. Cone-beam computed tomography (CBCT) images acquired for patient setup were also collected and recorded in the DIR-DB where available. All recorded data were anonymized, and access is granted to users in Japan with IRB approval. DIR accuracy metrics (Hausdorff distance, mean distance to agreement, Dice similarity coefficient, and Jaccard index) are posted on the DIR-DB website. The database contains 11 head-and-neck, 16 thorax, 7 abdomen, and 8 pelvis cases, 6 prostate cases treated with brachytherapy, and 17 cases with CBCT. Releasing the DIR-DB and the DIR results obtained with its data is meaningful for DIR accuracy checking in Japan.
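Two of the accuracy metrics listed (Dice similarity coefficient and Jaccard index) are simple overlap ratios of binary contour masks. A minimal sketch:

```python
import numpy as np

def dice_and_jaccard(a, b):
    """Overlap metrics between two binary masks of the same structure,
    e.g. a manually drawn contour and a DIR-propagated one."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36-voxel mask
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True   # 24 voxels, fully inside a
d, j = dice_and_jaccard(a, b)
print(round(d, 3), round(j, 3))  # 0.8 0.667
```

The distance-based metrics on the site (Hausdorff distance, mean distance to agreement) complement these overlap ratios by penalizing boundary deviations that small-volume overlap scores can hide.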
Affiliation(s)
- Akihiro Takemura
- Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University
- Ryoto Kaido
- Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University
- Department of Radiology, University of Fukui Hospital
- Miki Idesako
- Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University
- Boriphat Kadman
- Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University
- Hironori Kojima
- Division of Health Sciences, Graduate School of Medical Sciences, Kanazawa University
- Department of Radiology, Kanazawa University Hospital
- Shinichi Ueda
- Department of Radiology, Kanazawa University Hospital
18
Špiclin Ž, McClelland J, Kybic J, Goksel O. An Image Registration Framework for Discontinuous Mappings Along Cracks. Biomedical Image Registration 2020. [PMCID: PMC7279931 DOI: 10.1007/978-3-030-50120-4_16]
Abstract
A novel crack-capable image registration framework is proposed, designed for registration problems suffering from cracks, gaps, or holes. The approach enables discontinuous transformation fields and features an automatically computed crack indicator function, so no pre-segmentation is required. It generalizes the commonly used variational image registration approach. The new contributions are an additional dissipation term in the overall energy, a proper balancing of the different ingredients, and a joint optimization over both the crack indicator function and the transformation. Results on histological serial sections of marmoset brain images demonstrate the potential of the approach and its superiority over a standard registration.
Affiliation(s)
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Jamie McClelland
- Centre for Medical Image Computing, University College London, London, UK
- Jan Kybic
- Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Orcun Goksel
- Computer Vision Lab, ETH Zurich, Zurich, Switzerland
19
Shen Z, Vialard FX, Niethammer M. Region-specific Diffeomorphic Metric Mapping. Advances in Neural Information Processing Systems 2019; 32:1098-1108. [PMID: 36081637 PMCID: PMC9450565]
Abstract
We introduce a region-specific diffeomorphic metric mapping (RDMM) registration approach. RDMM is non-parametric, estimating spatio-temporal velocity fields which parameterize the sought-for spatial transformation. Regularization of these velocity fields is necessary. In contrast to existing non-parametric registration approaches using a fixed spatially-invariant regularization, for example, the large displacement diffeomorphic metric mapping (LDDMM) model, our approach allows for spatially-varying regularization which is advected via the estimated spatio-temporal velocity field. Hence, not only can our model capture large displacements, it does so with a spatio-temporal regularizer that keeps track of how regions deform, which is a more natural mathematical formulation. We explore a family of RDMM registration approaches: 1) a registration model where regions with separate regularizations are pre-defined (e.g., in an atlas space or for distinct foreground and background regions), 2) a registration model where a general spatially-varying regularizer is estimated, and 3) a registration model where the spatially-varying regularizer is obtained via an end-to-end trained deep learning (DL) model. We provide a variational derivation of RDMM, showing that the model can assure diffeomorphic transformations in the continuum, and that LDDMM is a particular instance of RDMM. To evaluate RDMM performance we experiment 1) on synthetic 2D data and 2) on two 3D datasets: knee magnetic resonance images (MRIs) of the Osteoarthritis Initiative (OAI) and computed tomography images (CT) of the lung. Results show that our framework achieves comparable performance to state-of-the-art image registration approaches, while providing additional information via a learned spatio-temporal regularizer. Further, our deep learning approach allows for very fast RDMM and LDDMM estimations. Code is available at https://github.com/uncbiag/registration.
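The velocity-field parameterization used by RDMM and LDDMM turns registration into integrating a flow: the spatial transformation is the time-1 map of the (possibly time-varying) velocity field. A toy forward-Euler integration for a single 1D point illustrates the idea (this is an illustration only, not the paper's numerical scheme):

```python
def integrate_trajectory(v, x0, n_steps=100):
    """Forward-Euler flow of a point under a time-varying velocity field
    v(x, t) over t in [0, 1]; the map x0 -> x(1) is the spatial
    transformation that velocity-field models parameterize."""
    x, dt = float(x0), 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * v(x, t)
    return x

# Constant unit velocity: the time-1 map is a translation by 1
print(round(integrate_trajectory(lambda x, t: 1.0, 0.0), 6))  # 1.0
```

Smoothness of the velocity field at every time guarantees invertibility of the resulting map; what RDMM adds is that the regularizer controlling this smoothness is itself advected along the flow, so it stays attached to deforming regions.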
20
Parande S, Esmaili Torshabi A. A Study on Robustness of Various Deformable Image Registration Algorithms on Image Reconstruction Using 4DCT Thoracic Images. J Biomed Phys Eng 2019; 9:559-568. [PMID: 31750270 PMCID: PMC6820026 DOI: 10.31661/jbpe.v0i0.377]
Abstract
Background Medical image interpolation has recently been introduced as a helpful tool for obtaining additional information from initially available images acquired by tomography systems. To this end, deformable image registration algorithms are mainly utilized to perform image interpolation on tomography images. Materials and Methods In this work, 4DCT thoracic images of five real patients provided by the DIR-Lab group were utilized. Four registration algorithms, 1) original Horn-Schunck, 2) inverse-consistent Horn-Schunck, 3) original Demons, and 4) fast Demons, were implemented by means of the DIRART software package. The calculated vector fields were then processed to reconstruct 4DCT images at any desired time using an optical flow-based interpolation method. As a comparative study, the accuracy of the interpolated image obtained by each strategy was measured by calculating the mean square error between the interpolated image and the real middle image as ground truth. Results The final results demonstrate the ability to accomplish image interpolation between two given paired images. Among the algorithms, inverse-consistent Horn-Schunck reconstructed the interpolated image with the highest accuracy, while the Demons method performed worst. Conclusion Since image interpolation is affected by increasing the distance between the two given images, the performance of the four registration algorithms was investigated with respect to this issue. As a result, inverse-consistent Horn-Schunck does not necessarily perform best, especially in the presence of the large displacements caused by increased distance.
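Optical flow-based interpolation of an intermediate phase amounts to scaling the estimated displacement field by the fractional time and warping. A toy 2D sketch with nearest-neighbor sampling and an assumed linear motion model (an illustration, not the DIRART implementation):

```python
import numpy as np

def warp_nn(img, dvf):
    """Backward warp with nearest-neighbor sampling: out(x) = img(x + dvf(x)).
    dvf: (2, H, W) displacement in pixels."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(np.rint(ys + dvf[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + dvf[1]).astype(int), 0, w - 1)
    return img[sy, sx]

def interpolate_phase(img0, dvf_01, t):
    """Reconstruct the phase at fractional time t in [0, 1] by scaling the
    phase-0 -> phase-1 displacement field, assuming linear motion in time."""
    return warp_nn(img0, t * dvf_01)

# Toy example: a bright pixel moving 4 pixels to the right between phases
img0 = np.zeros((8, 8)); img0[4, 2] = 1.0
dvf = np.zeros((2, 8, 8)); dvf[1, :, :] = -4.0  # backward map: out(x) = img0(x - 4)
mid = interpolate_phase(img0, dvf, 0.5)
print(int(np.argwhere(mid == 1.0)[0][1]))  # bright pixel sits at column 4 at t = 0.5
```

The study's accuracy measure then reduces to the mean square error between such a reconstructed mid-phase image and the actually acquired middle phase.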
Affiliation(s)
- Parande S
- MSc, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Haftbagh St., Kerman, Iran
- Esmaili Torshabi A
- PhD, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Haftbagh St., Kerman, Iran
21
Bashiri FS, Baghaie A, Rostami R, Yu Z, D'Souza RM. Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach. J Imaging 2018; 5:5. [PMID: 34470183 PMCID: PMC8320870 DOI: 10.3390/jimaging5010005]
Abstract
Multi-modal image registration is the primary step in integrating information stored in two or more images captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, the images may have only partial overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates direct application of well-founded mono-modal registration methods to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering strong scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. Using the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm were obtained for registering CT images with PD-, T1-, and T2-MRIs, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering multi-modal, partially overlapped images.
Affiliation(s)
- Fereshteh S. Bashiri
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Ahmadreza Baghaie
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Reihaneh Rostami
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Zeyun Yu
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Roshan M. D'Souza
- Department of Mechanical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
22
Li D, Zhong W, Deh KM, Nguyen TD, Prince MR, Wang Y, Spincemaille P. Discontinuity Preserving Liver MR Registration with 3D Active Contour Motion Segmentation. IEEE Trans Biomed Eng 2018; 66. [PMID: 30418878 PMCID: PMC6565504 DOI: 10.1109/tbme.2018.2880733]
Abstract
OBJECTIVE The sliding motion of the liver during respiration violates the homogeneous motion smoothness assumption in conventional non-rigid image registration and commonly results in compromised registration accuracy. This paper presents a novel approach, registration with 3D active contour motion segmentation (RAMS), to improve registration accuracy with discontinuity-aware motion regularization. METHODS A Markov random field-based discrete optimization with dense displacement sampling and self-similarity context metric is used for registration, while a graph cuts-based 3D active contour approach is applied to segment the sliding interface. In the first registration pass, a mask-free L1 regularization on an image-derived minimum spanning tree is performed to allow motion discontinuity. Based on the motion field estimates, a coarse segmentation finds the motion boundaries. Next, based on MR signal intensity, a fine segmentation aligns the motion boundaries with anatomical boundaries. In the second registration pass, smoothness constraints across the segmented sliding interface are removed by masked regularization on a minimum spanning forest and masked interpolation of the motion field. RESULTS For in vivo breath-hold abdominal MRI data, the motion masks calculated by RAMS are highly consistent with manual segmentations in terms of Dice similarity and bidirectional local distance measure. These automatically obtained masks are shown to substantially improve registration accuracy for both the proposed discrete registration as well as conventional continuous non-rigid algorithms. CONCLUSION/SIGNIFICANCE The presented results demonstrated the feasibility of automated segmentation of the respiratory sliding motion interface in liver MR images and the effectiveness of using the derived motion masks to preserve motion discontinuity.
Affiliation(s)
- Dongxiao Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Wenxiong Zhong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Kofi M. Deh
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Thanh D. Nguyen
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Martin R. Prince
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Yi Wang
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Department of Biomedical Engineering, Cornell University, Ithaca, NY 14853, USA
- Pascal Spincemaille
- Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
23
Gori P, Colliot O, Kacem LM, Worbe Y, Routier A, Poupon C, Hartmann A, Ayache N, Durrleman S. Double Diffeomorphism: Combining Morphometry and Structural Connectivity Analysis. IEEE Trans Med Imaging 2018; 37:2033-2043. [PMID: 29993599 DOI: 10.1109/tmi.2018.2813062]
Abstract
The brain is composed of several neural circuits, which may be seen as anatomical complexes of grey matter structures interconnected by white matter tracts. Grey and white matter components may be modeled as 3-D surfaces and curves, respectively. Neurodevelopmental disorders involve morphological and organizational alterations that cannot be jointly captured by usual shape analysis techniques based on single diffeomorphisms. We propose a new deformation scheme, called the double diffeomorphism, which is a combination of two diffeomorphisms: the first captures changes in structural connectivity, whereas the second recovers the global morphological variations of both grey and white matter structures. This deformation model is integrated into a Bayesian framework for atlas construction. We evaluate it on a dataset of 3-D structures representing the neural circuits of patients with Gilles de la Tourette syndrome (GTS). We show that this approach makes it possible to localise, quantify, and easily visualise the pathological anomalies altering the morphology and organization of the neural circuits. Furthermore, the results also indicate that the proposed deformation model discriminates between controls and GTS patients better than a single diffeomorphism.
24
Gupta V, Wang Y, Méndez Romero A, Myronenko A, Jordan P, Maurer C, Heijmen B, Hoogeman M. Fast and robust adaptation of organs-at-risk delineations from planning scans to match daily anatomy in pre-treatment scans for online-adaptive radiotherapy of abdominal tumors. Radiother Oncol 2018. [DOI: 10.1016/j.radonc.2018.02.014]
25
Papież BW, Franklin JM, Heinrich MP, Gleeson FV, Brady M, Schnabel JA. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications. J Med Imaging (Bellingham) 2018; 5:024001. [PMID: 29662918] [PMCID: PMC5886381] [DOI: 10.1117/1.jmi.5.2.024001]
Abstract
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
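The core substitution, replacing Gaussian smoothing of the displacement field with guided filtering, can be sketched in one dimension. This is a minimal illustration, not the authors' implementation: a standard box-window guided filter regularizes a displacement profile `p` using the image `I` as guidance, so that smoothing is suppressed across an intensity edge where sliding occurs, while a plain moving average blurs the motion discontinuity.

```python
def box_mean(x, r):
    """Mean over a window of radius r, truncated at the array ends."""
    out = []
    for i in range(len(x)):
        win = x[max(0, i - r):i + r + 1]
        out.append(sum(win) / len(win))
    return out

def guided_filter_1d(I, p, r=2, eps=1e-4):
    """Edge-aware smoothing of displacement profile p, guided by image I."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_I = box_mean([v * v for v in I], r)
    corr_Ip = box_mean([v * w for v, w in zip(I, p)], r)
    var_I = [c - m * m for c, m in zip(corr_I, mean_I)]
    cov_Ip = [c - mi * mp for c, mi, mp in zip(corr_Ip, mean_I, mean_p)]
    a = [c / (v + eps) for c, v in zip(cov_Ip, var_I)]           # local gain
    b = [mp - ai * mi for mp, ai, mi in zip(mean_p, a, mean_I)]  # local offset
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [ga * v + gb for ga, v, gb in zip(mean_a, I, mean_b)]

# Two "organs" with a sliding interface at index 8: the image has an intensity
# edge there, and the right half moves by 2 while the left half is static.
I = [0.0] * 8 + [1.0] * 8
p = [0.0] * 8 + [2.0] * 8
q = guided_filter_1d(I, p)   # discontinuity preserved: ~0 left, ~2 right
smoothed = box_mean(p, 2)    # Gaussian-like smoothing blurs the jump (~0.8 at index 7)
```

With plain box smoothing the displacement just left of the interface is averaged across both organs, whereas the guided filter keeps the jump because the guidance image marks the boundary.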
Affiliation(s)
- Bartłomiej W Papież
- University of Oxford, Institute of Biomedical Engineering, Department of Engineering Science, Oxford, United Kingdom
- James M Franklin
- University of Oxford, Department of Oncology, Oxford, United Kingdom
- Fergus V Gleeson
- Oxford University Hospitals NHS Trust, Churchill Hospital, Department of Radiology, Oxford, United Kingdom
- Michael Brady
- University of Oxford, Department of Oncology, Oxford, United Kingdom
- Julia A Schnabel
- University of Oxford, Institute of Biomedical Engineering, Department of Engineering Science, Oxford, United Kingdom; King's College London, School of Biomedical Engineering and Imaging Sciences, London, United Kingdom
26
Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases. Int J Biomed Imaging 2018; 2018:9780349. [PMID: 29805438] [PMCID: PMC5899875] [DOI: 10.1155/2018/9780349]
Abstract
We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details earlier applied to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm² had visible signal and that some metastases as small as 0.1 mm² were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. Tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy.
27
Fu Y, Liu S, Li HH, Li H, Yang D. An adaptive motion regularization technique to support sliding motion in deformable image registration. Med Phys 2018; 45:735-747. [DOI: 10.1002/mp.12734]
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
- Shi Liu
- Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
- H. Harold Li
- Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
- Hua Li
- Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
- Deshan Yang
- Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
28
Szmul A, Papież BW, Hallack A, Grau V, Schnabel JA. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering. Journal of Electronic Imaging 2017; 26:061607. [PMID: 29225433] [PMCID: PMC5722202] [DOI: 10.1117/1.jei.26.6.061607]
Abstract
In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixel-/voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset (www.dir-lab.com) shows that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a target registration error of 1.16 mm on average per landmark.
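The node reduction the authors describe can be illustrated by building the supervoxel adjacency graph on a toy labelling. This is a hedged sketch only: real supervoxels (e.g. from SLIC) are irregular and 3-D, and the graph-cut optimization over this graph is not shown.

```python
def supervoxel_adjacency(labels):
    """Build the adjacency graph of a supervoxel labelling: one node per
    supervoxel label, one edge per pair of labels that touch in the grid."""
    rows, cols = len(labels), len(labels[0])
    edges = set()
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (0, 1)):  # 4-connectivity, forward only
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols and labels[i][j] != labels[ni][nj]:
                    edges.add(tuple(sorted((labels[i][j], labels[ni][nj]))))
    return edges

# A 4x4 image partitioned into 4 supervoxels: 16 voxels collapse to 4 graph nodes.
labels = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [2, 2, 3, 3],
          [2, 2, 3, 3]]
graph = supervoxel_adjacency(labels)  # {(0, 1), (0, 2), (1, 3), (2, 3)}
```

The discrete displacement labels of the registration problem would then be optimized over these 4 nodes and 4 edges instead of 16 voxels.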
Affiliation(s)
- Adam Szmul
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Bartłomiej W. Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Andre Hallack
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Vicente Grau
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Julia A. Schnabel
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, UK
29
Ruhaak J, Polzin T, Heldmann S, Simpson IJA, Handels H, Modersitzki J, Heinrich MP. Estimation of Large Motion in Lung CT by Integrating Regularized Keypoint Correspondences into Dense Deformable Registration. IEEE Transactions on Medical Imaging 2017; 36:1746-1757. [PMID: 28391192] [DOI: 10.1109/tmi.2017.2691259]
Abstract
We present a novel algorithm for the registration of pulmonary CT scans. Our method is designed for large respiratory motion by integrating sparse keypoint correspondences into a dense continuous optimization framework. The detection of keypoint correspondences enables robustness against large deformations by jointly optimizing over a large number of potential discrete displacements, whereas the dense continuous registration achieves subvoxel alignment with smooth transformations. Both steps are driven by the same normalized gradient fields data term. We employ curvature regularization and a volume change control mechanism to prevent foldings of the deformation grid and restrict the determinant of the Jacobian to physiologically meaningful values. Keypoint correspondences are integrated into the dense registration by a quadratic penalty with adaptively determined weight. Using a parallel matrix-free derivative calculation scheme, a runtime of about 5 min was realized on a standard PC. The proposed algorithm ranks first in the EMPIRE10 challenge on pulmonary image registration. Moreover, it achieves an average landmark distance of 0.82 mm on the DIR-Lab COPD database, thereby improving upon the state of the art in accuracy by 15%. Our algorithm is the first to reach the inter-observer variability in landmark annotation on this dataset.
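The structure of such a combined objective can be sketched as follows. The notation and weights are assumed for illustration and are not quoted from the paper: an NGF data term, a curvature regularizer, and a quadratic penalty pulling the dense field $u$ toward the keypoint correspondences $(x_k, y_k)$.

```latex
% Sketch (assumed notation): NGF data term + curvature regularizer
% + quadratic keypoint penalty with adaptively determined weight \beta.
\min_{u}\;
  \mathcal{D}^{\mathrm{NGF}}\!\bigl(F,\; M \circ (\mathrm{id} + u)\bigr)
  \;+\; \alpha \int_{\Omega} \lvert \Delta u(x) \rvert^{2}\, dx
  \;+\; \beta \sum_{k} \bigl\lVert u(x_k) - (y_k - x_k) \bigr\rVert^{2}
```

Here $F$ and $M$ denote the fixed and moving images, and the same data term drives both the sparse keypoint matching and the dense refinement.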
30
Spatial patterns and frequency distributions of regional deformation in the healthy human lung. Biomech Model Mechanobiol 2017; 16:1413-1423. [DOI: 10.1007/s10237-017-0895-5]
31
Dang J, Yin FF, You T, Dai C, Chen D, Wang J. Simultaneous 4D-CBCT reconstruction with sliding motion constraint. Med Phys 2017; 43:5453. [PMID: 27782722] [DOI: 10.1118/1.4959998]
Abstract
PURPOSE Current approaches using deformable vector field (DVF) for motion-compensated 4D-cone beam CT (CBCT) reconstruction typically utilize an isotropically smoothed DVF between different respiration phases. Such isotropically smoothed DVF does not work well if sliding motion exists between neighboring organs. This study investigated an anisotropic motion modeling scheme by extracting organ boundary local motions (e.g., sliding) and incorporated them into 4D-CBCT reconstruction to optimize the motion modeling and reconstruction methods. METHODS Initially, a modified simultaneous algebraic reconstruction technique (mSART) was applied to reconstruct high quality reference phase CBCT using all phase projections. The initial DVFs were precalculated and subsequently updated to achieve the optimized solution. During the DVF update, sliding motion estimation was performed by matching the measured projections to the forward projection of the deformed reference phase CBCT. In this process, each moving organ boundary was first segmented. The normal vectors of the boundary DVF were then extracted and incorporated for further DVF optimization. The regularization term in the objective function adaptively regularizes the DVF by (1) isotropically smoothing the DVF within each organ; (2) smoothing the DVF at the boundary along the normal direction; and (3) leaving the tangent direction of the boundary DVF unsmoothed (i.e., allowing for sliding motion). A nonlinear conjugate gradient optimizer was used. The algorithm was validated on a digital cubic tube phantom with sliding motion, a nonuniform rotational B-spline based cardiac-torso (NCAT) phantom, and two anonymized patient datasets. The relative reconstruction error (RE), the motion trajectory's root mean square error (RMSE) together with its maximum error (MaxE), and the Dice coefficient of the lung boundary were calculated to evaluate the algorithm performance.
RESULTS For the cubic tube and NCAT phantom tests, the REs are 10.2% and 7.4% with sliding motion compensation, compared to 13.4% and 8.9% without sliding modeling. The motion trajectory's RMSE and MaxE for the NCAT phantom tests are 0.5 and 0.8 mm with the sliding motion constraint, compared to 3.5 and 7.3 mm without sliding motion modeling. The Dice coefficients for both the NCAT phantom and the patients show a consistent trend that the sliding motion constraint achieves better similarity for the segmented lung boundary compared with the ground truth or patient reference. CONCLUSIONS A sliding motion-compensated 4D-CBCT reconstruction and motion modeling scheme was developed. Both phantom and patient studies demonstrated improved reconstruction and motion modeling accuracy in the reconstructed 4D-CBCT.
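The anisotropic boundary regularization described above, smoothing the normal component while leaving the tangential (sliding) component free, can be illustrated with a toy decomposition. This sketch, including the 3-point moving average and the sample values, is illustrative only and is not the authors' implementation.

```python
def split_by_normal(u, n):
    """Decompose a 2-D displacement u into components normal and tangential
    to the unit boundary normal n."""
    un = u[0] * n[0] + u[1] * n[1]
    normal = (un * n[0], un * n[1])
    tangential = (u[0] - normal[0], u[1] - normal[1])
    return normal, tangential

def regularize_boundary(dvf, n):
    """Smooth only the normal component of boundary displacements with a
    3-point moving average; leave the tangential (sliding) part untouched."""
    parts = [split_by_normal(u, n) for u in dvf]
    normals = [p[0] for p in parts]
    tangs = [p[1] for p in parts]
    out = []
    for i in range(len(dvf)):
        lo, hi = max(0, i - 1), min(len(dvf), i + 2)
        nx = sum(v[0] for v in normals[lo:hi]) / (hi - lo)
        ny = sum(v[1] for v in normals[lo:hi]) / (hi - lo)
        out.append((nx + tangs[i][0], ny + tangs[i][1]))
    return out

# Boundary normal points in +y; displacements slide in x with a noisy y part.
n = (0.0, 1.0)
dvf = [(2.0, 0.0), (2.5, 0.3), (3.0, 0.0)]
reg = regularize_boundary(dvf, n)
```

After regularization the sliding (x) components are unchanged, while the noisy normal (y) component of the middle sample is averaged toward its neighbours.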
Affiliation(s)
- Jun Dang
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705; Department of Medical Physics, Duke Kunshan University, Kunshan 215316, China
- Tao You
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
- Chunhua Dai
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
- Deyu Chen
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
- Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390
32
Hua R, Pozo JM, Taylor ZA, Frangi AF. Multiresolution eXtended Free-Form Deformations (XFFD) for non-rigid registration with discontinuous transforms. Med Image Anal 2017; 36:113-122. [PMID: 27894001] [DOI: 10.1016/j.media.2016.10.008]
33
Kurz C, Kamp F, Park YK, Zöllner C, Rit S, Hansen D, Podesta M, Sharp GC, Li M, Reiner M, Hofmaier J, Neppl S, Thieke C, Nijhuis R, Ganswindt U, Belka C, Winey BA, Parodi K, Landry G. Investigating deformable image registration and scatter correction for CBCT-based dose calculation in adaptive IMPT. Med Phys 2016; 43:5635. [DOI: 10.1118/1.4962933]
34
Mang A, Biros G. Constrained H1-regularization schemes for diffeomorphic image registration. SIAM Journal on Imaging Sciences 2016; 9:1154-1194. [PMID: 29075361] [PMCID: PMC5654641] [DOI: 10.1137/15m1010919]
Abstract
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss-)Newton-Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient.
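The optimal control formulation can be sketched as follows; the notation is assumed for illustration and not quoted from the paper. The inversion is for a stationary velocity $v$ and a mass source $w$, with Tikhonov regularization and an explicit constraint on the divergence of $v$.

```latex
% Sketch (assumed notation): distance between the transported template
% m(\cdot,1) and the reference m_R, regularized velocity and mass source,
% subject to the transport equation and the divergence constraint.
\min_{v,\,w}\;
  \tfrac{1}{2}\int_{\Omega} \bigl(m(x,1) - m_{R}(x)\bigr)^{2}\, dx
  \;+\; \beta_{v}\, \mathcal{R}(v) \;+\; \beta_{w}\, \mathcal{S}(w)
\quad\text{s.t.}\quad
  \partial_{t} m + \nabla m \cdot v = 0, \;\;
  m(\cdot,0) = m_{T}, \;\;
  \nabla \cdot v = w
```

Here $\mathcal{R}$ is the $H^1$- or $H^2$-seminorm regularizer on $v$; forcing $w \equiv 0$ gives the incompressible, Stokes-like case, while allowing nonzero $w$ controls the determinant of the deformation gradient.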
Affiliation(s)
- Andreas Mang
- The Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas, 78712-0027, US
- George Biros
- The Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas, 78712-0027, US
35
Zhong Z, Gu X, Mao W, Wang J. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling. Phys Med Biol 2016; 61:996-1020. [PMID: 26758496] [PMCID: PMC5026392] [DOI: 10.1088/0031-9155/61/3/996]
Abstract
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations.
Affiliation(s)
- Zichun Zhong
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
- Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
- Weihua Mao
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
- Jing Wang
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75235, USA
36
Hodneland E, Hanson E, Munthe-Kaas AZ, Lundervold A, Nordbotten JM. Physical Models for Simulation and Reconstruction of Human Tissue Deformation Fields in Dynamic MRI. IEEE Trans Biomed Eng 2016; 63:2200-10. [PMID: 26742122] [DOI: 10.1109/tbme.2015.2514262]
Abstract
OBJECTIVE Medical image registration can be formulated as a tissue deformation problem, where parameter estimation methods are used to obtain the inverse deformation. However, there is limited knowledge about the ability to recover an unknown deformation. The main objective of this study is to estimate the quality of a restored deformation field obtained from image registration of dynamic MR sequences. METHODS We investigate the behavior of forward deformation models of various complexities. Further, we study the accuracy of restored inverse deformations generated by image registration. RESULTS We found that the choice of 1) heterogeneous tissue parameters and 2) a poroelastic (instead of elastic) model had significant impact on the forward deformation. In the image registration problem, both 1) and 2) were found not to be significant. Here, the presence of image features were dominating the performance. We also found that existing algorithms will align images with high precision while at the same time obtain a deformation field with a relative error of 40%. CONCLUSION Image registration can only moderately well restore the true deformation field. Still, estimation of volume changes instead of deformation fields can be fairly accurate and may represent a proxy for variations in tissue characteristics. Volume changes remain essentially unchanged under choice of discretization and the prevalence of pronounced image features. SIGNIFICANCE We suggest that image registration of high-contrast MR images has potential to be used as a tool to produce imaging biomarkers sensitive to pathology affecting tissue stiffness.
37
Zhen X, Chen H, Yan H, Zhou L, Mell LK, Yashar CM, Jiang S, Jia X, Gu X, Cervino L. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images. Phys Med Biol 2015; 60:2981-3002. [DOI: 10.1088/0031-9155/60/7/2981]
38
Derksen A, Heldmann S, Polzin T, Berkels B. Image Registration with Sliding Motion Constraints for 4D CT Motion Correction. ACTA ACUST UNITED AC 2015. [DOI: 10.1007/978-3-662-46224-9_58]
39
Papież BW, Heinrich MP, Fehrenbach J, Risser L, Schnabel JA. An implicit sliding-motion preserving regularisation via bilateral filtering for deformable image registration. Med Image Anal 2014; 18:1299-311. [PMID: 24968741] [DOI: 10.1016/j.media.2014.05.005]
Abstract
Several biomedical applications require accurate image registration that can cope effectively with complex organ deformations. This paper addresses this problem by introducing a generic deformable registration algorithm with a new regularization scheme, which is performed through bilateral filtering of the deformation field. The proposed approach is primarily designed to handle smooth deformations both between and within body structures, and also more challenging deformation discontinuities exhibited by sliding organs. The conventional Gaussian smoothing of deformation fields is replaced by a bilateral filtering procedure, which compromises between the spatial smoothness and local intensity similarity kernels, and is further supported by a deformation field similarity kernel. Moreover, the presented framework does not require any explicit prior knowledge about the organ motion properties (e.g. segmentation) and therefore forms a fully automated registration technique. Validation was performed using synthetic phantom data and publicly available clinical 4D CT lung data sets. In both cases, the quantitative analysis shows improved accuracy when compared to conventional Gaussian smoothing. In addition, we provide experimental evidence that masking the lungs in order to avoid the problem of sliding motion during registration performs similarly in terms of the target registration error when compared to the proposed approach, however it requires accurate lung segmentation. Finally, quantification of the level and location of detected sliding motion yields visually plausible results by demonstrating noticeable sliding at the pleural cavity boundaries.
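Bilateral filtering of a deformation field can be sketched in one dimension. This toy example is not the authors' code (which additionally uses a deformation-field similarity kernel); it combines only the spatial and image-intensity kernels, so that smoothing stops at a sliding boundary without any segmentation.

```python
import math

def bilateral_regularize(u, img, radius=3, sigma_s=2.0, sigma_i=0.1):
    """One bilateral-regularization pass over a 1-D displacement field u:
    each sample is averaged with neighbours weighted by spatial distance
    AND image-intensity similarity, so smoothing stops at organ boundaries."""
    out = []
    for i in range(len(u)):
        acc, wsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(u), i + radius + 1)):
            w = math.exp(-(i - j) ** 2 / (2 * sigma_s ** 2)
                         - (img[i] - img[j]) ** 2 / (2 * sigma_i ** 2))
            acc += w * u[j]
            wsum += w
        out.append(acc / wsum)
    return out

# Step image (two organs); the right half slides by 2; one noisy sample on the left.
img = [0.0] * 8 + [1.0] * 8
u = [0.0] * 8 + [2.0] * 8
u[3] = 0.5  # registration noise within the static organ
v = bilateral_regularize(u, img)
```

Within an organ the noise is averaged away, while the displacement jump at the intensity edge survives because cross-boundary neighbours receive near-zero intensity weights.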
Affiliation(s)
- Bartłomiej W Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Jérome Fehrenbach
- Institut de Mathématiques de Toulouse (UMR 5219), Université Paul Sabatier, France
- Laurent Risser
- Institut de Mathématiques de Toulouse (UMR 5219), CNRS, France
- Julia A Schnabel
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
40
Aganj I, Reuter M, Sabuncu MR, Fischl B. Avoiding symmetry-breaking spatial non-uniformity in deformable image registration via a quasi-volume-preserving constraint. Neuroimage 2014; 106:238-51. [PMID: 25449738] [DOI: 10.1016/j.neuroimage.2014.10.059]
Abstract
The choice of a reference image typically influences the results of deformable image registration, thereby making it asymmetric. This is a consequence of a spatially non-uniform weighting in the cost function integral that leads to general registration inaccuracy. The inhomogeneous integral measure, which is the local volume change in the transformation and thus varies through the course of the registration, causes image regions to contribute differently to the objective function. More importantly, the optimization algorithm is allowed to minimize the cost function by manipulating the volume change, instead of aligning the images. The approaches that restore symmetry to deformable registration successfully achieve inverse-consistency, but do not eliminate the regional bias that is the source of the error. In this work, we address the root of the problem: the non-uniformity of the cost function integral. We introduce a new quasi-volume-preserving constraint that allows for volume change only in areas with well-matching image intensities, and show that such a constraint puts a bound on the error arising from spatial non-uniformity. We demonstrate the advantages of adding the proposed constraint to standard (asymmetric and symmetrized) demons and diffeomorphic demons algorithms through experiments on synthetic images, and real X-ray and 2D/3D brain MRI data. Specifically, the results show that our approach leads to image alignment with more accurate matching of manually defined neuroanatomical structures, better tradeoff between image intensity matching and registration-induced distortion, improved native symmetry, and lower susceptibility to local optima. In summary, the inclusion of this space- and time-varying constraint leads to better image registration along every dimension that we have measured it.
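The source of the asymmetry can be sketched with an assumed notation (not quoted from the paper): viewed in the moving frame, the similarity integral is weighted by the local volume change $\lvert D\varphi \rvert$ of the transformation, which the optimizer can manipulate instead of aligning the images.

```latex
% Sketch (assumed notation): the integral measure carries the local volume
% change |D\varphi|, so shrinking a poorly matched region reduces the cost
% without improving alignment.
E(\varphi) \;=\; \int_{\Omega} \bigl(F(x) - M(\varphi(x))\bigr)^{2}\,
  \lvert D\varphi(x) \rvert \, dx
```

The quasi-volume-preserving constraint then permits $\lvert D\varphi(x) \rvert$ to deviate from 1 only where $F(x)$ and $M(\varphi(x))$ already agree, so mismatched regions cannot be shrunk out of the integral.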
Affiliation(s)
- Iman Aganj
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149, 13th St., Room 2301, Charlestown, MA 02129, USA
- Martin Reuter
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149, 13th St., Room 2301, Charlestown, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, MA 02139, USA
- Mert R Sabuncu
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149, 13th St., Room 2301, Charlestown, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, MA 02139, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149, 13th St., Room 2301, Charlestown, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, MA 02139, USA; Harvard-MIT Division of Health Sciences and Technology, 77 Massachusetts Ave., Room E25-519, Cambridge, MA 02139, USA
41
Ou Y, Akbari H, Bilello M, Da X, Davatzikos C. Comparative evaluation of registration algorithms in different brain databases with varying difficulty: results and insights. IEEE Transactions on Medical Imaging 2014; 33:2039-65. [PMID: 24951685] [PMCID: PMC4371548] [DOI: 10.1109/tmi.2014.2330355]
Abstract
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations.
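Landmark-based accuracy measurement of the kind used in this evaluation reduces to a target registration error (TRE): the mean distance between registered landmark positions and their expert-annotated targets. A minimal sketch follows; the landmark coordinates are made up for illustration.

```python
def target_registration_error(warped, targets):
    """Mean Euclidean distance (same unit as the coordinates, e.g. mm)
    between corresponding landmarks after registration."""
    assert len(warped) == len(targets)
    dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
             for p, q in zip(warped, targets)]
    return sum(dists) / len(dists)

# Hypothetical expert landmarks (mm) and their positions after registration.
targets = [(10.0, 20.0, 30.0), (12.0, 22.0, 31.0)]
warped = [(10.0, 20.0, 33.0), (12.0, 26.0, 31.0)]
tre = target_registration_error(warped, targets)  # (3.0 + 4.0) / 2 = 3.5 mm
```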